Greatest Philosopher in History

1 Carinthium 09 August 2013 12:50PM

Since LessWrong is a major congregation point for certain philosophical ideas, and because people here tend to be more objective (in the sense of not being self-deluded) than elsewhere, I thought I'd ask people's views.

To be clear, by "Greatest Philosopher" I am referring not to the most correct philosopher in human history but the one who deserves the most credit for advancing human philosophy towards being more true.

Off the top of my head I would say a prime candidate is Hume: among other things, he rejected the idea of a soul, recognised the limits of human knowledge to a far greater extent than his predecessors, and opposed the idea that reason is an objective force that can set priorities independently of the emotions.

Aristotle deserves considerable credit relative to his time, but doesn't make the list: although it wasn't his fault, his ideas were later accepted dogmatically and held back both science and philosophy.

Your thoughts?

[LINK] XKCD Comic #1236, Seashells and Bayes' Theorem

-7 Petruchio 10 July 2013 11:05AM

A fun comic about seashells and Bayes' Theorem. http://xkcd.com/1236/

HPMoR the Youtube Series! But in need of advice

-1 wobster109 09 July 2013 02:50PM

Hi Less Wrong! I was wondering whether any of you have experience with video editing. I want to record footage and a soundtrack and overlay them on each other, and I'll also need to do special effects, such as making someone appear to float in the air. Is there a video editing program you'd recommend?

 

Edit - Please let me know if you'd like to act in it and are able to get to Madison, WI on weekends :)

Karma as Money

0 diegocaleiro 02 June 2013 01:46AM

How do you combine a theory of counterfactuals, karma, and economics into a revised algorithm for thinking about Lesswrong?

Thinking of Karma as money. 

There are a lot of things that one may consider worth saying on Lesswrong. Things that go against the agenda, things that may make people uncomfortable, things that are different from what the high-ranking officials would prefer to read here. But we don't say them, because we don't want to "lose" precious Karma points. Each Karma point lost is felt as an insecurity, as a tiny arrow penetrating the chest. But should it be that way?

Here is the alternative: think of Karma as money. You work hard to earn a few karma points by writing interesting stuff on superintelligence and whatnot, and society rewards you by paying some karma points. Then you write something you think people need to hear, but will surely downvote, at least initially. Some people by now will be very rich, which affords them the opportunity of saying a lot of things that they are not sure will get upvoted, but are sure should be posted.

Citizen: Wait, you said counterfactuals...

Yes: just as your State doesn't really care for you taking your hovercraft out on the river or using equipment to climb a mountain, the people here may not care to give their attention to that idea which you think they should hear. Thus, they downvote it. They make you pay for their attention. If you mentalize it as "they are draining my soul, and life is worthless if karma is negative", then you are much less likely to end up posting something controversial that may be counterfactually relevant.

Just as efficient charitable donation works because the vast majority of people are not paying to effectively make others happier, using karma as money works because the vast majority of people are afraid their soul is being sucked away every time a downvote comes. But it isn't; that is just the price people charge for their attention, if you think about it the way I'm tentatively suggesting. It is just a test worth trying, not necessarily something that I fully endorse. I like the idea, and have been using it since forever. Every post linked here, or an earlier subpart of it, has been negative at some point, and from before posting I knew it would be a "costly" one. Try it: if you are rich, you have little to lose, and more controversial but useful stuff will show up with time.

Let's see how much this costs. 

A cure for akrasia

-2 [deleted] 28 December 2012 07:11PM

Some of you guys have been a little down on philosophy articles lately. This article by Roy Sorensen appeared in Mind in 1997, and it is awesome, therefore all philosophy papers are awesome. 

 

Published in Mind 106/424 (October 1997) 743

A CURE FOR INCONTINENCE!

Tired of being weak-willed?  Do you want to end procrastination and back-sliding?  Are you envious of those paragons of self-control who always do what they consider best?

Thanks to a breakthrough in therapeutic philosophy, you too can now close the gap between what you think you ought to do and what you actually do.  Just send $1000 to the address below and you will never again succumb to temptation.  This is a MONEY-BACK GUARANTEE.  The first time you do something that you know to be irrational, your money will be refunded, no questions asked. Of course, you might nevertheless have some questions.  How can you act incontinently when you know that the "irrational" act will earn you a $1000 refund? Well, that's what's revolutionary in this new cure for incontinence.  

Old approaches focus on punishing the weak willed. This follows the antiquated behaviorist principle that negative reinforcement extinguishes bad behavior.  The new humanitarian approach rewards incontinence -- and lavishly at that.  The key is to make the reward so strongly motivating that an otherwise irrational act becomes rational.

Some may seek a refund on the grounds that the reward for incontinence played no role in their (apparently) incontinent act; although aware of the reward, they would have performed the act anyway.  These folks should distinguish between actual and hypothetical incontinence.  If you act in accordance with your judgement as to what is best overall, then you did nothing irrational.

True, the hypothetical incontinent act is a sign that you have a weak will.  But the presence of this disposition gives you all the more reason to block its manifestation -- by sending $1000. Granted, there are people who cannot be swayed from temptation by a mere $1000.  These recalcitrant individuals are advised to send in more than $1000.  Give until it hurts.

Rush your cheque to:

Dr. Roy Sorensen

Department of Philosophy

New York University

503 Main Building

100 Washington Square East

New York, New York 10003-6688

(Note, address is not current)

[Link] "An OKCupid Profile of a Rationalist"

-16 Athrelon 14 November 2012 01:48AM

The rationalist in question, of course, is our very own EY.

Quotes giving a reasonable sample of the spectrum of reactions:

Epic Fail on the e-harmony profile. He’s over-signalling intelligence. There’s a good paper about how much to optimally signal, like when you have a PhD to put it on your business card or not. This guy is going around giving out business cards that read Prof. Dr. John Doe, PhD, MA, BA. He won’t be getting laid any time soon.

His profile is probably very effective for aspergery girls who like reading the kinds of things that appear on LessWrong. Yudkowsky is basically a celebrity within a small niche of hyper-nerdy rationalists, so I doubt he has much trouble getting laid by girls in that community.

You make it sound like a cult leader or something....And reading the profile again with that lens, it actually makes a lot of sense.

I was about to agree [that the profile is oversharing], but then come to think of it, I realize I have an orgasm denial fetish, too. It’s an aroused preference that never escaped to my non-aroused self-consciousness.

Why is this important to consider? 

LessWrong as a community is dedicated to trying to "raise the sanity waterline," and its most respected members in particular put a lot of resources into outreach, via CFAR, HPMoR, and maintaining this site.  But a big factor in how people perceive our brand of rationality is image.  If we're serious about raising the sanity waterline, that means image management (or at least avoiding active image malpractice) is something we should enthusiastically embrace as a way to achieve our goals. [1]

This is also a valuable exercise in considering the outside view.  Marginal Revolution is already a fairly WEIRD site, focused on abstract economic issues.  If any major blog is likely to be sympathetic to our cultural quirks, this would be it.  Yet a plurality of commenters reacted negatively. 

To the extent that we didn't notice anything strange about LW's figurehead having this OKCupid profile, LW either failed at calibrating mainstream reaction, or failed at consequentialism and realizing the drag this would have on our other recruitment efforts.  In our last discussion, there were only a few commenters raising concerns, and the consensus of the thread was that it was harmless and had no PR consequences worth noting.

As one commenter cogently put it,

I’m not saying that he’s trying to make a statement with this, I’m saying that he is making a statement about this whether he’s trying to or not. Ideas have consequences for how we live our lives, and that Eliezer has a public, identifiable profile up where he talks about his sexual fetishes is not some sort of randomly occurring event with no relationship to his other ideas.

I'd argue the same reasoning applies to the community at large, not just EY specifically.

[1] From Anna's excellent article: 5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")

An Anthropic Principle Fairy Tale

-16 Nominull 28 August 2012 08:48PM

A robot is going on a one-shot mission to a distant world to collect important data needed to research a cure for a plague that is devastating the Earth. When the robot enters hyperspace, it notices some anomalies in the engine's output, but it is too late to get the engine fixed.

When similar anomalies have been observed in other engines, 25% of the time they indicated a fatal problem, such that the engine would explode on virtually every jump. 25% of the time they were a false positive, and the engine exploded only at its normal negligible rate. 50% of the time they indicated a serious problem, such that each jump had about a 50/50 chance of exploding.

Anyway, the robot makes the ten jumps to reach the distant world, and the engine does not explode. Unfortunately, the jump coordinates for the mission were a little off, and the robot is in a bad data-collecting position. It could try one more jump: if the engine doesn't explode, the extra data it collects could save lives. If the engine does explode, however, Earth will get no data from the distant world at all. (The FTL radio is only good for one use, so the robot can't collect data and then jump.)

So how did you program your robot? Did you program it to believe that since the engine worked ten times, the anomaly was probably a false positive, and so it should make the jump? Or did you program it to follow the "Androidic Principle" and disregard the so-called "evidence" of the ten jumps, since it could not have observed any other outcome? People's lives are in the balance here. A little girl is too sick to leave her bed; she doesn't have much time left, and you can hear the fluid in her lungs as she asks you, "Are you aware of the anthropic principle?" Well? Are you?
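For what it's worth, the straightforward Bayesian update in the story is easy to compute. A minimal sketch (the simplifications are mine: "explodes virtually every time" is treated as certain explosion, and the normal negligible failure rate as zero):

```python
# Posterior over the three engine hypotheses after surviving 10 jumps.
priors = {"fatal": 0.25, "false_positive": 0.25, "serious": 0.50}

# Per-jump survival probability under each hypothesis (simplified).
p_survive = {"fatal": 0.0, "false_positive": 1.0, "serious": 0.5}

jumps = 10
unnormalized = {h: priors[h] * p_survive[h] ** jumps for h in priors}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in priors}

# Probability that the eleventh jump also succeeds.
p_next = sum(posterior[h] * p_survive[h] for h in priors)

print(posterior)  # false_positive dominates: ~0.998
print(p_next)     # ~0.999
```

On this toy model, a robot that simply conditions on its survival assigns about 99.9% to surviving one more jump, so it should jump; the anthropic objection in the story is precisely a dispute about whether that conditioning is legitimate.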

Towards Safe Robots: Approaching Asimov's 1st Law

-6 Utopiah 16 August 2012 08:53AM

Towards Safe Robots: Approaching Asimov's 1st Law

http://darwin.bth.rwth-aachen.de/opus3/volltexte/2011/3826/pdf/3826.pdf (via http://www.euron.org )

Despite the title, there is very little theory or philosophy; instead, the focus is on interaction (e.g. in a factory, between a human worker and a robot) and how to minimize risk: soft robotics, crash testing, collisions, and so on.

Abstract

Up to now, state-of-the-art industrial robots played the most important role in real-world applications, and more advanced, highly sensorized robots were usually kept in lab environments and remained at a prototypical stage. Various factors like low robustness and the lack of computing power were large hurdles in realizing robotic systems for highly demanding tasks in e.g. domestic environments or as robotic co-workers. The recent increase in technology maturity finally made it possible to realize systems of high integration, advanced sensorial capabilities and enhanced power to cross this barrier and merge living spaces of humans and robot workspaces to at least a certain extent.

In addition, the increasing effort various companies have invested to realize first commercial service robotics products has made it necessary to properly address one of the most fundamental questions of Human-Robot Interaction:

How to ensure safety in human-robot coexistence?

Although the vision of coexistence itself has always been present, very little effort has been made to actually enforce safety requirements, or to define safety standards up to now.

In this dissertation, the essential question about the necessary requirements for a safe robot is addressed in depth and from various perspectives. The approach taken here focuses on the biomechanical level of injury assessment, addressing the physical evaluation of robot-human impacts and the definition of the major factors that affect injuries during various worst-case scenarios. This assessment is the basis for the design and exploration of various measures to improve the safety in human-robot interaction. They range from control schemes for collision detection and reaction to the investigation of novel joint designs. An in-depth analysis of their contribution to safety in human-robot coexistence is carried out.

In addition to this "on-contact" treatment of human-robot interaction, the thesis proposes and discusses real-time collision avoidance methods, i.e. how to design pre-collision strategies to prevent unintended contact. An additional major outcome of this thesis is the development of a concept for a robotic co-worker and its experimental verification in an industrially relevant real-world scenario. In this context, a control architecture was developed that enables behavior-based access to the robot and provides an easy-to-parameterize interface to the safety capabilities of the robot. In addition, the architecture was applied in various other applications that deal with physical Human-Robot Interaction, e.g. the first robot continuously brain-controlled by a tetraplegic person, or an EMG-controlled robot.

Generally, all aspects discussed in this thesis are fully supported by a variety of experiments and cross-verifications, leading to strong conclusions in this sensitive and eminently important topic. Several surprising and gratifying results, which were received with great interest in the robotics community, were obtained.

In addition to the scientific output, the outcome of this thesis also attracted significant public attention, confirming the importance of the topic for robotics research.

The major parts and contributions of this thesis are described hereafter in more detail. Furthermore, the resulting publications which are an outcome of the work are cited.
