Who are some of the best writers in history?

1 Crux 10 August 2013 09:04AM

We currently have a thread in progress concerning the greatest philosophers in history. This reminded me of a question I've been considering for a while.

Recently I realized that although I've spent a massive percentage of my time over the past few years reading, it's almost always been non-fiction: posts on Less Wrong, posts on other forums or blogs, old treatises on economics or philosophy. I feel as if I've stumbled onto some of the absolute best sources of insight the history of civilization has to offer, and some of the most brilliant thinkers ever to have walked the planet. But it strikes me that the main optimization criterion behind the popularity of most of these pieces of writing and most of these thinkers, and thus the reason they were visible enough for me to run into them, was not necessarily the quality of their writing style, but the level of insight perceived by whatever critical mass of people brought them to the forefront.

In other words, I've mostly run into articles and books that have garnered their fame not through the eloquence of their prose, but through the insight contained within. I've run into plenty of famous, highly insightful thinkers who just aren't very good at writing. They became famous for another reason: the ideas they brought to the table, however incompetently expressed. This is in utter contrast to another section of the history of writing: what we call "fiction", and in some cases "poetry". Although fiction writers and poets are generally expected to bring some insight to the table, perhaps even a lot of it, this certainly isn't the overwhelming criterion determining their fame. Most of the scientists who have become famous for their work have been at least decent at writing, or nobody would have been able to get through their stuff (though there are exceptions). In the same way, the great fiction books of history certainly contain insight; it's just often not what carries them to success.

In non-fiction, the top contenders are usually there for their insight, though their writing is usually at least decent. And in fiction and poetry, the top contenders are usually there for their eloquence and writing style, though their insight is usually at least decent. There's at least one exception I can think of, someone who seems to be civilization-class in both insight and writing style: David Hume. One of the greatest thinkers in history, and certainly also one of the greatest writers in history. Anybody who's read a decent amount of other famous writers and thinkers, and understands some of Hume's key arguments, would have at least some sympathy for that characterization, despite the extreme level of praise I'm bestowing on his work.

So here's my point, and my question: I'm mostly interested in insight, but I'm also interested in communicating that insight as effectively as possible, with the most solid prose and the highest level of eloquence that can be attained. For this, I can't simply limit myself to reading the most insightful, revolutionary non-fiction, because the market test that led to the widespread adoption of that work did not necessarily demand a high level of eloquence or poetic ability. I'll need to read works that became famous for their writing style. So we come to the question: who are some of the best writers in the history of civilization? Who should I read for the purpose of getting better at writing?

(TL;DR: I've spent a lot of time reading non-fiction books, but most of these books became famous for their level of insight, not necessarily for any great competence in writing or eloquence of style. I want to get better at writing, so although I've long ignored fiction, thinking my interest was scientific insight rather than stories, I've changed my mind and realized I should probably be reading some fiction in order to learn from the masters of eloquence. So with that said, here's my question: who are some of the best writers in history?)

Al Jazeera: "Engineering Human Evolution" -- 0h:35m:41s YouTube video.

0 Logos01 29 March 2012 03:00AM

Link here.

The Al Jazeera website link for the video is not included.

A brief synopsis from the Al Jazeera website:

Cyborgs, brain uploads and immortality - How far should science go in helping humans exceed their biological limitations? These ideas might sound like science fiction, but proponents of a movement known as transhumanism believe they are inevitable.

In this episode of The Stream, we talk to bioethicist George Dvorsky; Robin Hanson, a research associate with Oxford’s Future of Humanity Institute; and Ari N. Schulman, senior editor of The New Atlantis, about the ethical implications of transhumanism.

 

Discuss below.

Global warming is a better test of irrationality than theism

-2 Stuart_Armstrong 16 March 2012 05:10PM

Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.

Theism is a symptom of excess compartmentalisation, of not realising that absence of evidence is evidence of absence, of belief in belief, of privileging the hypothesis, and of similar failings. But these are not intrinsically huge problems. Indeed, someone with a mild case of theism can have the same anticipations as someone without, and update on evidence in the same way. If they have moved their belief beyond refutation, then in theory it fails to constrain their anticipations at all; and this is often the case in practice.

Contrast that with someone who denies the existence of anthropogenic global warming (AGW). This has all the signs of hypothesis privileging, but also reeks of fake justification, motivated skepticism, massive overconfidence (if they are truly ignorant of the facts of the debate), and simply the raising of politics above rationality. If I knew someone was a global warming skeptic, then I would expect them to be wrong in their beliefs and their anticipations, and to refuse to update when evidence worked against them. I would expect their judgement to be much more impaired than a theist's.

Of course, reverse stupidity isn't intelligence: simply accepting AGW doesn't make one more rational. I work in England, in a university environment, so my acceptance of AGW is the default position and not a sign of rationality. But if someone is in a milieu that discourages belief in AGW (one stereotype being heavily Republican areas of the US) and has risen above this, then kudos to them: their acceptance of AGW is indeed a sign of rationality.

Journal of Consciousness Studies issue on the Singularity

14 lukeprog 02 March 2012 03:56PM

...has finally been published.

Contents:

The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles from Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, Burt Voorhees, and a response from Chalmers.

McDermott's chapter should be supplemented with this, which he says he didn't have space for in his JCS article.

Shit Rationalists Say?

30 Eneasz 25 January 2012 09:51PM

I assume everyone has run across at least one of the "Shit X's Say" videos, such as Shit Skeptics Say? When done right they totally trigger the in-group warm-fuzzies. (Not to be confused with the nearly identically formatted "Shit X's Say to Y's", which is mainly a way for Y's to complain about X's.)

What sort of things do Rationalists often say that trigger this sort of in-group recognition and could be popped into a short video? A few I can think of...

You should sign up for cryonics. I want to see you in the future.

…intelligence explosion…

What’s your confidence interval?

You know what they say: one man’s Modus Ponens is another man’s Modus Tollens

This may sound a bit crazy right now, but hear me out…

What are your priors?

When the singularity comes that won’t be a problem anymore.

I like to think I’d do that, but I don’t fully trust myself. I am running on corrupted hardware after all.

I want to be with you, and I don’t foresee that changing in the near future.

…Bayesian statistics…

So Omega appears in front of you…

What would you say the probability of that event is, if your beliefs are true?

 

Others?

More intuitive explanations!

22 XiXiDu 06 January 2012 06:10PM

The post with two easy-to-grasp explanations of Gödel's theorem and the Banach-Tarski paradox made me think of other explanations that I found easy or insightful, so I thought I would share them as well.

1) Here is a nice proof of the Pythagorean theorem.

2) An easy and concise explanation of expected utility calculations by Luke Muehlhauser (a short code sketch of this example appears after this list):

Decision theory is about choosing among possible actions based on how much you desire the possible outcomes of those actions.

How does this work? We can describe what you want with something called a utility function, which assigns a number that expresses how much you desire each possible outcome (or “description of an entire possible future”). Perhaps a single scoop of ice cream has 40 “utils” for you, the death of your daughter has -⁠274,000 utils for you, and so on. This numerical representation of everything you care about is your utility function.

We can combine your probabilistic beliefs and your utility function to calculate the expected utility for any action under consideration. The expected utility of an action is the average utility of the action’s possible outcomes, weighted by the probability that each outcome occurs.

Suppose you’re walking along a freeway with your young daughter. You see an ice cream stand across the freeway, but you recently injured your leg and wouldn’t be able to move quickly across the freeway. Given what you know, if you send your daughter across the freeway to get you some ice cream, there’s a 60% chance you’ll get some ice cream, a 5% chance your child will be killed by speeding cars, and other probabilities for other outcomes.

To calculate the expected utility of sending your daughter across the freeway for ice cream, we multiply the utility of the first outcome by its probability: 0.6 × 40 = 24. Then, we add to this the product of the next outcome’s utility and its probability: 24 + (0.05 × -⁠274,000) = -⁠13,676. And suppose the sum of the products of the utilities and probabilities for other possible outcomes was 0. The expected utility of sending your daughter across the freeway for ice cream is thus very low (as we would expect from common sense). You should probably take one of the other actions available to you, for example the action of not sending your daughter across the freeway for ice cream — or, some action with even higher expected utility.

A rational agent aims to maximize its expected utility, because an agent that does so will on average get the most possible of what it wants, given its beliefs and desires.

3) Micro- and macroevolution visualized.

4) Slopes of Perpendicular Lines.

5) Proof of Euler's formula using power series expansions.

6) Proof of the Chain Rule.

7) Multiplying Negatives Makes A Positive.

8) Completing the Square and Derivation of Quadratic Formula.

9) Quadratic factorization.

10) Remainder Theorem and Factor Theorem.

11) Combinations with repetitions.

12) Löb's theorem.
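(For anyone who wants to play with Luke's freeway example from item 2 above, here is a minimal Python sketch of the same expected-utility calculation. The probabilities and utilities are the ones quoted there, including the assumption that the remaining outcomes sum to 0 utility; the alternative "don't send" action and the function name are just illustrative assumptions of mine.)

# Expected utility of an action = sum over outcomes of P(outcome) * U(outcome).
# Numbers follow the freeway example quoted above.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

send_daughter = [
    (0.60, 40),        # you get some ice cream: +40 utils
    (0.05, -274_000),  # your daughter is killed by speeding cars: -274,000 utils
    (0.35, 0),         # all other outcomes, assumed (as in the quote) to sum to 0
]

dont_send = [(1.00, 0)]  # illustrative assumption: nothing changes, 0 utils

print(expected_utility(send_daughter))  # 0.6*40 + 0.05*(-274000) = -13676.0
print(expected_utility(dont_send))      # 0.0 -- higher, so prefer this action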


 

Open Problems Related to the Singularity (draft 1)

39 lukeprog 13 December 2011 10:57AM

"I've come to agree that navigating the Singularity wisely is the most important thing humanity can do. I'm a researcher and I want to help. What do I work on?"

The Singularity Institute gets this question regularly, and we haven't published a clear answer to it anywhere. This is because it's an extremely difficult and complicated question. A large expenditure of limited resources is required to make a serious attempt at answering it. Nevertheless, it's an important question, so we'd like to work toward an answer.

continue reading »

Singularity Institute mentioned on Franco-German TV

10 XiXiDu 07 November 2011 02:14PM

The following is a clipping from a documentary about transhumanism that I recorded when it aired on Arte on September 22, 2011.

At the beginning and end of the video Luke Muehlhauser and Michael Anissimov give a short commentary.

Download here: German, French (ask for HD download link). Should play with VLC player.

Sadly, the people who produced the show seem to have been somewhat confused about the agenda of the Singularity Institute. At one point they seem to be saying that the SIAI believes in "the good in the machines", adding "how naive!", while the next sentence talks about how the SIAI tries to figure out how to make machines respect humans.

Here is the original part of the clip that I am talking about:

In San Francisco glaubt eine Vereinigung ehrenamtlicher junger Wissenschaftler dennoch an das Gute im Roboter. Wie naiv! Hier im Singularity Institute, dass Kontakte zu den großen Unis wie Oxford hat, zerbricht man sich den Kopf darüber, wie man zukünftigen Formen künstlicher Intelligenz beibringt, den Menschen zu respektieren.

Die Forscher kombinieren Daten aus Informatik und psychologischen Studien. Ihr Ziel: Eine Not-to-do-Liste, die jedes Unternehmen bekommt, das an künstlicher Intelligenz arbeitet.

My translation:

In San Francisco, however, an association of young volunteer scientists believes in the good in robots. How naive! Here at the Singularity Institute, which has contacts with big universities like Oxford, they rack their brains over how to teach future forms of artificial intelligence to respect humans. The researchers combine data from computer science and psychological studies. Their goal: a not-to-do list that every company working on artificial intelligence will receive.

I am a native German speaker, by the way; maybe someone else who speaks German can make more sense of it (and is willing to translate the whole clip).

Selection Effects in estimates of Global Catastrophic Risk

22 bentarm 04 November 2011 09:14AM

Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year). 

It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem: Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risk rather than AI risk because he had a higher-than-average estimate of the threat of nuclear war.

It seems like it would be good to know what the probability of each of these risks actually is. Is there a sensible way to correct for the fact that the people studying these risks are those who had high estimates of them in the first place?
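(One way to see the force of the selection effect is a tiny simulation; the true risk, noise level, and cut-off below are made-up numbers of mine, not estimates from either organisation. Give everyone a noisy private estimate of a risk, let only the people with the highest estimates go on to study it full time, and the specialists' average estimate ends up well above the true risk.)

import random

random.seed(0)

TRUE_RISK = 0.01   # assumed "true" annual probability, made up for illustration
N_PEOPLE = 10_000
NOISE = 1.0        # spread of multiplicative (log-normal) noise in private estimates

# Everyone forms a noisy private estimate of the risk.
estimates = [min(1.0, TRUE_RISK * random.lognormvariate(0, NOISE)) for _ in range(N_PEOPLE)]

# Only people who think the risk is large devote their careers to studying it.
specialists = [e for e in estimates if e > 0.05]

print(f"true risk:            {TRUE_RISK:.3f}")
print(f"population average:   {sum(estimates) / len(estimates):.3f}")
print(f"specialists' average: {sum(specialists) / len(specialists):.3f}")  # well above the truth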

Satisficers want to become maximisers

21 Stuart_Armstrong 21 October 2011 04:27PM

(with thanks to Daniel Dewey, Owain Evans, Nick Bostrom, Toby Ord and BruceyB)

In theory, a satisficing agent has a lot to recommend it. Unlike a maximiser, which will attempt to squeeze every drop of utility it can out of the universe, a satisficer will be content when it reaches a certain level of expected utility (a satisficer that is content with a certain level of utility, rather than expected utility, is simply a maximiser with a bounded utility function). For instance, a satisficer with a utility function linear in paperclips and a target level of 9 will be content once it's 90% sure that it has built ten paperclips, and will not try to optimize the universe either to build more paperclips (unbounded utility) or to obsessively count the ones it already has (bounded utility).

Unfortunately, a self-improving satisficer has an extremely easy way to reach its satisficing goal: to transform itself into a maximiser. This is because, in general, if E denotes expectation,

E(U | there exists an agent A maximising U)  ≥  E(U | there exists an agent A satisficing U)

How is this true (apart from the special case where other agents penalise you specifically for being a maximiser)? Well, agent A will have to make decisions, and if it is a maximiser, it will always make the decision that maximises expected utility. If it is a satisficer, it will sometimes not make that decision, leading to lower expected utility in those cases.
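(A toy numerical example, purely illustrative and not from the argument above, makes the inequality concrete: among a fixed set of actions with known expected utilities, a maximiser always takes the best one, while a satisficer is content to pick any action that clears its target, so its expected utility can never be higher.)

import random

# Toy decision problem: each action's expected utility, with made-up numbers.
# "build_10_clips" is worth 9.0, matching the example of being 90% sure of
# ten paperclips with a target level of 9.
actions = {"build_10_clips": 9.0, "build_12_clips": 10.8, "idle": 0.0}
TARGET = 9.0  # satisficing threshold

def maximiser_choice(acts):
    # Always take the action with the highest expected utility.
    return max(acts, key=acts.get)

def satisficer_choice(acts, target):
    # Content with any action that clears the target; pick one arbitrarily.
    good_enough = [a for a, eu in acts.items() if eu >= target]
    return random.choice(good_enough) if good_enough else maximiser_choice(acts)

max_eu = actions[maximiser_choice(actions)]
sat_eu = sum(actions[satisficer_choice(actions, TARGET)] for _ in range(10_000)) / 10_000

print(max_eu)            # 10.8
print(round(sat_eu, 2))  # roughly 9.9: the satisficer sometimes settles for the 9.0 action
assert max_eu >= sat_eu  # the inequality above, in miniature

The same numbers show why self-modification is tempting: "rewrite myself as a maximiser and let it choose" is itself a strategy worth 10.8 here, which clears the target of 9 just as well as anything else the satisficer could do.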

Hence if there were a satisficing agent for U, and it had some strategy S to accomplish its goal, then another way to accomplish this would be for it to transform itself into a maximising agent and let that agent implement S. If S is complicated, and transforming itself is simple (which would be the case for a self-improving agent), then self-transforming into a maximiser is the easier way to go.

So unless we have exceedingly well-programmed criteria banning the satisficer from using any variant of this technique, we should assume satisficers are likely to be just as dangerous as maximisers.

Edited to clarify the argument for why a maximiser maximises better than a satisficer.

Edit: See BruceyB's comment for an example where a (non-timeless) satisficer would find rewriting itself as a maximiser to be the only good strategy. Hence timeless satisficers would behave as maximisers anyway (in many situations). Furthermore, a timeless satisficer with bounded rationality may find that rewriting itself as a maximiser is a useful precaution to take, if it is not sure it can precalculate all the correct strategies.
