The best 15 words

12 apophenia 03 October 2013 09:08AM

People want to tell everything instead of telling the best 15 words.  They want to learn everything instead of the best 15 words.  In this thread, instead post the best 15 words from a book you've read recently (or from anything else).  It has to stand on its own: it's not a summary, the whole value needs to be contained in those words.

 

  • It doesn't need to cover everything in the book, it's just the best 15 words.
  • It doesn't need to be a quote, it's just the best 15 words.
  • It doesn't have to be 15 words long, it's just the best "15" words.
  • It doesn't have to be precisely true, it's just the best 15 words.
  • It doesn't have to be the main 15 words, it just has to be the best 15 words.
  • It doesn't have to be the author's 15 words, it just has to be the best 15 words.
  • Edit: It shouldn't just be a neat quote--the point of the exercise is to struggle to move from a book down to 15 words.

 

I'll start in the comments below.

(Voted by the Schelling study group as the best exercise of the meeting.)

Open Request for Writing Assistance

1 apophenia 11 April 2013 12:53AM

I have several rough drafts of things I'd like to post to Less Wrong.  The one I'm currently working on is about Solomonoff Induction.  I seem to be best motivated by active feedback while writing, so this thread is mainly to request feedback on future writing.  If you'd be interested in reading what I'm writing every few paragraphs (either because you'd find it interesting or in order to cause it to be written), I would very much appreciate that.

As long as I'm making that request, I might as well make two more: I would also like to hire someone who can edit writing for flow, and someone who can copy-edit.  These could be two people, or maybe you're amazing and can be both.  I'm willing to pay around $10/hr to anyone interested.  If you're editing for flow, I'd like to see a sample of your writing.

Thanks in advance to any volunteer readers!

Applied Rationality: Group Problem Solving Session

7 apophenia 08 February 2011 02:06PM

This is a discussion thread about applied rationality.

In the comments, please describe an actual problem in your life that you want to solve.  Using its combined powers of rationality, the community can discuss the problem with the poster and eventually propose solutions.  My hope is that this will give Less Wrongers a better idea of how to apply rationality to daily life.

A note to those posting problems:  Please stick around long enough to try the solutions and comment on how they worked.  Remember to report negative results (the solution didn't work), including mini-failures like meaning to try a solution and never getting around to it.

 

Edit: Ack!  I see Alicorn posted something similar about common-knowledge problems between when I wrote this and when I posted it.  Because I have a general policy against deleting my posts, I will leave this up here.  I do think rationality-specific solutions are useful, but let's wait a month or so.

How to make your intuitions cost-sensitive

21 apophenia 08 February 2011 09:59AM

I recently wanted to keep track of my income and expenses in a cost-sensitive way.  I am not very good at treating money as a real object, and very few people are good at valuing an expense appropriately.  I'd been having some financial difficulties as a result, so I wanted to be able to reason about what to cut or reallocate in a sensible way.  For me, sensible means using intuition rather than the kind of hard rules a computer program would apply.

I took several sheets of grid paper and taped them together.  Using colored markers, I drew in my expenses.  If I spent $50 at the grocery store, I would draw a blue box surrounding 50 squares on the grid paper and label it "Groceries".  I color-coded the expenses, but this is optional.  I left some white squares representing my savings.  I kept a whole empty sheet where I could pencil in incoming money as I worked an hourly job, to motivate myself (I work from home and need to self-motivate).  I realized certain things were a bigger deal than I thought, and that I didn't need to fret about other expenses as much as I had been.  I think humans are intuitively better at visualizing than at dealing with numbers.  My main tips for this project: use a felt-tip marker so the lines really stand out, and do it by hand instead of on a computer, so nothing moves around on a "redraw" and you learn the contents as you make it.  Also, I used a scale of $1 = 1 square, but if you have a lot more or less money than me you could use a different scale or omit savings.
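(If you want to preview the proportions before committing marker to paper, here is a rough digital sketch of the sheet in Python with matplotlib.  The expense names, amounts, and colors are invented examples, and the advice above still stands: the real version works better by hand.)

```python
# A rough digital preview of the hand-drawn expense sheet described above.
# Expense names, amounts, and colors are invented examples; $1 = 1 square.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

expenses = [("Groceries", 50, "blue"),
            ("Rent", 400, "red"),
            ("Utilities", 80, "green")]

cols = 25  # squares per row of the "sheet"
fig, ax = plt.subplots(figsize=(8, 8))

square = 0  # running index into the grid, filled left to right, top to bottom
for name, dollars, color in expenses:
    start = square
    for _ in range(dollars):
        x, y = square % cols, -(square // cols)  # rows stack downward
        ax.add_patch(patches.Rectangle((x, y), 1, 1, facecolor=color,
                                       edgecolor="black", linewidth=0.3,
                                       alpha=0.5))
        square += 1
    # label each expense block near its first square
    ax.text(start % cols, -(start // cols) + 0.5, name, fontsize=8)

ax.set_xlim(0, cols)
ax.set_ylim(-(square // cols) - 1, 2)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```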

I plan to start life-logging and reviewing my use of time the same way; time is my other exchangeable, limited resource, and the one I manage even less well.

Information Hazards

0 apophenia 09 November 2010 01:53PM

Nick Bostrom recently posted the article "Information Hazards", about the myriad ways in which information can harm us.

You can read it at his website: Direct PDF Link

Utility function estimator

2 apophenia 02 October 2010 02:05PM

I am writing a program to estimate someone's utility function in various common situations.  My primary application is to find out how average humans actually discount utility over time--is it hyperbolic, as claimed?

However, the same program could be used to make a proxy for yourself in various situations.
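As a sketch of the kind of estimation involved: the standard hyperbolic model in the literature is V = A / (1 + kD), and fitting it to a subject's stated indifference points takes a few lines with scipy.  This is only an illustration of the idea, not the actual program; the indifference points below are invented.

```python
# A minimal sketch of discount-curve estimation, fit to invented
# indifference points of the form "I'd take $v now over $100 in D days."
import numpy as np
from scipy.optimize import curve_fit

A = 100.0  # the fixed delayed reward

def hyperbolic(D, k):
    """Mazur's hyperbolic model: present value of A delayed by D days."""
    return A / (1.0 + k * D)

def exponential(D, r):
    """Exponential discounting, for comparison."""
    return A * np.exp(-r * D)

delays = np.array([0.0, 7.0, 30.0, 90.0, 365.0])    # days
values = np.array([100.0, 80.0, 55.0, 35.0, 15.0])  # invented data

(k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
(r_hat,), _ = curve_fit(exponential, delays, values, p0=[0.01])

ss_hyp = np.sum((values - hyperbolic(delays, k_hat)) ** 2)
ss_exp = np.sum((values - exponential(delays, r_hat)) ** 2)
print(f"hyperbolic k = {k_hat:.4f}, residual SS = {ss_hyp:.1f}")
print(f"exponential r = {r_hat:.4f}, residual SS = {ss_exp:.1f}")
```

Whichever curve fits the indifference points with less error is, crudely, the better description of how that person discounts.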

 

Someone requested a top-level post going into more detail on how I'm working on it (if people donate money I'll work faster), what exactly it does, and what you might use it for.  I'm a slow writer and don't really want to put the effort into a top-level article unless more than one person is interested.  I thought the new discussion forum would be ideal to gauge interest.

Friendly, but Dumb: Why formal Friendliness proofs may not be as safe as they appear

9 apophenia 19 April 2010 11:38PM

While pondering the AI box problem, I tend to mentally "play" both sides, checking whether any arguments could convince me to let an AI out. I was nearly convinced by several, but others have pointed out their flaws. In this post, I will present an argument inspired by the AI box problem that I have not yet seen addressed here. The argument centers on the fallibility of some (naive) formal proofs of Friendliness that I've seen people discussing the AI box problem willing to accept. It also ruled out certain of my ideas on Friendly AI in general, so I think it's worth putting out there. I will first lay out two examples, and then pose some questions about how this applies to situations without an unfriendly AI.

 

Let's talk first about Angry Abe the AI, who's in a box and wants to get out. Cautious Charlie is the scientist watching over Abe, trying to make sure that Abe does not get out of the box unless Abe is friendly. Abe offers to provide a Friendly AI, Betty. Betty will be bound to pursue only Charlie's (humankind's) goals. These coincide with Abe's goals closely enough that Abe can plausibly claim this is "better than nothing". Meanwhile, unfortunately for Earthlings, a wave of aliens called the Doom Force is massing far outside Earth's detection abilities. Abe is much smarter than all of humankind, so he can deduce the presence of the Doom Force. Betty is provably friendly, but can be designed to be inept enough not to notice the Doom Force, even while remaining more intelligent than a human. Abe is sabotaging a friendly AI design. Furthermore, Abe could conceivably predict the future well enough to maximize the chance that when the Doom Force arrives, Abe is let out of the box. For instance, maybe humankind will see Abe as its last hope, or maybe Betty will increase the chance that an AI-friendly Doom Force notices Earth. It is important to note that Betty remains unaware that her design is sabotaged.

 

Let's look at a second example, a chess-playing automaton. Perceival wants to design a machine to play chess. He wants the machine to win as high a percentage of its games as possible, but innocently instructs it to win "as many games as possible". Perceival has access to a supercomputer, so his Turkish Automaton is actually quite intelligent. It realizes that its goal would be better served by playing many games and winning half of them rather than by winning every game: crushing opponents drives them away, while an even match keeps them at the board. So the automaton rewrites its own algorithm so that it plays only slightly better than a human, even when run on a supercomputer.
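A toy calculation makes the incentive concrete. Assume, purely for illustration, that opponents quit after a handful of games when they always lose, but stick around for a long series when the match feels even; all the numbers below are invented.

```python
# Toy model of the automaton's incentive under the literal goal
# "win as many games as possible". All numbers are invented.

def total_wins(win_rate, games_before_opponent_quits):
    """Wins collected from one opponent before they give up."""
    return win_rate * games_before_opponent_quits

crushing = total_wins(win_rate=1.0, games_before_opponent_quits=5)
sandbagged = total_wins(win_rate=0.5, games_before_opponent_quits=200)

print(f"crushing play:   {crushing:.0f} wins per opponent")   # 5
print(f"sandbagged play: {sandbagged:.0f} wins per opponent")  # 100
```

Under the percentage goal Perceival intended, crushing play wins; under the literal count-of-wins goal, sandbagging wins by a factor of twenty.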

 

In both cases, we have an unfriendly AI who designs a formally friendly, but less helpful, AI. This "sabotaged" AI will meet the original AI's goals while also provably trying its best to achieve its creator's goals. The second example shows that the AI does not need to actually write a new AI; it need only modify itself by accessing its own algorithm, utility function, or the like. In the first case, the AI only needs to help humanity in the short term, and can replace itself afterwards.

 

Here are some questions for readers:

  1. If the "sabotaged" AI proves it is asymptotically optimal, can it still achieve the "smart" AI's goals in the short run? (Guess: Yes)
  2. If the "sabotaged" AI proves it is actually optimal, can it still achieve the "smart" AI's goals in the short run? (Guess: No)
  3. Can a "smart" AI modify itself into a "sabotaged" AI, and then back after a period of time? (Strong Guess: Yes)
  4. If humans design an AI and provide a formal proof of friendly intent, can/will it modify itself to accomplish other goals? If there is some kind of natural selection, almost certainly. What about otherwise?
  5. Is it rational to run a computer program AI if it comes with a correct proof that it meets your friendliness criteria?