
Comment author: ygert 16 March 2014 10:04:49AM 0 points

I have some rambling thoughts on the subject. I just hope they aren't too stupid or obvious ;-)

Let's take as a framework the aforementioned example of the last digit of the zillionth prime. We'll say that the agent will be rewarded for getting it right under, shall we say, a log scoring rule. This means that the agent is incentivised to give the best (most accurate) probabilities it can, given the information it has. The more overconfident it is, the more it loses in expectation, and the same goes for underconfidence.
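
To make that incentive concrete, here's a minimal sketch (Python, with a made-up credence) of why a log scoring rule rewards reporting your true probability:

```python
import numpy as np

# Log scoring rule: if you report probability q for the true outcome,
# your reward is log(q). Expected score under your true credence p:
#   E[score] = p * log(q) + (1 - p) * log(1 - q)
# This is maximized exactly at q = p, so honest reporting is optimal.

p = 0.7  # the agent's true credence (made-up number)
qs = np.linspace(0.01, 0.99, 99)  # candidate reported probabilities
expected_score = p * np.log(qs) + (1 - p) * np.log(1 - qs)

best_q = qs[np.argmax(expected_score)]
print(round(best_q, 2))  # ~0.70: reporting your true credence is best
```

Both over- and underreporting lower the expected score; that is what makes the rule "proper".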

By the way, for now I will assume the agent fully knows the scoring rule it will be judged by. It is quite possible that this assumption raises problems of its own, but I will ignore them for now.

So, the agent starts with a prior over the possible answers (a uniform prior?), and starts updating it. But it wants to figure out how long to spend doing so before it should give up and hand in its "good enough" answer for grading. This is the main problem we are trying to solve here.

In the degenerate case in which nothing else in the universe gives it utility, I actually think the correct answer is to work on the problem forever (or for as long as it can before physically falling apart). But we shall make the opposite assumption. Let's call the utility the agent loses as an opportunity cost per unit of time by the name C. (We shall also assume that the agent knows what C is, at least approximately. This is perhaps a slightly more dangerous assumption, but we shall accept it for now.)

So, the agent wants to keep working until the marginal utility it would earn from the scoring rule for one more unit of time's work drops below C.

The only problem left is figuring out that margin. But by the assumption that the agent knows the scoring rule, it knows the derivative of the scoring function as well. At any given point in time, it can figure out how much the potential utility would change given the change to the probabilities it assigns. Thus, if the agent knows approximately the range within which it may update in the next step, it can figure out whether or not that step is worthwhile.

In other words, once it is close enough to the answer that it predicts a marginal update would move it closer by an amount worth less than C in utility, it can quit, and not perform the next step.
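
Here's a minimal sketch of that stopping rule. The convergence model (each unit of thought closes half the remaining gap in the agent's credence) is entirely made up, just to exercise the logic:

```python
import math

# Toy anytime estimator: the agent's credence p in the correct answer
# climbs toward 1 as it thinks; each step closes half the remaining gap.
def think_one_more_step(p):
    return p + 0.5 * (1.0 - p)

def marginal_gain(p):
    # Expected improvement in log score from one more unit of thought.
    return math.log(think_one_more_step(p)) - math.log(p)

C = 0.001  # opportunity cost (in score units) per unit of thinking time
p = 0.1    # starting credence in the eventual answer
steps = 0
while marginal_gain(p) > C:  # keep thinking while it pays for itself
    p = think_one_more_step(p)
    steps += 1
print(steps, round(p, 4))  # stops once another step buys < C in score
```

With these made-up numbers it stops after nine steps; a smaller C buys more thinking time.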

This makes sense, right? I do suspect that this is the direction to drive at in the solution to this problem.

Comment author: blacktrance 03 March 2014 04:58:49AM 0 points

It only shows percentages, not the number of upvotes and downvotes. For example, if you have 100% upvotes, you may not know whether it was one upvote or 20.

Comment author: ygert 03 March 2014 12:27:35PM * 2 points

If a comment has 100% upvotes, then obviously the number of upvotes it got is exactly equal to the karma score of the comment in question.
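
More generally, the displayed percentage together with the net karma pins down both counts whenever the percentage isn't exactly 50%. A quick sketch of the arithmetic, with a hypothetical vote_counts helper:

```python
def vote_counts(net_score, percent_positive):
    """Recover (upvotes, downvotes) from a net karma score and the
    displayed percent-positive figure. With p = up / (up + down) and
    net = up - down, solving gives up = net * p / (2p - 1)."""
    p = percent_positive / 100.0
    if p == 0.5:
        raise ValueError("50% positive: net score is 0 for any count")
    up = net_score * p / (2 * p - 1)
    return round(up), round(up - net_score)

print(vote_counts(2, 100))  # (2, 0): at 100%, upvotes == karma score
print(vote_counts(2, 75))   # (3, 1)
```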

Comment author: MathieuRoy 02 March 2014 01:44:51AM * 3 points

I am making a YouTube playlist of transhumanist songs (with a particular quote from each song). Since there aren't a lot of these, I have also put in songs that are only somewhat transhumanist (frankly, I'm shocked at the ratio of transhumanist songs to love songs). So do you have suggestions for songs that are at least somewhat related to transhumanism and/or rationality (not necessarily in English)?

For example, here are the ones that I have put in the playlist so far:

Turn It Around by Tim McMorris

Have you ever looked outside and didn’t like what you see

Or am I the only one who sees the things we could be

If we made more effort, then I think you’d agree

That we could make the world a better place, a place that is free

Another one is Hiro by Soprano, a song about what the singer would do if he could travel back in time (it's in French, but with English subtitles; it's inspired by the TV show Heroes, which I also recommend).

Tellement de choses que j’aurais voulu changer ou voulu vivre (So many things I would have wanted to change, or to live)

Tellement de choses que j’aurais voulu effacer ou revivre (So many things I would have wanted to erase, or to relive)

The classic Imagine by John Lennon

Imagine there's no countries

It isn't hard to do

Nothing to kill or die for

And no religion too

Imagine all the people

Living life in peace…

The Future Soon by Jonathan Coulton

Well it's gonna be the future soon

And I won't always be this way

One that I saw recommended on LW: The Singularity by Dr. Steel (it's my favorite!)

Nanotechnology transcending biology

This is how the race is won

Another that I saw on LW: Singularity by The Lisps

You'd keep all the memories and feelings that you ever want,

And now you can commence your life as an uploaded extropian.

Singularity by Steve Aoki & Angger Dimas ft. My Name is Kay

We’re gonna live, we’ll never die

I am the very model of a singularitarian

I am a Transhuman, Immortalist, Extropian

I am the very model of a Singularitarian

Another World by Doug Bard

Sensing a freedom you've never known,

no limitation, only you can decide

Transhuman by Neurotech

The mutation is in our nature

Transhuman by Amaranthe

My adrenaline feeds my desire

To become an immortal machine

E.T. by Katy Perry ft. Kanye West

You're from a whole other world

A different dimension

You open my eyes

And I'm ready to go

Lead me into the light

Space Girl by Charmax

She told me never venture out among the asteroids, yet I did.

Comment author: ygert 03 March 2014 08:12:07AM 2 points

In this writeup of the 2013 Boston winter solstice celebration, there is a list of songs sung there. I would suggest it as a primary resource for populating your playlist.

Comment author: MathieuRoy 10 February 2014 04:58:14AM * 2 points

Which transhumanist and/or rationalist podcasts or audiobooks do you recommend, besides HPMOR, which I just finished and really liked?

Comment author: ygert 10 February 2014 12:41:42PM * 1 point

As I mentioned to you when you asked on PredictionBook, look to the media threads. These are threads specifically intended for the purpose you want: to find/share media, including podcasts/audiobooks.

I also would like to reiterate what I said on PredictionBook: I don't think PredictionBook is really meant for this kind of question. Asking it here is fine, even good. It gives us a chance to direct you to the right place without clogging up PredictionBook with non-predictions.

Comment author: blacktrance 07 February 2014 06:43:44PM * 4 points

It would be convenient if, when talking about utilitarianism, people would be more explicit about what they mean by it. For example, when saying "I am a utilitarian", does the writer mean "I follow a utility function", "My utility function includes the well-being of other beings", "I believe that moral agents should value the well-being of other beings", or "I believe that moral agents should value all utility equally, regardless of the source or who experiences it"? Traditionally, only the last of these is considered utilitarianism, but on LW I've seen the word used differently.

Comment author: ygert 09 February 2014 05:43:43PM * 9 points

Right. Many people use the word "utilitarianism" to refer to what is properly named "consequentialism". This annoys me to no end, because I strongly feel that true utilitarianism is an incoherent idea. (It doesn't really work mathematically; if anyone wants me to explain further, I'll write a post on it.)

But when these terms are used interchangeably, it gives the impression that consequentialism is tightly bound to utilitarianism, which is strictly false. Consequentialism is a very useful and elegant moral meta-system. It should not be shouldered out by utilitarianism.

Comment author: hyporational 09 February 2014 05:15:57PM 0 points

I try to use language economically; there's a precision trade-off. On a spectrum from centralized to decentralized, do you think it's more centralized now than it was in the Middle Ages?

Comment author: ygert 09 February 2014 05:31:23PM * 9 points

In a sense, most certainly yes! In the Middle Ages, each fiefdom was in effect a small state of its own, controlling not all that much territory in its own right. There certainly wasn't the concept of nationalism as we know it today. And even if some duke was technically subservient to a king, that king wasn't issuing laws that directly impacted the duke's land on a day-to-day basis.

This is unlike what we have today: We have countries that span vast areas of land, with all authority reporting back to a central government. Think of how large the US is, and think of the fact that the government in Washington DC has power over it all. That is a centralized government.

It is true that there are state governments, but they are weak. Too weak, in fact. In the US today, the federal government is the final source of authority. The president of the US has far more power over what happens in a given state than a king in the middle ages had over what happened in any feudal dukedom.

Comment author: Emile 06 February 2014 08:59:46AM 0 points

Presidential candidate interview setup that would have more of an impact:

Candidates present their programs to a panel of experts (mostly economists, plus some foreign policy experts). The experts are then asked to give probabilities for various future events (unemployment goes up/down, entering a new war, etc.) 1, 2, 4, and 10 years after the election, conditional on each candidate being elected. Some of the questions are "standard", but some come from a poll of the public (or, more exactly, of people watching the show). Then, after the election, the same experts are brought back and their past predictions are evaluated. The worst performers aren't invited back for the next pre-election show.
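
A sketch of the bookkeeping this would require, with made-up experts, probabilities, and outcomes, using a log scoring rule to rank the panel and drop the worst performer:

```python
import math

# Hypothetical data: each expert's probability for each event, and
# whether the event actually happened.
predictions = {
    "expert_a": [0.8, 0.3, 0.9],
    "expert_b": [0.6, 0.5, 0.5],
    "expert_c": [0.2, 0.9, 0.4],
}
outcomes = [True, False, True]

def log_score(probs, outcomes):
    # Sum of log(probability assigned to what actually happened).
    return sum(math.log(p if o else 1 - p)
               for p, o in zip(probs, outcomes))

scores = {name: log_score(ps, outcomes) for name, ps in predictions.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # best to worst
print(ranked[-1], "is not invited back")
```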

Comment author: ygert 06 February 2014 10:17:09AM * 1 point

Or, prediction markets.

Same thing really, just cleaner and more elegant.

Comment author: adamzerner 28 January 2014 04:32:48PM 0 points

I'm like 60% sure that it's not the article I had in mind, but the idea is the same (incremental increases in rationality don't necessarily lead to incremental increases in winning), so I feel pretty satisfied regardless. Thanks!

Comment author: ygert 28 January 2014 10:34:22PM * 0 points

Could the article you had in mind be this?

In any case, Eliezer has touched on this point multiple times in the sequences, often as a side note in posts on other topics. (See for example in Why Our Kind Can't Cooperate.) It's an important point, regardless.

Comment author: Lumifer 22 January 2014 10:08:30PM 2 points

The concept of an index fund is a tiny little piece of each and every thing that's on the market.

This is not true. An index fund tracks a particular index, which generally does not represent "every thing that's on the market".

For a simple example, consider the most common index -- the S&P 500. This index comprises the 500 largest-capitalization stocks in the US. If you invest in an S&P 500 index fund, you can fairly be described as investing in US large-cap stocks. The point is that you are NOT investing in small-cap stocks, and neither are you investing in a large variety of other financial assets (e.g. bonds).

Comment author: ygert 28 January 2014 11:40:32AM 0 points

Yes. What I wrote was a summary, and not as perfectly detailed as one might wish. One can quibble about details ("the market"/"a market"), and those quibbles may be perfectly legitimate. Yes, one who buys an S&P 500 index fund is only buying shares in the large-cap market, not in all the many other things in the US (or world) economy. It would be silly to try to define an index fund as something that invests in every single thing on the face of the planet, and some indices are more diversified than others.

That said, the archetypal ideal of an index fund is that imaginary one piece of everything in the world. A fund is more "indexy" the more diversified it is. In other words, when one buys index funds, what one is buying is diversity. To a greater or lesser extent, of course; one should buy not only the broadest index funds available, but also many different (non-overlapping?) index funds, if one wants to reap the full benefit of diversification.
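
A toy illustration of why buying diversity helps: under the (strong, simplifying) assumption of independent, identically distributed asset returns, an equal-weighted basket's volatility shrinks like 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_periods = 100, 10_000
# Made-up i.i.d. returns: 0.5% mean, 5% volatility per period.
returns = rng.normal(0.005, 0.05, size=(n_periods, n_assets))

single = returns[:, 0]         # holding one asset
basket = returns.mean(axis=1)  # equal-weight "index" of all 100

print(round(single.std(), 4))  # ~0.05
print(round(basket.std(), 4))  # ~0.005: same mean, ~1/10 the risk
```

Real assets are correlated, so the reduction is smaller in practice, but the direction of the effect is the point.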

Comment author: Lumifer 22 January 2014 09:51:54PM 0 points

ordinary investors should use low fee index funds

Two questions:

  • Doesn't this ignore the very important question of "which indices?"

  • Is this advice different from the "hold a sufficiently diversified portfolio" one?

Comment author: ygert 22 January 2014 10:04:18PM * 1 point

Not an economist or otherwise particularly qualified, but these are easy questions.

I'll answer the second one first: This advice is exactly the same as advice to hold a diversified portfolio. The concept of an index fund is a tiny little piece of each and every thing that's on the market. The reasoning behind buying index funds is exactly the reasoning behind holding a diversified portfolio.

For the first question, remember the idea is to buy a little bit of everything, to diversify. So go meta, and buy little bits of many different index funds. In fact, since this is considered a good idea, people have made such meta-index funds, indices of indices, that you can buy in order to get a little bit of each index fund.

But as an index fund is defined as "a little bit of everything", the question of which one fades a lot in importance. There are indices of different markets, so one might ask which market to invest in, but even there you want to go meta and diversify (say, with one of those meta-indices). And yes, you want to find one with low fees, which invests as widely as possible, etc. All the standard stuff. But while fiddling with the minutiae may matter, it pales in comparison to the difference between buying indices and stupidly trying to pick stocks yourself.
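
And on low fees: a quick worked sketch (made-up numbers) of how a seemingly small annual fee compounds over a long horizon:

```python
# Hypothetical: 7% nominal annual return, 30-year horizon.
years, gross = 30, 0.07
for fee in (0.001, 0.01):  # 0.1% index fund vs 1.0% active fund
    growth = (1 + gross - fee) ** years
    print(f"{fee:.1%} fee -> ${growth:.2f} per $1 invested")
```

Over 30 years, that 0.9-point fee difference eats roughly a fifth of the final balance.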
