cousin_it

Comments

Sure. But even then, trying looks very different from not trying. I bought a saxophone a few weeks ago and play it every day. Then I booked a lesson with a teacher. Well, I didn't start the lesson by asking "how do I get better at saxophone?" Instead I played a few notes and asked "Anything jump out at you? I'm not too happy that some notes come out flat, the sound isn't bright enough, anyway you see what I'm doing and any advice would be good." And then he gave me a lot of good advice tailored to where I am.

There's a great video from Ben Finegold making the same point in the context of learning chess. I like it so much I'm gonna quote the beginning:

"Is it possible to be good at chess when you start playing at 18?" Anything's possible. But if you want to get good at chess - I'm not saying you do, but if you do - then people who say things like "what opening should I play?" and "my rating's 1200, how do I get to 1400?" and you know, "my coach says this but I don't want to do that", or you know, "I lost five games in a row on chess.com so I haven't played in a week", those kind of things have nothing to do with getting better at chess. It's inquisitive of you, and that's what most people do. People who get better at chess play chess, study chess, think about chess, love chess and do chess stuff. People who don't get better at chess spend 90 percent of their energy thinking about "how do I get better" and asking questions about it. That's not how you get better at something. You want to get better at something, you do it and you work hard at it, and it's not just chess, that's your whole life.

Yeah. See also Stuart's post:

Expected utility maximization is an excellent prescriptive decision theory... However, it is completely wrong as a descriptive theory of how humans behave... Matthew Rabin showed why. If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic. Put simply, the small bets behavior forces their utility to become far too concave.
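To see how strong that constraint is, here's a minimal numerical sketch (my illustration, not from Stuart's post or Rabin's paper), assuming a CARA (exponential) utility function rather than Rabin's fully general concavity argument: an expected-utility maximizer who is merely indifferent to a 50/50 lose-$100 / gain-$110 bet is forced, by the same utility function, to reject a 50/50 lose-$2,000 / gain-$1,000,000,000 bet.

```python
import math

def rejects_bet(a, loss, gain):
    """With CARA utility u(x) = -exp(-a*x), initial wealth cancels out, so a
    50/50 bet (lose `loss` or win `gain`) is rejected exactly when
    0.5 * (exp(a*loss) + exp(-a*gain)) > 1."""
    return 0.5 * (math.exp(a * loss) + math.exp(-a * gain)) > 1

# Bisect for the risk-aversion coefficient at which the agent is just
# indifferent to a 50/50 "lose $100 / gain $110" bet.
lo, hi = 1e-9, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rejects_bet(mid, loss=100, gain=110):
        hi = mid  # already rejects the small bet: indifference point is below mid
    else:
        lo = mid
a = 0.5 * (lo + hi)
print(f"implied CARA coefficient: a = {a:.6f}")

# The same coefficient forces rejection of an absurdly favorable large bet.
print("rejects 50/50 lose $2,000 / gain $1,000,000,000:",
      rejects_bet(a, loss=2000, gain=1_000_000_000))
```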

cousin_it

Going back to the envelopes example, a nosy neighbor hypothesis would be "the left envelope contains $100, even in the world where the right envelope contains $100". Or if we have an AI that's unsure whether it values paperclips or staples, a nosy neighbor hypothesis would be "I value paperclips, even in the world where I value staples". I'm not sure how that makes sense. Can you give some scenario where a nosy neighbor hypothesis would be reasonable?

cousin_it

Imagine if we had narrowed down the human prior to two possibilities, P_1 and P_2. Humans can’t figure out which one represents our beliefs better, but the superintelligent AI will be able to figure it out. Moreover, suppose that P_2 is bad enough that it will lead to a catastrophe from the human perspective (that is, from the P_1 perspective), even if the AI were using UDT with 50-50 uncertainty between the two. Clearly, we want the AI to be updateful about which of the two hypotheses is correct.

This seems like the central argument in the post, but I don't understand how it works.

Here's a toy example. Two envelopes: one contains $100, the other leads to a loss of $10000. We don't know which envelope is which, but it's possible to figure out by a long computation. So we make a money-maximizing UDT AI whose prior is "the $100 is in whichever envelope {long_computation} says". Now if the AI has time to do the long computation, it'll do it and then open the right envelope. And if it doesn't have time to do the long computation, and is offered the choice between opening a random envelope and abstaining, it will abstain. So it seems like ordinary UDT solves this example just fine. Can you explain where "updatefulness" comes in?
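For concreteness, here's a minimal sketch of that decision rule in code (the names are hypothetical stand-ins for the setup above). The point is just arithmetic: a random envelope is worth 0.5 * 100 + 0.5 * (-10000) = -4950 in expectation, so the AI that can't run the computation abstains, while the one that can runs it and takes the +$100.

```python
def long_computation():
    # Stand-in for the expensive computation in the prior
    # "the $100 is in whichever envelope {long_computation} says".
    return "left"

def choose(has_time_for_long_computation: bool) -> str:
    if has_time_for_long_computation:
        # Run the computation and open the envelope it points to: +$100.
        return long_computation()
    # Without running it, each envelope is equally likely to be the good one,
    # so a random pick is worth 0.5 * 100 + 0.5 * (-10000) = -4950 < 0: abstain.
    return "abstain"

print(choose(True))   # opens the envelope the computation indicates
print(choose(False))  # abstains
```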

I think even one dragon would have a noticeable effect on the population of large animals in the area. The huge flying thing just has to eat so much every day, it's not even fun to imagine being one. If we invent shapeshifting, my preferred shape would be some medium-sized bird that can both fly and dive in the water, so that it can travel and live off the land with minimal impact. Though if we do get such technology, we'd probably have to invent territorial expansion as well, something like creating many alternate Earths where the new creatures could live.

cousin_it

I'm not sure the “poverty equilibrium” is real. Poverty varies a lot by country and time period, and various policies in various places have helped with it, so UBI might help as well. Though I think other policies (like free healthcare, or fixing housing laws) might help more per dollar.

cousin_it

I think the main point of the essay might be wrong. It's not necessarily true that evolution will lead to a resurgence of high fertility. Yes, evolution is real, but it's also slow: it works on the scale of generations. Culture today evolves faster than that. It's possible that culture can keep adapting its fertility-lowering methods faster than humans can evolve defenses against them.

cousin_it

I think you're right: Georgism doesn't get passed because it goes against the interests of landowners, who have overwhelming political influence. But if the actual problem we're trying to solve is high rents, maybe that doesn't require full Georgism? Maybe we just need to make construction legally easier. There's strong opposition to that too, but not as strong as the opposition of literally all landowners.

It seems to me that land of comparable quality can already be bought more cheaply in many places. The post says the new land could be more valuable because of better governance, but governance is an outcome of human politics, so it's orthogonal to old vs. new land. In Jules Verne's Propeller Island, a power conflict eventually leads to the physical destruction of the island.
