Comment author: [deleted] 28 April 2014 03:54:27PM 3 points [-]

Question: When is an AI considered to have taken over the world?

Because there is a hypothetical I am pondering, but I don't know if it would be considered a world takeover or not, and I'm not even sure if it would be considered an AI or not.

Assume only 25% of humans want more spending on proposal A, and 75% of humans want more spending on proposal B.

The AI wants more spending on proposal A. As a result, more spending is put into proposal A.

For all decisions like that in general, it doesn't actually matter what the majority of people want, the AI's wants dictate the decision. The AI also makes sure that there is always a substantial vocal minority of humans that are endorsing it.

However, the vast majority of people are not actually explicitly aware of the AI's presence, because the AI works better when people aren't aware of it. Anyone suggesting there is an AI controlling humans is dismissed by almost everyone as a crackpot, since the AI operates in such a distributed manner that there isn't any one system or piece of software that can be pointed to as a controller, and so it seems like there isn't an AI in place, just a series of dumb components.

In a case like that, is an AI considered to have taken over the world, and is the system described above actually an AI?

Comment author: ThrustVectoring 29 April 2014 02:21:56AM 4 points [-]

"Control" in general is not particularly well defined as a yes/no proposition. You can likely rigorously define an agent's control of a resource by finding the expected states of that resource, given various decisions made by the agent.

That kind of definition works for measuring how much control you have over your own body: given that you decide to raise your hand, how likely are you to actually raise it, compared to when you decide not to? Invalids and inmates have much less control over their bodies, which is pretty much what you'd expect out of a reasonable definition of control over resources.
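One hedged way to make that concrete (all the numbers here are invented for illustration): treat "control" over an outcome as the spread in outcome probability across the agent's possible decisions.

```python
# Illustrative sketch: quantify "control" over a resource as the difference
# in outcome probability across the agent's possible decisions.
# The probabilities below are made-up examples, not measurements.

def control(p_outcome_given_decision):
    """Control = max spread in P(outcome) across decisions (0 = none, 1 = total)."""
    probs = p_outcome_given_decision.values()
    return max(probs) - min(probs)

# A healthy person: deciding to raise the hand almost always raises it.
healthy = {"raise": 0.99, "dont_raise": 0.01}
# Someone in restraints: the decision barely changes the outcome.
restrained = {"raise": 0.10, "dont_raise": 0.01}

print(control(healthy))     # high control
print(control(restrained))  # low control
```

On this toy measure, the distributed AI in the question "controls" spending to the extent that P(spending on A | AI wants A) differs from P(spending on A | AI wants B), regardless of whether anyone can point at a controller.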

This is still a very hand-wavy definition, but I hope it helps.

Comment author: Jayson_Virissimo 08 April 2014 06:06:03PM *  11 points [-]

App Academy has been discussed here before and several Less Wrongers have attended (such as ChrisHallquist, Solvent, Curiouskid, and Jack).

I am considering attending myself during the summer and am soliciting advice pertaining to (i) maximizing my chance of being accepted to the program and (ii) maximizing the value I get out of my time in the program given that I am accepted. Thanks in advance.

EDIT: I ended up applying and just completed the first coding test. Wasn't too difficult. They give you 45 minutes, but I only needed < 20.

EDIT2: I have reached the interview stage. Thanks everyone for the help!

EDIT3: Finished the interview. Now awaiting AA's decision.

EDIT4: Yet another interview scheduled...this time with Kush Patel.

EDIT5: Got an acceptance e-mail. Decision time...

EDIT6: Am attending the August cohort in San Francisco.

Comment author: ThrustVectoring 10 April 2014 07:49:07PM 2 points [-]

I'm a current student who started two weeks ago on Monday. I'd be happy to talk as well.

Comment author: ThrustVectoring 05 April 2014 07:11:05PM 0 points [-]

Dollars already have value. If you produce valuable goods and services inside the United States, you owe some dollars to the US government as taxes. That's all there is to it, really - if someone wants to make #{product} in a US plant, they now owe US dollars to the government, which they need to acquire by selling #{product}. So if you have US dollars, you can buy things like #{product}.

That's the concise answer.

Comment author: ThrustVectoring 26 March 2014 09:48:07PM 11 points [-]

The real danger of the "win-more" concept is that it's only barely different from making choices that turn an advantage into a win. You're often put in a place where you're somehow ahead, but your opponent has ways to get back in the game. They don't have them yet - you wouldn't be winning if they did - but the longer the game drags on, the more chances they get to find them.

For a personal example from a couple years ago, playing Magic in the Legacy format, I once went up against a Reanimator deck with my mono-blue control deck. The start was fairly typical - Reanimator trying to resolve a gigantic threat to win, while I played many counterspells and hit him with some Vendilion Clique beats. My opponent ended up getting an Iona out (naming blue, obviously), but went down to exactly one life to do so. This was very, very awkward for him, since he couldn't attack with the Iona, activate fetchlands, or use the alternate cost of Force of Will. But I had outs - Powder Keg (7 copies of keg/ratchet bomb) plus waiting 9 turns, or Vedalken Shackles (3 copies). So I stayed in, took as many draw phases as I could, and lucked out with a Shackles topdeck, after which I could play blue spells again and win the game.

Anyhow, my point is that cards that help you only when you're winning can turn wins into losses. Your opponents can have outs, and it's a good idea to take those outs away. If you don't, then sometimes your opponent will pull exactly what they need to do something ridiculous - say, dealing with a card that keeps them from playing 28 of their 38 spells, and seven of the ten spells they can play take 9 turns to do anything about it.

"Win-more" is definitely the wrong word to describe this concept. I think a better choice is calling it a "close-out" or "finishing" card. The point of these is to make sure that you win when you have an advantage. It also tells you that you don't want too many of these - many decks run just one or two copies. Dredge, for instance, runs a single Flayer to turn having their deck in their graveyard into a win. My mono-blue control deck ran two Sphinx of Jwar Isle (there were essentially zero answers to him in the meta, and I've stolen games with him; that said, one copy would have been an Aetherling if it had been printed at the time).

Replacing a card with a finisher means that you'll take fewer leads, but win more games while ahead. Sometimes the right number of finishers is one - when Dredge has a lead, it's got access to all or most of the cards in their deck. Sometimes it's more - my mono-blue deck would run between 2 and 6, depending on how I felt about Jace at the time. Often it's zero, and your game plan is to win with the cards that got you ahead in the first place.

Comment author: ThrustVectoring 16 March 2014 09:51:27PM 4 points [-]

I read a comment in this thread by Armok_GoB, and it reminded me of some machine-learning angles you could take on this problem. Forgive me if I make a fool of myself on this, I'm fairly rusty. Here's my first guess as to how I'd solve the following:

open problem: the tradeoff of searching for an exact solution versus having a good approximation

Take a bunch of proven statements, and look at half of them. Generate a bunch of possible heuristics, and score them based on how well they predict the other half of the proven statements given the first half as proven. Keep track of how long it takes to apply a heuristic. Use the weighted combination of heuristics that worked best on known data, given various run-time constraints.
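A minimal sketch of that procedure (the "proven statements" and candidate heuristics here are toy stand-ins, and the scoring is plain accuracy rather than anything principled; in a real version the heuristics would be fitted on the first half rather than fixed in advance):

```python
import time

# Toy "proven statements": numbers labeled True when even, as a stand-in
# for provability. Half the data is held out for scoring.
proven = [(n, n % 2 == 0) for n in range(100)]
train, holdout = proven[:50], proven[50:]

# Candidate heuristics: cheap guesses about whether a statement is "provable".
heuristics = {
    "always_true": lambda n: True,
    "is_even": lambda n: n % 2 == 0,
    "small": lambda n: n < 75,
}

scores = {}
for name, h in heuristics.items():
    start = time.perf_counter()
    correct = sum(h(n) == label for n, label in holdout)
    elapsed = time.perf_counter() - start
    # Track both predictive accuracy and how long the heuristic takes to run,
    # so accuracy can later be traded off against run-time constraints.
    scores[name] = (correct / len(holdout), elapsed)

best = max(scores, key=lambda k: scores[k][0])
print(best, scores[best])
```

The `(accuracy, elapsed)` table is exactly the input the next paragraph needs: given a value for accurate information, you can compare it against the recorded computation time before deciding whether to run a heuristic at all.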

With a table of heuristic combinations and their historical effectiveness and computational time, and the expected value of having accurate information, you can quickly compute the expected value of running the heuristics. Then compare it against the expected computation time to see if it's worth running.

Finally, you can update the heuristics themselves whenever you decide to add more proofs. You can also check short run-time heuristics with longer run-time ones. Things that work better than you expected, you should expect to work better.

Oh, and the value-of-information calculation I mentioned earlier can be used to pick up some cheap computational cycles as well - if it turns out that knowing whether the billionth digit of pi is "3" is only worth $3.50, and computing it would cost more than that, you can simply decide not to care about the question.

And to be rigorous, here are the hand-waving parts of this plan:

  1. Generate heuristics. How? I mean, you could simply write every program that takes a list of proofs, starting at the simplest, and start checking them. That seems very inefficient, though. There may be machine learning techniques for this that I simply have not been exposed to.

  2. Given a list of heuristics, how do you determine how well they work? I'm pretty sure this is a known-solved problem, but I can't remember the exact solution. If I remember right it's something along the lines of log-difference, where getting something wrong is worth more penalty points the more certain you are about it.

  3. Given a list of heuristics, how do you find the best weighted combination under a run-time constraint? This is a gigantic mess of linear algebra.
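For what it's worth, the scoring rule I half-remember in (2) sounds like log loss, where being confidently wrong costs far more than being hesitantly wrong. A quick sketch (assuming binary predictions made with a stated confidence):

```python
import math

def log_loss(confidence, outcome):
    """Penalty for asserting the outcome is True with `confidence` in (0, 1).
    The penalty is -log of the probability assigned to what actually happened,
    so high-confidence mistakes are punished much harder than hesitant ones."""
    p = confidence if outcome else 1 - confidence
    return -math.log(p)

# Wrong at 60% confidence vs. wrong at 99% confidence:
print(log_loss(0.6, False))   # mild penalty, about 0.92
print(log_loss(0.99, False))  # severe penalty, about 4.61
```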

And another problem with it that I just found is that there's no room for meta-heuristics. If the proofs come in two distinguishable groups that are separately amenable to two different heuristics, then it's a really good idea to separate out the two groups and apply the better approach to each. My approach seems like it'd be likely to miss this sort of insight.

Comment author: Squark 15 March 2014 11:32:30AM 0 points [-]

This is already addressed in the post (a late addition maybe?)

Comment author: ThrustVectoring 15 March 2014 01:49:56PM 0 points [-]

Yeah, it wasn't there when I posted the above. The "donate to the top charity on GiveWell" plan is a very good example of what I was talking about.

Comment author: ThrustVectoring 14 March 2014 09:00:58AM 2 points [-]

There are timeless decision theory and coordination-without-communication issues that make diversifying your charitable contributions worthwhile.

In short, you're not just allocating your money when you make a contribution, but you're also choosing which strategy to use for everyone who's thinking sufficiently like you are. If the optimal overall distribution is a mix of funding different charities (say, because any specific charity has only so much low-hanging fruit that it can access), then the optimal personal donation can be mixed.

You can model this by a function that maps your charitable giving to society's charitable giving after you make your choice, but it's not at all clear what this function should look like. It's not simply tacking on your contribution, since your choice isn't made in a vacuum.
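A toy version of the diminishing-returns point (the utility function and all the numbers are invented for illustration): if everyone who reasons like you ends up giving identically, the best common allocation can be mixed even when one donor's split looks pointless in isolation.

```python
import math

# Toy model: N like-minded donors each give `budget`, split identically
# between two charities whose impact has diminishing returns (sqrt utility).
N = 1000          # donors assumed to mirror your decision (invented number)
budget = 100.0    # dollars per donor (invented)

def total_good(frac_to_a):
    a = N * budget * frac_to_a
    b = N * budget * (1 - frac_to_a)
    return math.sqrt(a) + math.sqrt(b)  # low-hanging fruit runs out at each charity

# Grid-search the best common split across all mirrored donors.
best = max((f / 100 for f in range(101)), key=total_good)
print(best)
```

Under these assumptions the even split beats putting everything into one charity, which is the sense in which "the optimal personal donation can be mixed" - the answer depends entirely on the shape of the returns curve and on how many donors actually mirror your choice, neither of which is easy to know.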

Comment author: Metus 11 March 2014 03:33:09AM *  12 points [-]

Now that would be an interesting topic: The rationalist hobo.

I am actually considering something similar. There is the extreme early retirement community, where the general suggestion is to earn a lot of money in a short amount of time, live below your means during that time, invest as much as possible, and then live off the interest. Driven to extremes, the necessary base capital can be quite low - in the low hundreds of thousands.

As interest income is mobile and I can relocate to a country that levies almost no tax on capital income, I am free to roam the world. Additional income can come from local work or donations, as I intend to still spend some of my time on theoretical research, which is essentially just time-consuming, with no need for capital expenditure.

For some time at least this would be very interesting.

Edit: The availability of so much free learning material online makes this even more viable. The only issue will be maintaining a good exercise regimen and good eating habits.

Edit 2: If you can learn remotely, you can work remotely. Being on the road does not preclude doing analysis or similar stuff to still earn an income.

Comment author: ThrustVectoring 11 March 2014 07:30:59PM 0 points [-]

There is a huge amount of risk involved in retiring early. You're essentially betting that you aren't going to find any fun, useful, enjoyable, or otherwise worthwhile uses for money beyond what you've saved. You're betting that whatever resources you have at retirement will be enough, and you're trading away your current earning power for whatever diminished earning power you'd have if you later tried to return to work.

Comment author: ThrustVectoring 11 March 2014 07:20:59PM 5 points [-]

Standard beliefs are only more likely to be correct when the cause of their standard-ness is causally linked to their correctness.

That takes care of things like, say, pro-American patriotism and pro-Christian religious fervor. Specifically, these ideas are standard not because contrary views are wrong, but because expressing contrary views makes you lose status in the eyes of a powerful in-group. Furthermore, it does not exclude beliefs like "classical physics is an almost entirely accurate description of the world at a macro scale" - inaccurate models would contradict observations of the world and get replaced with more accurate ones.

Granted, standard opinions often are standard because they are right. But, the more you can separate out the standard beliefs into ones with stronger and weaker links to correctness, the more this effect shows up in the former and not the latter.

"To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue."

I think that's on the same page as my initial thoughts on the matter. At least, it is a useful heuristic that applies more to correct standard beliefs than incorrect ones.

Comment author: Punoxysm 07 March 2014 04:25:43PM 0 points [-]

In addition to what V_V says below, there could be absolutely no official circumstance under which the AI should be released from the box: that iteration of the AI can be used solely for experimentation, and only the next version with substantial changes based on the results of those experiments and independent experiments would be a candidate for release.

Again, this is not perfect, but it gives some more time for better safety methods or architectures to catch up to the problem of safety while still gaining some benefits from a potentially unsafe AI.

Comment author: ThrustVectoring 07 March 2014 10:32:40PM 0 points [-]

Taking source code from a boxed AI and using it elsewhere is equivalent to partially letting it out of the box - especially if how the AI works is not particularly well understood.
