App Academy has been discussed here before and several Less Wrongers have attended (such as ChrisHallquist, Solvent, Curiouskid, and Jack).
I am considering attending myself during the summer and am soliciting advice pertaining to (i) maximizing my chance of being accepted to the program and (ii) maximizing the value I get out of my time in the program given that I am accepted. Thanks in advance.
EDIT: I ended up applying and just completed the first coding test. Wasn't too difficult. They give you 45 minutes, but I only needed < 20.
EDIT2: I have reached the interview stage. Thanks everyone for the help!
EDIT3: Finished the interview. Now awaiting AA's decision.
EDIT4: Yet another interview scheduled...this time with Kush Patel.
EDIT5: Got an acceptance e-mail. Decision time...
EDIT6: Am attending the August cohort in San Francisco.
Question: When is an AI considered to have taken over the world?
Because there is a hypothetical I am pondering, but I don't know if it would be considered a world takeover or not, and I'm not even sure if it would be considered an AI or not.
Assume only 25% of humans want more spending on proposal A, and 75% of humans want more spending on proposal B.
The AI wants more spending on proposal A. As a result, more spending is put into proposal A.
For all decisions like that in general, it doesn't actually matter what the majority of people want; the AI's wants dictate the decision. The AI also makes sure that there is always a substantial vocal minority of humans endorsing it.
However, the vast majority of people are not actually explicitly aware of the AI's presence, because the AI works better when people aren't aware of it. Anyone suggesting there is an AI controlling humans is dismissed by almost everyone as a crackpot: the AI operates in such a distributed manner that there isn't any one system or piece of software that can be pointed to as a controller, so it seems like there is no AI in place, just a series of dumb components.
In a case like that, is an AI considered to have taken over the world, and is the system described above actually an AI?
"Control" in general is not particularly well defined as a yes/no proposition. You can likely rigorously define an agent's control of a resource by finding the expected states of that resource, given various decisions made by the agent.
That kind of definition works for measuring how much control you have over your own body - given that you decide to raise your hand, how likely are you to raise your hand, compared to deciding not to raise your hand. Invalids and inmates have much less control of their body, which is pretty much what you'd expect out of a reasonable definition of control over resources.
This is still a very hand-wavy definition, but I hope it helps.
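The hand-raising comparison above can be sketched numerically. This is a minimal illustration of the proposed measure (difference in the outcome's probability under the agent's two possible decisions); the probabilities are made-up numbers, not anything from the discussion:

```python
def control(p_outcome_if_chosen, p_outcome_if_not_chosen):
    """Degree of control over an outcome: how much the agent's decision
    shifts the probability of that outcome occurring."""
    return p_outcome_if_chosen - p_outcome_if_not_chosen

# A healthy person deciding to raise their hand: the decision almost
# fully determines the outcome, so control is close to 1.
healthy_person = control(0.99, 0.01)

# A restrained inmate deciding to raise their hand: the decision barely
# moves the outcome, so control is close to 0.
restrained_inmate = control(0.10, 0.01)

print(healthy_person > restrained_inmate)
```

On this definition, control is a matter of degree rather than a yes/no proposition, which matches the point above: the hypothetical AI would count as "in control" to whatever extent spending outcomes track its preferences rather than the majority's.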