All of Kevin Lacker's Comments + Replies

Answer by Kevin Lacker

In high school, college, and graduate school I was not very hard-working. But when I left school and started working in the tech industry, I suddenly became very hard-working. At first, I was surrounded by hard workers who were very smart, and I was very competitive, so I wanted to be one of the best. Over time, these other hard workers became my friends, and since we were all working together on hard projects, other people came to depend on me. If I said I would have X done by the end of the week and I didn't get it done, my friends would be disappointed…

I think you actually do not have very much power as a board member. During normal operations, you can give advice to the CEO, but you have no power beyond that access to the CEO. If the CEO resigns or is being forced out, you briefly hold an important power, but it is a narrow one: the ability to find and vote in a replacement CEO.

The position is very public and respected, so it may feel like "a lot of power," but quite often even a mid-level employee at the organization has more real power over the direction of the company than a board member does.

I suspect this question is misworded:

Will there be a 4 year interval in which world GDP growth doubles before the first 1 year interval in which world GDP growth doubles?

Do you mean in which world GDP doubles? World GDP growth doubles when it goes from, say, 0.5% yearly growth to 1% yearly growth.

Personally, I suspect world GDP is most likely to next double in a period after a severe war or depression, so you might want to rephrase to avoid that scenario if that isn't what you're thinking about.

Amandango
This was a good catch! I did actually mean world GDP, not world GDP growth. Because people have already predicted on this, I added the corrected questions above as new questions, and am leaving the previous questions here for reference: Elicit Prediction (elicit.org/binary/questions/3PyXoU0ac) Elicit Prediction (elicit.org/binary/questions/Lu_U2Mz-M) Elicit Prediction (elicit.org/binary/questions/0oZaRoJEt)

I believe there already is a powerful AI persuasion tool: the Facebook algorithm. It is attempting to persuade you to stay on Facebook, to keep reading Facebook so that their "engagement" metrics are optimized. Indeed, many of the world's top AI researchers are employed in building this tool. So far it is focused much more on ranking than on text generation, but if AI text generation improves to the extent that it's interesting for humans to read, I would expect Facebook to incorporate that into newsfeed. AI-generated "news summaries" might be one area that this happens first.

Daniel Kokotajlo
Yes, this is one of the examples I had in mind of "Feeders."

I don't think the metaphor about writing code works. You say, "Imagine a company has to solve a problem that takes about 900,000 lines of code." But in practice, a company never possesses that information. They know what problem they have to solve, but not how many lines of code it will take. Certainly not when it's on the order of a million lines.

For example, let's say you're working for a pizza chain that already does delivery, and you want to launch a mobile app to let people order your food. You can decompose that into parts pretty reasonably - you nee…

Rafael Harth
I agree that the analogy doesn't work in every way; my judgment was that the non-analogous aspects don't significantly distract from the point. I think the primary difference is that software development has an external (as in, outside-the-human) component: in the software case, understanding the precise input/output behavior of a component isn't synonymous with having 'solved' that part of the problem; you also have to implement the code. But the way in which the existence of the remaining problem leads to increased complexity from the perspective of the team working on the bottom-left part -- and that's the key point -- seems perfectly analogous.

I've updated downward on how domain-specific I think FC is throughout writing the sequence, but I don't have strong evidence on that point. I initially began by thinking and writing about exactly this, but the results were not super impressive, and I eventually decided to exclude them entirely. Everything in the current version of the sequence is domain-general.

I would not spend $500 on such an event because an event held by my local rationality community doesn't seem very important to me. You may have a different opinion about your $500 and your local rationality community and that's fine.

We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s).

I think this is a good way of putting it. Many people in the debate refer to "regulation". But in practice, regulation is not very effective for weaponry. If you look at how the international community handles dangerous weapons like nuclear weapons, there are many cases of assassination, bombing, and war being used to prevent their spread. This is what it would look like if the world were convinced that A…

Answer by Kevin Lacker

If the government is going to mandate something, it should also pay for it.

This isn't really how government mandates work. The government mandates that you wear seat belts in cars, but it doesn't pay for seat belts. The government mandates that all companies going public follow the SEC regulations on reporting, but it doesn't pay for that reporting to happen. The government mandates that restaurants regularly clean up the floor, but it doesn't pay for janitors. The government mandates that you wear clothes in public, but it doesn't buy you clothes. Etc, etc.

So I think your intuition is simple, but it largely does not map to reality.

Logan Zoellner
Yep. This is definitely not how it's done in the "real world". In the "seat belts" example, this would involve replacing a law mandating seat belts with a (presumably high) tax on selling vehicles without seat belts, set to equal the economic/social benefits of seat belts. I think, as a matter of pragmatism, there are cases where an outright ban is more or less reasonable than trying to determine the appropriate tax. For example, I don't think anyone thinks that the "social cost" of dumping nuclear waste into a river is something we actually want to contemplate.

Now, for the rats, there’s an evolutionarily-adaptive goal of "when in a salt-deprived state, try to eat salt". The genome is “trying” to install that goal in the rat’s brain. And apparently, it worked! That goal was installed! And remarkably, that goal was installed even before that situation was ever encountered!

I don't think this is remarkable. Plenty of human activities work this way, where some goal has been encoded through evolution. For example, heterosexual teenage boys often find teenage girls to be attractive and want to get them naked, even befo…

Well, the brain does a lot of impressive things :-) We shouldn't be less impressed by any one impressive thing just because there are many other impressive things too.

Anyway, I wrote this blog post last year, where I went through a list of universal human behaviors and tried to think about how they could work. I've learned more since writing it, and I think I got some of the explanations wrong, but it's still a good starting point.

What about sexual attraction?

Without getting into too much detail, I would say that sexual attraction involves the same "superv…

Raemon
The interesting thing is that unlike nipple-shaped objects, levers that produce saltwater don't exist in the ancestral environment.
Answer by Kevin Lacker

It just really depends on what the project is. If there were some generic way to evaluate all $500 donations, then some centralized organization would be doing that already. You have to use your own personal, human judgment.

ChristianKl
Let's look at the example of your local rationality community wanting to host a big event. It seems like there's no room easily available for that size, but you could spend $500 on room rent for the event. Given that you have a better understanding of your local community, and of the value the event would provide it, than someone who evaluates grants at LTFF does, it would be a very inefficient process for such a project to be funded by you giving money to LTFF and then LTFF reading a grant review for the project and deciding, based on that review and without knowledge of the local community, that the project is worth funding. How would you go about using your judgment in that case?
Alexei
This seems like a cop out answer, but I wholeheartedly agree.

You can change the world, sure, but not by making a heartfelt appeal to the United Nations. You have to be thoughtful, which means you pick tactics with some chance of success. Appealing to stop AI work is outside the realm of political possibility right now.

To solve problems of mimicking a function from provided inputs and outputs, the first algorithm I would use is this:

For each possible program of length less than X, run it on the inputs for time Y, then measure how close its results come to the provided outputs. The closest program is your model.

This takes time O(Y*2^X), so it's impractical in the world we live in, but in this hypothetical world it would work pretty well. This only solves the "classification" or "modeling" type of machine learning problem, rather than reinforcement learning per se, but that…
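To make this concrete, here is a minimal sketch in Python. It swaps in a toy four-instruction language for "all possible programs," so the instruction set, step budget, and error metric are my own illustrative choices rather than anything specified above, but the shape of the search is the same: enumerate every program up to a length bound, run each one with a time cutoff, and keep whichever comes closest to the target outputs.

```python
from itertools import product

# Toy instruction set standing in for "all possible programs of length < X".
OPS = ["inc", "dec", "double", "square"]

def run(program, x, max_steps=100):
    """Run a program (a tuple of ops) on input x; the step budget plays
    the role of "time Y"."""
    for step, op in enumerate(program):
        if step >= max_steps:
            break
        if op == "inc":
            x += 1
        elif op == "dec":
            x -= 1
        elif op == "double":
            x *= 2
        elif op == "square":
            x *= x
    return x

def fit(inputs, outputs, max_len=6):
    """Enumerate every program of length < max_len ("length X") and return
    the one whose outputs are closest (total absolute error) to the targets."""
    best, best_err = None, float("inf")
    for length in range(max_len):
        for program in product(OPS, repeat=length):
            err = sum(abs(run(program, x) - y) for x, y in zip(inputs, outputs))
            if err < best_err:
                best, best_err = program, err
    return best

# Learn f(x) = 2x + 1 from three input/output pairs.
print(fit([1, 2, 3], [3, 5, 7]))  # -> ('double', 'inc')
```

The loop visits on the order of |OPS|^max_len programs, which is the O(Y*2^X) blowup described above (with four instructions in place of two symbols) - it's only in the hypothetical fast-computer world that this kind of search becomes viable.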

It seems like there is very little chance that you could politically stop work on AGI. Who would you appeal to? China, the U.S., and the United Nations would all have to agree, and there just isn't anywhere near the political consensus necessary for it. The general consensus is that any risk is very small - that it's the sort of thing only a few weirdos on the internet worry about. That consensus would have to change before it makes any sense to ask questions like whether we should postpone all AGI research indefinitely. I think we need to accept that worry about AGI is a fringe belief, and therefore we should pursue strategies that can have an impact despite it being a fringe belief.

otto.barten
I think a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.

Isn't this what real estate developers do? You buy up some land somewhere that is a combination of inexpensive and desirable, like Dallas or Jacksonville. Then you attract people by building housing there. The land is cheap; you just have to move to the exurbs in a red state. You can set up a homeowners association with a wide variety of rules, although I suspect the optimal ones may be closer to the way homeowners associations currently operate than to the decisionmaking procedures you propose here. But it has been a flexible enough process to build things like carless cities or cities aimed at the needs of senior citizens.

Answer by Kevin Lacker

You could probably make drastic improvements in AI, because you could do extremely expensive things like modeling any function by searching for the shortest program that reproduces it - minimizing its Kolmogorov complexity. I bet you could develop superhuman AI within a day, if given access to a computer with a 2^2^100 clock speed.
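One standard way to make "minimizing Kolmogorov complexity" precise - this formalization is mine, the comment doesn't spell it out - is to pick the shortest program that reproduces the observed input/output pairs on some fixed universal machine U:

```latex
% Shortest-program objective, for a fixed universal machine U and
% observed input/output pairs (x_i, y_i); |p| is program length in bits.
\hat{p} \;=\; \operatorname*{arg\,min}_{\,p \;:\; U(p,\,x_i)\,=\,y_i \;\forall i}\; |p|
```

The length of that program approximates the conditional Kolmogorov complexity of the outputs given the inputs, and the exhaustive search sketched earlier is exactly the (astronomically expensive) way to find it.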

Yitz
You're a lot braver than me! I'd be absolutely terrified of trying to create anything anywhere near superhuman AI (as in AGI, of course; I'd be fine with trying to exceed humans on things like efficient protein folding and that sort of stuff), due to the massive existential risk from AGI that LessWrong loves talking about in every other post. Personally, I would wait for the world's leading AI ethics experts' unanimous approval before trying anything like that, and that only after at least a few months of thorough discussion. An exception might be if I was afraid the laptop would fall into the hands of bad actors, in which case I'd probably call up MIRI and just do whatever they tell me to do as fast as humanly possible.

I do agree with you, though; it probably would be perfectly possible to develop superhuman AI within a day, given such power. It is worth asking what sort of algorithm you might use, and perhaps more importantly, what you would define as the "win condition" for your program. Going for something like a massively larger version of GPT-3 would probably pass the Turing test with relative ease, but I'm not sure it would be capable of generating smarter-than-human insight, since it will only attempt to resemble what already exists in its training data. How would you go about it, if you weren't terribly concerned about AI safety?

I started the process of accepting this free money, but once I read enough of the fine print, I bailed. Basically, if PredictIt takes a 5% cut of my withdrawals, then I'm going to lose money if it turns out I can't get enough of this contract. It feels more like even-money gambling: I lose if the PredictIt market happens to lose liquidity by the time they verify me, and I win if I actually fill out all the forms in time. If there were more money at stake, maybe I'd work a bit harder to jump through all the hoops, but it just isn't worth it to riskily earn $20.
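To see how the 5% withdrawal fee can swallow a small edge, here is a rough back-of-the-envelope calculation. The stake sizes are hypothetical, and I'm assuming for simplicity that the fee applies to the full withdrawal amount:

```python
def net_profit(stake, gross_gain=20.0, withdrawal_fee=0.05):
    """Net result of depositing `stake`, gaining a flat `gross_gain`,
    and paying the withdrawal fee on the whole amount withdrawn."""
    withdrawal = stake + gross_gain
    return withdrawal * (1 - withdrawal_fee) - stake

for stake in (100, 380, 500):
    print(f"stake ${stake}: net ${net_profit(stake):+.2f}")
# stake $100: net $+14.00
# stake $380: net $+0.00
# stake $500: net $-6.00
```

Under these assumptions, a flat $20 gain breaks even at a $380 stake and turns into a loss above it - hence the "even-money gambling" feel.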

Answer by Kevin Lacker

The most important social technology we have is government. Government has a very important role: to prevent members of the same country from killing each other. The biggest flaw in our current government technology is that we have no good system for coming to agreement among the 200-ish countries in the world. We have had the UN for the past 75 years, but there have still been plenty of wars and plenty of fears of nuclear war, and there's still a good chance of nuclear war in the future. Nations still spend large amounts of money on armies that are essent…