Gavin20

The easiest way is probably to build a modestly sized software company and then find a way to destabilize the government and cause hyperinflation.

I think the rule of thumb should be: if your AI could be intentionally deployed to take over the world, it's highly likely to do so unintentionally.

Answer by Gavin-10

I was able to get this one on my first try:

---

Prompt: I am teaching my son a simple logic problem. "All blargs are schminky," I explain. "But not all schminkies are blarg."

GPT-3: "What's the difference?" he asks, confused. "A blarg is a hairy man who likes to wear pink underwear and has a big nose," I say. "And a schminky is an ugly person with no sense of fashion or hygiene."

---

Really impressive! Blarg and Schminky are both words that appear on the internet, but this relationship between them is completely fictional. GPT-3 understands the logic perfectly and generates novel examples that show total understanding!

I then re-rolled several times, and got a bunch of nonsense. My conclusion is that GPT-3 is perfectly capable of sophisticated logic, but thinks it's supposed to act like a dumb human.

Gavin20

A few plausible limited abilities that could provide decisive first-mover advantages:

  • The ability to remotely take control of any networked computer
  • The ability to defeat all conventional cryptography, which would provide a decisive advantage in the kind of conflict we're currently seeing
  • The ability to reliably predict market price movements

Gavin20

One way to employ Space Mom might be in deciding how confidently to believe expert consensus, particularly given that experts rarely state their confidence levels. For instance:

A. Expert consensus says that horoscopes are bunk. I believe it! I have a tight confidence interval on that.

B. Expert consensus says that hospitals provide significant value. I believe that too! But thanks to Robin Hanson, I'm less confident in it. Maybe we're mostly wasting our healthcare dollars? Probably not, but I'll keep that door open in my mind.
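
One way to make "tight vs. wide confidence interval" concrete is to treat each belief as a credence with a credible interval. A minimal sketch, assuming a Beta-distribution model of the evidence; the pseudo-counts below are invented purely for illustration:

```python
# Treat each belief as a Beta posterior over "the claim is true";
# more (and more consistent) evidence gives a tighter interval.
from scipy.stats import beta

def credible_interval(pro, con, level=0.95):
    """95% credible interval for P(claim) under a Beta(pro+1, con+1) posterior."""
    return beta.interval(level, pro + 1, con + 1)

# A: "horoscopes are bunk" -- lots of consistent evidence, tight interval.
print(credible_interval(pro=998, con=2))   # roughly (0.99, 1.00)

# B: "hospitals provide value" -- Hanson-style doubts act like conflicting
# evidence, so the interval is wider even though the mean stays high.
print(credible_interval(pro=40, con=10))   # roughly (0.67, 0.89)
```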

---

Separately, I think the frustrating thing about Hanson's piece was that he seemed to be making an isolated demand for rigor: that Eliezer prove, in an absolute sense, that he knows he is more rational than average before he gets his "disagreement license."

"You could be deceiving yourself about having valid evidence or the ability to rationally consider it" is a fully general argument against anything, and that's what it felt like Hanson was using. In particular because Eliezer specificially mentioned testing his calibration against the real world on a regular basis to test those assumptions.

Gavin30

Isn't this true in a somewhat weaker form? It takes individuals and groups putting in effort at personal risk to move society forward. The fact that we are stuck in inadequate equilibria is evidence that we have not progressed as far as we could.

Scientists moving from Elsevier to open access happened because enough of them cared enough to put in the effort and take the risk to their personal success. If they had cared a little bit more on average, it would have happened earlier. If they had cared a little less, maybe it would have taken a few more years.

If humans had 10% more instinct for altruism, how many more of these coordination problems would already be solved? There is a deficit of caring about solving civilizational problems. That doesn't change the observation that most people are reacting to their own incentives, and we can't really blame them.

Gavin60

Similar to some of the other ideas, but here are my framings:

  1. Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in a colonized one. Thus, it shouldn't be too surprising that our little pocket of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around it become colonized while it cannot develop fast enough to catch up.

  2. A Dyson-sphere-level intelligence knows basically everything; there is a limit to knowledge and power, and it can be approached. Once a species has achieved a certain level of power, it simply doesn't need to continue expanding to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns, and other values or goals counterbalance any tiny remaining desire to expand.

Gavin30

My real solution was not to own a car at all. Feel free to discount my advice appropriately!

Gavin30

I don't have the knowledge to give a full post, but I absolutely hate car repair. And if you buy a used car, there's a good chance that someone is selling it because it has maintenance issues. This happened to me, and no matter how many times I took the car to the mechanic it just kept having problems.

On the other hand, new cars have a huge extra price tag just because they're new. So the classic advice is to never buy a new car, because the moment you drive it off the lot it loses a ton of value instantly.

Here are a couple ideas for how to handle this:

  1. Buy a car that's just off a 2- or 3-year lease. It's probably in great shape and is less likely to be a lemon. There are companies that only sell off-lease cars.

  2. Assume a lease that's in its final year (at http://www.swapalease.com/lease/search.aspx?maxmo=12, for example). That gives you a trial period of 4-12 months, with the option to buy the car at the end, so you'll know whether you like the car and whether it has any issues. The important thing to check is that the "residual price" they charge for buying the car is reasonable; a rough sketch of that check follows below. See this article for more info: http://www.edmunds.com/car-leasing/buying-your-leased-car.html
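
Here's a minimal sketch of that residual-price sanity check. The numbers and the fee estimate are made up for illustration; plug in the residual from the lease contract and a market estimate from a pricing guide:

```python
def buyout_verdict(residual_price, market_value, fees=300.0):
    """Compare the contractual lease buyout cost to the car's used-market value."""
    total_cost = residual_price + fees   # residual plus purchase-option fees
    savings = market_value - total_cost
    if savings > 0:
        return f"Buying saves about ${savings:,.0f} versus the open market"
    return f"Buying overpays by about ${-savings:,.0f}"

# Hypothetical numbers: a $14,500 residual on a car worth $16,000 used.
print(buyout_verdict(residual_price=14_500, market_value=16_000))
```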

There are a ton of articles out there on how to negotiate a car deal, but one suggestion that might be worth trying is to negotiate, leave, and then come back the next day to make the purchase. In the process of walking out, you'll probably get the best deal they're going to offer. You can always come back ten minutes later and buy; they're not going to mind, and the deal isn't going to expire (even if they say it will).

Gavin140

This seems like a lot of focus on MIRI sending good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.

The things su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create a type of organization that doesn't currently exist, one focused on much longer-term goals. If you measure organizations by how many publications they produce, you're going to get a lot of low-quality publications. Citations are only slightly better, especially in neglected areas of research.

If you have outside-view criticisms of an organization and you're suddenly put in charge of it, the first thing you have to do is check the newly available inside-view information and see what's really going on.

Gavin20

You might want to examine what sort of in-group/out-group dynamics are at play here, as well as some related issues. I run into these things frequently; the best defense mechanism for me is to examine where the feelings originally come from and why certain ideas feel so threatening.

Some questions that you can ask yourself:

  1. Are these claims (or their claimants) subtly implying that I am in a group of "the bad guys"?
  2. Is part of my identity wrapped up in the things that these claims are against?
  3. Do I have a gut instinct that the claims are being made in bad faith or through motivated reasoning?
  4. If I accept these claims as true, would I need to dramatically reevaluate my worldview?
  5. If everyone accepted these claims as true, would the world change in a way that I find threatening or troubling?

None of these will refute the claims, but they may help you understand your defensiveness.

I find it helpful to remind myself that I don't need to have a strongly held opinion on everything. In fact, it's good to be able to say "I don't really know" about all the things you're not an expert in.
