BaconServ

Do you find yourself refusing to yield in the latter case but not the former? Or is this observation of mutually unrelenting parties purely external?

If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.

Where to start depends highly on where you are now. Would you consider yourself socially average? Which culture are you from, and what context or situation are you most immediately seeking to optimize? Is this for your occupation? Do you want more friends?

I'll assume you meant for the comment section to be used to convince you. Not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what would normally be a trivial issue where you hold no real preference. Maybe it would be better to do this over private messages, maybe not; there's a general ambient utility to making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking for here isn't merely to be convinced, but to be happy with whatever decision you make. Ten months' worth of payment for the relief of not having to pay an entirely useless cost every month, plus whatever more immediate utility will accompany that "extra" $50/month. If $50 doesn't buy much immediate utility for you, then a compelling argument needs to encompass an in-depth discussion of trivial things. It would mean having to know precise information about what you actually value, or at the very least an accurate heuristic for how you feel about trivial decisions. As it stands, you feel the $50/month investment is worth it for a very narrow type of investment: cryonics.

This is simply restating the knowns in a particular format, but it emphasizes what the core argument needs to be here: either that the investment harbors even less utility than $50/month can buy, or that there are clearly superior investments you can make at the same price.

Awareness of just how deeply confirmation bias is entrenched in the brain (despite any tactics you might suspect would uproot it) should readily show that convincing you there are better investments to make (and therefore to stop making this particular one) is the route most likely to produce payment. Of course, this undermines the nature of the challenge: a reason to not invest at all.

In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?

Is there something wrong with the state of climate change discourse in the world today? Yes, it's hotly debated by millions of people, a supermajority of them entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in "green" and alternative energy if not for the publicity surrounding climate change?

It's easy to look back after the fact and say, "The market handled it!" But the truth is that the publicity and the corresponding opinions of thousands of entrepreneurs are part of that market.

Looking at the two markets:

  1. MIRI's warning of uFAI is popularized.
  2. MIRI's warning of uFAI continues in obscurity.

The latter just seems a ton less likely to mitigate uFAI risks than the former.

It could be useful to attach a note along the lines of "If you didn't like/agree with the contents of this pamphlet, please tell us why at..." to any given pamphlet.

Personally, I'd find it easier to just look at the contents of the pamphlet with the understanding that 99% of people will ignore it, and see whether a second draft has the same flaws.

That would probably upset many existing Christians. Clearly Jesus' second coming is in AI form.

How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?

NSA spying isn't a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it's just against NSA spying doesn't seem like an effective approach. The intent of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI built by the NSA, or any government organization for that matter, is made friendly to the best of its creators' abilities. The NSA doesn't need to be mentioned in the uFAI chain mail in order for any NSA AI projects to be forced to comply with friendliness principles.

If you want to do something, you can earn to give and donate money to MIRI.

That is not a valid path if MIRI is willfully ignoring valid solutions.

You don't get points for pressuring people to address arguments. That doesn't prevent a uFAI from killing you.

It does if the people being pressured to address those arguments come to learn and accept the danger of unfriendliness in the process.

We probably don't have to solve it in the next 5 years.

Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI's warning isn't properly heeded?

Letting plants grow their own pesticides to kill the things that eat them sounds to me like a bad strategy if you want healthy food.

Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn't someone in the field be more aware of that and other potential dangers, despite the GE FUD they've no doubt encountered outside of academia? It seems like the FUD should just motivate them to understand the risks even more, if for no other reason than to correct people's misconceptions on the issue.

Your reasoning for why the "bad" publicity would have severe (or any notable) repercussions isn't apparent.

If you have a lot of people making bad arguments for why uFAI is a danger, smart MIT people might just say, "Hey, those people are wrong; I'm smart enough to program an AGI that does what I want."

This just doesn't seem very realistic when you consider all the variables.
