Comment author: [deleted] 21 July 2014 01:20:37PM 5 points

Do not buy altcoins. It will not end well.

In response to comment by [deleted] on Look for the Next Tech Gold Rush?
Comment author: oooo 22 July 2014 02:51:39AM *  0 points

Not even Freicoins? Or do you mean weird, un-innovative altcoins that seem to add no real value to the ecosystem? Generally, it's hard to tell, but it seems certain altcoins try hard to differentiate themselves. Monero and Freicoin stand out as sufficiently different that they could turn out to be valuable (as far as altcoins go).

EDIT: Read the rest of the comments and noticed that you explicitly stated Freicoin is a possible exception to the rule. People may also be interested (although may also choose for various reasons not to invest) in other altcoins such as Ethereum (too complicated?), Zerocash (untested moon math?), Counterparty ("let's reduce the size of OP_RETURN and see how XCP reacts!"), Swarm (based on Counterparty) or Darkcoin (ninja premine?).

Comment author: Louie 28 April 2014 03:10:32AM 4 points

Yes. I assume this is why she's collecting these ideas.

Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in".

In general, MIRI isn't in favor of soliciting storytelling about the singularity. It's a waste of time and gives people a false sense that they understand things better than they do, by focusing their attention on highly salient but ultimately unlikely scenarios.

Comment author: oooo 28 April 2014 03:50:33AM *  2 points

OP: >>So MIRI is interested in making a better list of possible concrete routes to AI taking over the world. And for this, we ask your assistance.

Louie: >>Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in".

These two statements contradict each other. If it's true that Katja doesn't speak for all of MIRI on this issue, perhaps MIRI has a PR problem and should provide guidance on how representatives of the organization present public requests. When reading the parent post, I concluded that MIRI leadership was on board with this scenario-gathering exercise.

EDIT: Just read your profile and I realize you actually represent a portion of MIRI leadership. Recommend that Katja edit the parent post to reflect MIRI's actual position on this request.

Comment author: James_Miller 28 April 2014 01:51:26AM *  12 points

The AI could gain control by demonstrating it had hidden pathogens that, if released, would kill almost everyone. As Paul Atreides said, "He who can destroy a thing, controls a thing." Since the technology to make such pathogens probably already exists, the AI could hack into various labs and give instructions to people or machines to make the pathogens, send orders for the pathogens to be delivered to various places, and then erase records of where most of the pathogens were. The AI then blackmails mankind into subservience. Alternatively, the AI could first develop a treatment for the pathogens, then release the pathogens, and then give the treatment only to people who submit to the AI. The treatment would have to be taken regularly and be difficult to copy.

More benevolently, the AI makes a huge amount of money off of financial markets, uses the resources to start its own country, runs the country really, really well and expands citizenship to anyone who joins. Eventually, when the country is strong enough, the AI (with the genuine support of most people) uses military force to take over the world, giving us an AI monarchy.

Or, the AI freely gives advice to anyone who asks. The advice is so good that most people follow it. Organizations and people that follow the advice do much better (and get far more power) than those that don't. The AI effectively gains control of the world. If the AI wants to speed up the process, it only gives advice to people who refuse to interact with organizations that don't listen to the AI.

The AI identifies a group of extremely smart people and tricks them into answering the "hypothetical" question "how could an AI take over the world?"

Comment author: oooo 28 April 2014 03:44:17AM -1 points

Upvoted solely for this sentence fragment: "More benevolently, the AI makes a huge amount of money off of financial markets [...]".

Comment author: XiXiDu 30 January 2014 09:20:38AM *  5 points

By the way, I asked Shane Legg for a follow-up, but he replied that they were not currently doing any media so he's unable to comment further.

Here are the questions I wanted to ask him (maybe he can reply in future):

Q1. Has your opinion about risks associated with artificial general intelligence changed since 2011?

Q2. Can you comment on the creation of an internal ethics board at Google?

Q3. To what extent do people within DeepMind and Google agree with the general position of the Machine Intelligence Research Institute?

Q4. Do you believe that Google will create an artificial general intelligence?

Q5. Do you have any general comments for the LessWrong community regarding Google and their recent acquisition of DeepMind?

Comment author: oooo 30 January 2014 03:51:00PM 1 point

Q6. How much influence will the ethics committee actually have? For example, are there commercial and IP clawback provisions if the committee is deemed to be ignored or sidelined?

Comment author: fubarobfusco 13 January 2014 01:16:25AM 1 point

Montessori education: Good idea? Bad idea? Fish?

Comment author: oooo 13 January 2014 02:38:34AM 0 points

A North American non-Montessori educator (a daycare director) told me that Montessori differs across various parts of the world. I haven't researched this further, and obviously a comment like that can easily be biased or come with an agenda. Still, based on that comment alone, I'm also interested in whether you (Gunnar_Zarncke) considered putting your children through (European) Montessori.

Comment author: James_Miller 27 December 2013 05:18:38PM 1 point

No, although you could pool resources with a friend. If you have less than, say, $100,000 to invest you really, really shouldn't be speculating on Bitcoins.

eBay is your best starting point for finding exotic markets.

Comment author: oooo 27 December 2013 06:24:48PM *  5 points

Unlike real estate, which requires much larger amounts of capital (read: your after-tax savings) to invest in, Bitcoin and other cryptocurrencies let people with only double-digit discretionary income, or less, speculate.

In this manner, speculators/gamblers/investors can gain some experience with actual money and trading. Fees on the cryptocurrency exchanges are rather low, and since cryptocurrencies are divisible to many decimal places, a transaction fee of 0.45% (for example) is still feasible even on sub-$1 trades.

Of course, one could say that play money is just as useful for this sort of practice, but I think there's a cognitive bias at work in how people behave when real rather than imaginary money is at stake, even though the net effect is essentially the same (let's ignore the salient point that just $100 invested in Bitcoin in Jan 2013 would have netted $5000 by Dec 2013, as that needlessly distorts the point).

EDIT: One is no more likely to outguess the Bitcoin market than any other exotic or local real-estate market. However, cryptocurrencies let one cheaply test whether one can outguess the market; real estate offers no cheap way to test your prediction skills.
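The fee arithmetic above can be sketched as follows. The 0.45% rate and the trade sizes are purely illustrative assumptions, not any particular exchange's actual schedule:

```python
# Sketch of proportional exchange fees on very small trades.
# The 0.45% rate is an illustrative assumption, not a real fee schedule.

def trade_fee(trade_usd: float, fee_rate: float = 0.0045) -> float:
    """Fee charged on a trade of `trade_usd` at a proportional rate."""
    return trade_usd * fee_rate

# Because cryptocurrencies are divisible to ~8 decimal places, even a
# sub-$1 trade produces a nonzero, representable fee:
print(f"{trade_fee(0.50):.6f}")   # fee on a $0.50 trade
print(f"{trade_fee(100.0):.2f}")  # fee on a $100 trade, for comparison
```

The point is just that a proportional fee scales down with the trade, so tiny "practice" trades remain economical in a way they would not be with fixed minimum commissions.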

Comment author: oooo 21 December 2013 02:08:37AM *  8 points

"Knowing others is intelligence; knowing yourself is true wisdom. Mastering others is strength, mastering yourself is true power."

-Lao Tzu (c.604 - 531 B.C.)

Comment author: EGarrett 09 December 2013 05:01:31AM -1 points

I've read and seen some really thought-provoking material on ways in which the free market could supposedly do a lot of traditional government roles. There are also sites like judge.me which are testing some of it out, including private contract enforcement and law. So I wouldn't automatically say that government is better at certain things.

What kind of perverse incentives are you concerned with? There is certainly some incentive to do things like using force and deception to get money or resources, but the market also includes a mechanism for punishing this and disincentivizing that type of behavior, and I'd say the same incentive exists in governments.

Comment author: oooo 09 December 2013 05:20:08AM *  0 points

Judge.me was shut down in July 2013, but evidently Net-Arb is another service carrying the Judge.me torch, focusing primarily on internet arbitration.

Comment author: John_Maxwell_IV 05 December 2013 06:31:45AM 2 points

If AI developers are sufficiently concerned about this risk, maybe they could develop AI in a large international consortium?

Comment author: oooo 05 December 2013 09:46:50PM *  2 points

How much would AI developers be willing to sacrifice? They may be sufficiently concerned about this risk, as explained, but motivated and well-funded organizations (or governments) should have no problem attempting to influence, persuade or convert a fraction of AI developers to think otherwise.

I wonder whether global climate change could serve as an analogy here, highlighting what some climate scientists are willing to publish because of funding and other incentives beyond scientific inquiry.

Comment author: JoshuaFox 05 December 2013 09:05:15PM *  4 points

Yes, but fear of a Snowden would make project leaders distrustful of their own staff.

And if many top researchers in the field were known to be publicly opposed to any unsafe project that the agencies are likely to create, it would shrink their recruiting pool.

The idea is to create a moral norm in the community. The norm can be violated, but it would put a crimp in the projects as compared to a situation where there is no such moral norm.

Comment author: oooo 05 December 2013 09:41:16PM 2 points

This presupposes that the AGI community is, on average, homogeneous across the world and would behave accordingly. What if political climates, traditions and culture make certain (powerful) countries less fearful, given their own AGI talent pool?

In other words, if country A distrusts its staff more than country B does, due to political/economic/cultural factors, country A falls behind in the AGI arms race, which leads to the "even if I hold onto my morals, we're still heading into the abyss" attitude. I could see organizations or governments rationalizing against the community's moral pledge in this way, by highlighting the futility of slowing down the research.
