Comment author: Lumifer 10 March 2016 10:15:12PM *  4 points

The micro capabilities of the AI could be limited

It's going to be a mess. Even if you, say, limit the AI's click-per-minute rate, it still has serious advantages. It knows exactly how many fractions of a second these units can stay in range of enemy artillery and still be able to pull back and recover. It knows whether those units will arrive in time to reinforce the defense or whether they'll be too late, in which case it should do something else instead.

Build choice is not all that complicated, and with tactics you run right into micro.

Comment author: Furcas 10 March 2016 10:19:27PM 1 point

Human-like uncertainty could be inserted into the AI's knowledge of those things, but yeah, as you say, it's going to be a mess. Probably best to pick another kind of game to beat humans at.
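One hypothetical way to "insert" that uncertainty is to blur the AI's precise game-state estimates with noise before it acts on them. This is only an illustrative sketch; the function name and the multiplicative-Gaussian noise model are assumptions, not anything proposed in the comments:

```python
import random

def humanize_estimate(true_value, relative_error=0.1, rng=random):
    """Blur a precise game-state quantity (e.g. the seconds a unit
    can safely stay in artillery range) with multiplicative Gaussian
    noise, mimicking a human player's imprecise judgment.

    relative_error is the standard deviation as a fraction of the
    true value; 0.1 means roughly +/-10% typical error.
    """
    return true_value * rng.gauss(1.0, relative_error)
```

A more faithful handicap would also add reaction latency and occasional misreads, but even simple observation noise forces the AI to leave safety margins the way a human does.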

Comment author: Lumifer 10 March 2016 10:00:13PM 3 points

RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.

I don't expect to see highly sophisticated AI in games (at least adversarial, battle-it-out games) because there is no point. Games have to be fun, which means that the goal of the AI is to gracefully lose to the human player after making him exert some effort.

You might be interested in Angband Borg.

Comment author: Furcas 10 March 2016 10:04:23PM 2 points

RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.

The micro capabilities of the AI could be limited so they're more or less equivalent to a human pro gamer's, forcing the AI to win via build choice and tactics.
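One way such a cap might be implemented is as a rolling-window action budget. A minimal sketch, assuming a ceiling of around 300 actions per minute (a figure often cited for pro RTS players); the class and method names are invented for illustration:

```python
import time
from collections import deque

class ActionRateLimiter:
    """Cap an AI's actions per minute (APM) over a rolling window.

    Actions are allowed until the number accepted in the last
    `window_seconds` reaches `max_apm`; further actions are refused
    until older ones age out of the window.
    """

    def __init__(self, max_apm=300, window_seconds=60.0, clock=time.monotonic):
        self.max_apm = max_apm
        self.window = window_seconds
        self.clock = clock          # injectable for deterministic testing
        self.timestamps = deque()   # times of recently accepted actions

    def try_act(self):
        """Return True if an action is allowed right now, else False."""
        now = self.clock()
        # Discard actions that have aged out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now)
            return True
        return False
```

A raw APM cap alone wouldn't fully level the field, for the reasons discussed above: the AI could still spend its limited actions with perfect timing and perfect information.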

Comment author: turchin 05 March 2016 11:27:46PM *  0 points

because of what?

Something like: we don't exist at all because we are Boltzmann brains?

Comment author: Furcas 06 March 2016 02:21:35AM 1 point

I think Jim means that if minds are patterns, there could be instances of our minds in a simulation (or more!) as well as in the base reality, so that we exist in both (until the simulation diverges from reality, if it ever does).

Comment author: Lumifer 22 February 2016 08:42:37PM 11 points

LW might find that interesting:

I'm becoming a Christian, not just one who occasionally went to church as a kid, but a real one that believes in Christ, loving God with all my heart, etc.

Most ex-atheists who become deists turn to Buddhism, so I thought I'd be clear why they are all wrong (Robert Wright!). I'd like to thank Mencius Moldbug, Deirdre McCloskey, Mike Behe, Tim Keller (four names probably never listed in sequence ever), and hundreds more...Below are snippets (top and bottom) from my Christian apology: I came to Christ via rational inference, not a personal crisis.

Comment author: Furcas 23 February 2016 06:23:25PM 1 point

Well, if nothing else, this is a good reminder that rationality has nothing to do with articulacy.

Comment author: ArisKatsaris 02 February 2016 12:21:20AM 1 point

Online Videos Thread

Comment author: Furcas 02 February 2016 02:48:34AM *  2 points

I strongly recommend JourneyQuest. It's a very smartly written and well-acted fantasy web series. It starts off mostly humorous but quickly becomes more serious. I think it's the sort of thing most LWers would enjoy. There are two seasons so far, with a third one coming in a few months if the Kickstarter succeeds.

https://www.youtube.com/watch?v=pVORGr2fDk8&list=PLB600313D4723E21F

Comment author: Tem42 13 December 2015 10:05:44PM 0 points

Pretty much the same sort of life as makes the death notable.

Comment author: Furcas 13 December 2015 11:30:31PM 1 point

The person accomplished notable things?

Comment author: Furcas 11 December 2015 09:11:47PM *  3 points

World's first anti-ageing drug could see humans live to 120

Anyone know anything about this?

The drug is metformin, currently used for Type 2 diabetes.

Comment author: Furcas 05 November 2015 06:35:10AM *  5 points

You have understood Loosemore's point, but you're making the same mistake he is. The AI in your example would understand the intent behind the words "maximize human happiness" perfectly well, but that doesn't mean it would want to obey that intent. You talk about learning human values and internalizing them as if those things naturally go together. The only way that value internalization naturally follows from value learning is if the agent already wants to internalize those values; figuring out how to do that is (part of) the Friendly AI problem.

Comment author: Furcas 27 August 2015 06:57:53PM 14 points

I donated $400.

Comment author: ZoltanBerrigomo 06 July 2015 12:28:08AM 7 points

Hmm, on second thought, I added a [/parody] tag at the end of my post - just in case...

Comment author: Furcas 06 July 2015 02:12:38AM 4 points

My cursor was literally pixels away from the downvote button. :)
