Lumifer comments on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities - Less Wrong

Post author: KatjaGrace, 16 September 2014 01:00AM


Comment author: Lumifer, 07 October 2014 12:41:11AM, 1 point

> I suspect GA is inferior to hillclimbing with multiple random starts in most domains

Simulated annealing is a related class of optimizers with interesting properties.
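For illustration, here is a minimal simulated-annealing sketch, framed as minimization (the textbook "energy" convention). The function names, step size, and geometric cooling schedule are my own illustrative choices, not something from the thread:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, rng=None):
    """Minimize f from x0: always accept downhill moves, and accept
    uphill moves with probability exp(-delta/T), cooling T geometrically."""
    rng = rng or random.Random(0)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # Uphill moves are accepted early (high T), which lets the search
        # escape local minima; as T shrinks they become rare.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

The tolerated uphill moves are exactly what plain hill climbing lacks: early in the run the search can hop out of a small basin and wander toward a better region before the temperature freezes it in place.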

As to standard hill-climbing with multiple starts, it fails in the presence of a large number of local optima. If your error landscape consists of many small hills, each restart will take you to the top of the nearest small hill, but you might never reach that large range in the corner of your search space.
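A sketch of what the comment describes, framed as maximization to match the "top of the nearest hill" metaphor; all names and parameters here are illustrative assumptions:

```python
import random

def hill_climb(f, x0, step=0.1, iters=1000):
    """Greedy local search: move to a neighbor only if it increases f."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc > fx:
                x, fx = cand, fc
                break
        else:
            break  # neither neighbor improves: stuck on a local hilltop
    return x, fx

def random_restarts(f, n_starts, lo, hi, rng=None):
    """Run hill_climb from n_starts random points; keep the best hilltop."""
    rng = rng or random.Random(0)
    climbs = [hill_climb(f, rng.uniform(lo, hi)) for _ in range(n_starts)]
    return max(climbs, key=lambda r: r[1])
```

Each restart ends on the hilltop nearest its starting point, so the probability of ever seeing the global optimum is roughly the fraction of the search space occupied by its basin of attraction; when that basin is a small corner among thousands of small hills, no reasonable number of restarts helps.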

In any case, most domains have peculiarities that make certain search algorithms perform well and others perform badly. Often enough, domain-specific tweaks can improve things greatly compared to the general case...