Implementation of Artificial Intelligence: Advanced Search Algorithms
Very simple idea: start from some state s, move to a neighbor t with a better score. Repeat.
• (Vaguely) Problems tend to have structure: a small change produces a neighboring state.
• The neighborhood must be small enough for efficiency.
• Designing the neighborhood is critical. This is the real ingenuity, not the decision to use hill climbing.
Which neighbor to move to? The best one (greedy).
When no neighbor is better? Stop. (Doh!)
1. Pick initial state s.
2. Pick t in neighbors(s) with the largest f(t).
3. IF f(t) <= f(s) THEN stop, return s.
4. s = t. GOTO 2.
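A minimal Python sketch of this greedy loop. Here neighbors(s) and f(s) are assumed, problem-specific callables (illustrative names), and f is being maximized:

import random

def hill_climb(initial, neighbors, f):
    # Greedy hill climbing: repeatedly move to the best-scoring neighbor.
    # neighbors(s) and f(s) are assumed, problem-specific callables.
    s = initial
    while True:
        candidates = list(neighbors(s))   # step 2: enumerate the neighborhood
        if not candidates:
            return s
        t = max(candidates, key=f)        # step 2: neighbor with the largest f(t)
        if f(t) <= f(s):                  # step 3: no improvement -> stop, return s
            return s
        s = t                             # step 4: move and repeat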
1. Pick initial state s.
2. Randomly pick t in neighbors(s).
3. IF f(t) is better THEN accept s = t.
4. ELSE /* t is worse than s */ accept s = t with a small probability.
5. GOTO 2 until bored.
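A sketch of this randomized variant under the same assumptions. The acceptance probability p_accept and step budget max_steps stand in for "a small probability" and "until bored"; both are illustrative parameters, not values from the notes:

def randomized_hill_climb(initial, neighbors, f, p_accept=0.05, max_steps=10_000):
    # Randomized hill climbing: random neighbor, occasionally accept a worse move.
    s = initial
    for _ in range(max_steps):                 # step 5: "until bored"
        t = random.choice(list(neighbors(s)))  # step 2: random neighbor
        if f(t) > f(s):                        # step 3: better -> accept s = t
            s = t
        elif random.random() < p_accept:       # step 4: worse -> accept with small probability
            s = t
    return s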
current = Initial-State(problem)
for t = 1 to infinity do
- T = Schedule(t) ; // T is the current temperature, which is monotonically decreasing with t
- if T=0 then return current ; //halt when temperature = 0
- next = Select-Random-Successor-State(current)
- deltaE = f(next) - f(current) ; // If positive, next is better than current. Otherwise, next is worse than current.
- if deltaE > 0 then current = next; // always move to a better state
- else current = next with probability p = exp(deltaE / T); // as T -> 0, p -> 0; as deltaE -> -infinity, p -> 0
end
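A compact Python rendering of this loop, again for maximization. The geometric cooling schedule, the cutoff T_min (standing in for "temperature = 0"), and the random_successor callable are illustrative choices, not part of the pseudocode above:

import math

def simulated_annealing(initial, random_successor, f, T0=1.0, alpha=0.995, T_min=1e-4):
    current = initial
    T = T0
    while T > T_min:                             # treat T <= T_min as "temperature = 0": halt
        nxt = random_successor(current)          # Select-Random-Successor-State(current)
        deltaE = f(nxt) - f(current)             # positive means nxt is better than current
        if deltaE > 0:
            current = nxt                        # always move to a better state
        elif random.random() < math.exp(deltaE / T):
            current = nxt                        # move to a worse state with probability exp(deltaE / T)
        T *= alpha                               # Schedule(t): geometric cooling, decreasing with t
    return current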
• The cooling scheme is important.
• Neighborhood design is the real ingenuity, not the decision to use simulated annealing.
• Not much to say theoretically
• With infinitely slow cooling rate, finds global optimum with probability 1.
• Proposed by Metropolis et al. in 1953, based on the analogy that alloys manage to find a near-global minimum-energy state when annealed slowly.
• Easy to implement.
• Try hill-climbing with random restarts first! (See the sketch below.)
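A brief sketch of random restarts, reusing the hill_climb sketch above; random_state() is an assumed, problem-specific generator of initial states, and the restart count is an illustrative parameter:

def random_restart_hill_climb(random_state, neighbors, f, restarts=20):
    # Run greedy hill climbing from several random initial states
    # and keep the best local optimum found.
    best = None
    for _ in range(restarts):
        s = hill_climb(random_state(), neighbors, f)
        if best is None or f(s) > f(best):
            best = s
    return best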