Let's review this. How many ways are there to make 0 in change, whatever the available coins? There's 1 way. How many ways are there to make some other quantity in change if you have no coins? 0. Finally, how many ways are there to make C in change if you have a list of coins h::t? Partition the problem thus: use the coin h to pay out part of C (obviously you can use coin h anywhere from 0 to C/h times), then make up the remainder of C with the rest of the list. If, to solve each subproblem, you're dividing C by the denomination of a coin, then the size of the problem space is exponential in the number of denominations: (C/denomination_1) * (C/denomination_2) * ... * (C/denomination_n) = O(C^n). You can't exactly tell from this small example, but the rate of growth here is clearly superpolynomial:
let rec countChange coins amount =
    match coins, amount with
    | _, 0 -> 1L
    | [], _ -> 0L
    | h :: t, _ ->
        [ 0 .. h .. amount ]
        |> List.map (fun amt' -> countChange t (amount - amt'))
        |> List.sum
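To make that growth visible, here's a quick sketch of my own (the counter and the name countChangeCounted are mine, not part of the solution): the same naive recursion, instrumented to count how many calls it makes as denominations are added for a fixed amount.

// A sketch of my own: the same algorithm, with a call counter bolted on.
let mutable calls = 0
let rec countChangeCounted coins amount =
    calls <- calls + 1
    match coins, amount with
    | _, 0 -> 1L
    | [], _ -> 0L
    | h :: t, _ ->
        [ 0 .. h .. amount ]
        |> List.map (fun amt' -> countChangeCounted t (amount - amt'))
        |> List.sum

// Watch the call count climb as coins are added for a fixed amount.
for n in 2 .. 6 do
    calls <- 0
    countChangeCounted [ 1 .. n ] 30 |> ignore
    printfn "denominations: %d  calls: %d" n calls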
Anyway, it should be clear why there's no point in waiting for the solution to countChange [ 1..99 ] 100. However, I can memoize the change-counting algorithm, since it's deterministic: for a given amount and a given set of coins, the answer is always the same (commerce would be hopeless otherwise). This turns out to be fairly easy, notwithstanding the two arguments to the function. Here's my answer, starting with a few sample runs (the code itself follows below):
> countChange [ 4; 5 ] 20;;
val it : int64 = 2L
> countChange [ 3; 4; 5 ] 20;;
val it : int64 = 6L
> countChange [ 3; 4; 5; 6 ] 20;;
val it : int64 = 11L
> countChange [2; 3; 4; 5; 6 ] 20;;
val it : int64 = 47L
>
The answer shows up in a few seconds. Since F# lists are immutable, they can easily be compared for reference equality, rather than in O(n^2) time. That is, the initial list [ 99..(-1)..1 ] is a 99 prepended to the list [ 98..(-1)..1 ], and so on. We partition the problem into subproblems corresponding to the head and tail of the list, but no additional lists are created beyond the argument to the function (there's only one list beginning with 50, ever, and it's the tail of the list beginning with 51). Microsoft.FSharp.Collections.List<_>.CompareTo() sees literally the same lists again and again when called from memoize(). Hence the lickety-split performance.
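To make the sharing concrete, here's a little check of my own (the bindings are names I've made up): destructuring h::t hands back the very same tail object each time, so the recursion reuses one spine of cons cells rather than allocating fresh key lists.

// Destructure the same list twice; the two tails are one and the same object.
let coins = [ 99 .. -1 .. 1 ]
let tail1 = match coins with _ :: t -> t | [] -> []
let tail2 = match coins with _ :: t -> t | [] -> []
printfn "%b" (obj.ReferenceEquals(tail1, tail2))   // true

With that in mind, here's the memoizer, followed by the memoized countChange: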
open System.Collections.Generic;;

let memoize f =
    let cache = Dictionary<_, _>()
    fun x y ->
        if cache.ContainsKey((x, y)) then cache.[(x, y)]
        // Remember that f will always consult the memo
        // for subproblems.
        else
            let result = f x y
            cache.[(x, y)] <- result
            result
let rec countChange =
    // Turn this into a function that returns a function.
    // The *returned* function will be evaluated over the arguments.
    let countChange' coins amount =
        match coins, amount with
        | _, 0 -> 1L
        | [], _ -> 0L
        | h :: t, _ ->
            [ 0 .. h .. amount ]
            |> List.map (fun amt' -> countChange t (amount - amt'))
            |> List.sum
    memoize countChange'
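If you want to see the speedup for yourself, F# Interactive's #time directive prints elapsed time after each evaluation. The exact figures will depend on your machine, but the 99-coin problem that was hopeless before now comes back promptly:

#time;;
countChange [ 1..99 ] 100;;
#time;;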
Perhaps this could all be done imperatively, though I don't see how OOP is at all applicable to this sort of classic optimization problem (what could justify a Coin class here?). If you haven't read Daniel Friedman's "The Role of the Study of Programming Languages in the Education of a Programmer," here's your chance. As my buddy George Paci puts it, "The title is clearly written by a professor; a marketing guy would have called it something like, 'Holy Crap! It's Amazing What You Can Do with Program Transformations!'"