
Saturday, June 4, 2016

Coursera Approximation Algorithms Part I - Peer Assignment 2

Problem:

(Statement reconstructed from the solution below.) We have three items with values $ v = (2.1, 2, 4.4) $ and sizes $ s = (1, 0.5, 2) $, and a bag of capacity $ 1 $; we may take any fraction $ p_i \in [0, 1] $ of each item. (1) Find the assignment that maximizes the total value. (2) Propose a greedy algorithm for the general problem. (3) Prove that it is optimal. (4) State its running time.
Solution:

(1)

Note that the total value is $ p_1 v_1 + p_2 v_2 + p_3 v_3 = 2.1 p_1 + 2p_2 + 4.4 p_3 $, which is what we want to maximize.
Next, we need to honor the space constraint, which is $ p_1 s_1 + p_2 s_2 + p_3 s_3 = p_1 + 0.5 p_2 + 2 p_3 \le 1 $.

Of course we also have $ p_1, p_2, p_3 \in [0, 1] $, which gives a linear programming problem (maximize the value subject to these constraints).
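Putting it all together, the linear program is

$$ \begin{aligned} \text{maximize} \quad & 2.1 p_1 + 2 p_2 + 4.4 p_3 \\ \text{subject to} \quad & p_1 + 0.5 p_2 + 2 p_3 \le 1 \\ & 0 \le p_1, p_2, p_3 \le 1 \end{aligned} $$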

The linear programming solver conveniently gives the answer we need:

$ p_1 = 0 $
$ p_2 = 1 $
$ p_3 = 0.25 $
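A minimal sketch that reproduces this with scipy's linprog (my choice of solver; any LP solver would do):

```python
# maximize 2.1 p1 + 2 p2 + 4.4 p3  subject to  p1 + 0.5 p2 + 2 p3 <= 1
# and 0 <= p_i <= 1.  linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

res = linprog(c=[-2.1, -2.0, -4.4],      # negated item values
              A_ub=[[1.0, 0.5, 2.0]],    # item sizes
              b_ub=[1.0],                # bag capacity
              bounds=[(0, 1)] * 3)
print(res.x)     # [0.  1.  0.25]
print(-res.fun)  # 3.1
```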

Just to verify: the value is
$  v = 2.1 p_1 + 2p_2 + 4.4 p_3 =  2.1 (0) + 2 (1) + 4.4 (0.25) = 3.1 $
and the space used is
$  s = p_1 + 0.5 p_2 + 2 p_3 = (0) + 0.5 (1) + 2 (0.25) = 1 \le 1 $.

So that's our solution.

(2)

Note that we pick the maximum possible fraction for item 2. This makes sense because item 2 is light and has high value per unit of space. This leads to the following greedy algorithm (a code sketch follows the steps):

Sort all the items from high to low by the density $ \frac{v}{s} $.
Fill the bag with the items in this order, taking the maximum possible fraction (i.e. 1) whenever the whole item fits, and the largest fraction that still fits otherwise.
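A minimal sketch of this greedy algorithm (function and variable names are my own):

```python
def fractional_knapsack(values, sizes, capacity):
    n = len(values)
    # Consider items in decreasing order of value density v/s.
    order = sorted(range(n), key=lambda i: values[i] / sizes[i], reverse=True)
    fractions = [0.0] * n
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        # Take the whole item if it fits, otherwise the fraction that fits.
        fractions[i] = min(1.0, remaining / sizes[i])
        remaining -= fractions[i] * sizes[i]
    return fractions

# On the instance above (densities 2.1, 4 and 2.2):
print(fractional_knapsack([2.1, 2.0, 4.4], [1.0, 0.5, 2.0], 1.0))
# [0.0, 1.0, 0.25]
```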

(3)

Suppose, to the contrary, that the algorithm does not produce an optimal assignment; let the algorithm's assignment be $ p_i $ and an optimal assignment be $ q_i $.

Since $ \sum p_i s_i = S $ (the greedy algorithm fills the bag whenever the items do not all fit; if they all fit, it takes everything and is trivially optimal), any increment in the $ p_i $ values would violate the space constraint. Since $ q \ne p $, there must exist a $ j $ such that $ q_j < p_j $.

If that were the only difference between $ p $ and $ q $, then $ q $ would clearly attain a strictly smaller value, an immediate contradiction. So there must also exist another $ k $ such that $ q_k > p_k $.

Now we argue that $ q $ is not optimal because we can always increase its value: if we add $ \epsilon > 0 $ to $ q_j $ and find the 'appropriate' adjustment $ \delta $ for $ q_k $, the value increases. To do so, we just keep the space used constant:

$ (q_j + \epsilon) s_j + (q_k + \delta) s_k = q_j s_j + q_k s_k $
$ \epsilon s_j + \delta s_k = 0 $
$ \delta = -\frac{\epsilon s_j}{s_k} $

Note that $ q_j < p_j \le 1 $ and $ q_k > p_k \ge 0 $, so we can pick an $ \epsilon $, $ \delta $ pair small enough in magnitude that the adjusted assignment still respects the $ [0, 1] $ constraints.

Next, we argue that the adjusted value always increases:

$ (q_j + \epsilon) v_j + (q_k + \delta) v_k $
$ = q_j v_j + q_k v_k  + \epsilon v_j + \delta v_k $
$ = q_j v_j + q_k v_k  + \epsilon v_j - \frac{\epsilon s_j}{s_k} v_k $
$ = q_j v_j + q_k v_k  + \epsilon (v_j - \frac{s_j}{s_k} v_k) $
$ = q_j v_j + q_k v_k  + \epsilon s_j (\frac{v_j}{s_j} - \frac{v_k}{s_k} ) $

The key idea is that, because of the greedy nature of the proposed algorithm, we can be sure that $ \frac{v_j}{s_j} - \frac{v_k}{s_k} > 0 $: the algorithm takes a positive amount of item $ j $ (since $ p_j > q_j \ge 0 $) while not filling item $ k $ (since $ p_k < q_k \le 1 $), and it only takes a lower-density item after exhausting all higher-density ones, so item $ j $ must have higher density than item $ k $ (strictly, by the given uniqueness of the $ \frac{v_j}{s_j} $ values). We also have $ \epsilon s_j > 0 $, so we have increased the value of the supposedly optimal solution.
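To make the exchange concrete, here is a quick numeric check on the given instance; the suboptimal $ q $ below is my own example, not from the assignment:

```python
v, s = [2.1, 2.0, 4.4], [1.0, 0.5, 2.0]
q = [0.5, 1.0, 0.0]         # feasible: space used = 1, value = 3.05 < 3.1
j, k = 2, 0                 # q_j = 0 < p_j = 0.25 and q_k = 0.5 > p_k = 0
eps = 0.1
delta = -eps * s[j] / s[k]  # chosen so the space used stays constant
q[j] += eps
q[k] += delta
print(sum(qi * si for qi, si in zip(q, s)))  # 1.0  (still feasible, up to rounding)
print(sum(qi * vi for qi, vi in zip(q, v)))  # 3.07 (the value increased)
```

Here the densities are $ \frac{v_j}{s_j} = 2.2 $ and $ \frac{v_k}{s_k} = 2.1 $, so the value gain is $ \epsilon s_j (2.2 - 2.1) = 0.02 > 0 $, as the proof predicts.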

This contradiction shows that our algorithm indeed gives an optimal solution.

(4)

We simply sort the items by $ \frac{v}{s} $, which takes $ O(n \log n) $ time for $ n $ items; the filling pass is a single $ O(n) $ scan, so the total running time is $ O(n \log n) $.
