
Please note that Graded Discussion Board (GDB) of CS502 will be started from July
16, 2013 and will be closed on July 17, 2013.

Topic for discussion:

As both Dynamic Programming and the Greedy Strategy are used for solving optimization
problems and rely on the optimal-substructure property, it is possible that one
attempts a greedy solution when Dynamic Programming is in fact the right approach,
or vice versa. With this background, the Knapsack problem (either 0/1 or
Fractional) is a classic optimization problem to investigate.

You are required to support or contradict the statement given below with proper
justification.

“Dynamic Programming yields optimal solution for 0/1 Knapsack problem while Greedy
approach does not. In addition, Greedy approach yields optimal solution for
Fractional Knapsack problem”.

Replies to This Discussion

Jasmine, I don't know about anything online because I didn't take it from the internet. Read the book — I did the GDB from it. I have already done two GDBs; this one is easy.

Please also give a reason for it.

Please, someone send a complete solution. I will change it and send it on. There is little time, and there are three GDBs today...

Check this link for more help with the GDB: https://en.wikipedia.org/wiki/Dynamic_programming


Greedy vs DP (Overview)

  • With DP: solve subproblems first, then use those solutions to make an optimal choice

  • With Greedy: make an optimal choice (without knowing solutions to subproblems) and then solve the remaining subproblem(s)

  • DP solutions are bottom-up; greedy solutions are top-down

  • Both apply to problems with optimal substructure: the solution to the larger problem contains solutions to (1 or more) subproblems

  • Example 2: Knapsack Capacity = 30

    Item   Price   Size   Ratio
    A        50      5      10
    B       140     20       7
    C        60     10       6

  • Possible greedy approach: take the largest ratio first. Solution: A and B. Size = 5+20 = 25. Price = 50+140 = 190

  • Optimal: B and C. Size = 20+10 = 30. Price = 140+60 = 200

  • Greedy fractional: A, B, and half of C. Size = 5+20+10*(5/10) = 30. Price = 50+140+60*(5/10) = 190+30 = 220
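The numbers in the example above can be verified with a short script (a sketch in Python; the function names are my own, not from the lecture):

```python
from itertools import combinations

# Items from the table: (name, price, size)
items = [("A", 50, 5), ("B", 140, 20), ("C", 60, 10)]
capacity = 30

def greedy_01(items, capacity):
    """Greedy 0/1 by price/size ratio: take whole items while they fit."""
    total = 0
    for name, price, size in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        if size <= capacity:
            capacity -= size
            total += price
    return total

def optimal_01(items, capacity):
    """Optimal 0/1 by brute force over all subsets."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(size for _, _, size in combo) <= capacity:
                best = max(best, sum(price for _, price, _ in combo))
    return best

def greedy_fractional(items, capacity):
    """Greedy fractional: take as much as fits of each item, best ratio first."""
    total = 0.0
    for name, price, size in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        take = min(size, capacity)
        total += price * take / size
        capacity -= take
    return total

print(greedy_01(items, capacity))          # 190 (A and B)
print(optimal_01(items, capacity))         # 200 (B and C)
print(greedy_fractional(items, capacity))  # 220.0 (A, B, and half of C)
```

This reproduces the three totals above: the 0/1 greedy answer (190) is beaten by the true 0/1 optimum (200), while greedy is optimal for the fractional variant (220).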

To solve the fractional problem, we first compute the value per pound v_i/w_i for each item. Obeying a greedy strategy, the thief begins by taking as much as possible of the item with the greatest value per pound. If the supply of that item is exhausted and he can still carry more, he takes as much as possible of the item with the next greatest value per pound, and so forth until he can't carry any more. Thus, by sorting the items by value per pound, the greedy algorithm runs in O(n lg n) time. The proof that the fractional knapsack problem has the greedy-choice property is left as Exercise 17.2-1.

To see that this greedy strategy does not work for the 0-1 knapsack problem, consider the problem instance illustrated in Figure 17.2(a). There are 3 items, and the knapsack can hold 50 pounds. Item 1 weighs 10 pounds and is worth 60 dollars. Item 2 weighs 20 pounds and is worth 100 dollars. Item 3 weighs 30 pounds and is worth 120 dollars. Thus, the value per pound of item 1 is 6 dollars per pound, which is greater than the value per pound of either item 2 (5 dollars per pound) or item 3 (4 dollars per pound). The greedy strategy, therefore, would take item 1 first. As can be seen from the case analysis in Figure 17.2(b), however, the optimal solution takes items 2 and 3, leaving item 1 behind. The two possible solutions that involve item 1 are both suboptimal.

For the comparable fractional problem, however, the greedy strategy, which takes item 1 first, does yield an optimal solution, as shown in Figure 17.2 (c). Taking item 1 doesn't work in the 0-1 problem because the thief is unable to fill his knapsack to capacity, and the empty space lowers the effective value per pound of his load. In the 0-1 problem, when we consider an item for inclusion in the knapsack, we must compare the solution to the subproblem in which the item is included with the solution to the subproblem in which the item is excluded before we can make the choice. The problem formulated in this way gives rise to many overlapping subproblems--a hallmark of dynamic programming, and indeed, dynamic programming can be used to solve the 0-1 problem. (See Exercise 17.2-2.)
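The 0-1 instance above can be solved with a bottom-up dynamic-programming sketch (weights 10, 20, 30 pounds; values 60, 100, 120 dollars; capacity 50). The table entry c[i][w] holds the best value achievable using the first i items with capacity w:

```python
def knapsack_01(values, weights, W):
    """Bottom-up DP: c[i][w] = best value using the first i items with capacity w."""
    n = len(values)
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            c[i][w] = c[i - 1][w]            # case 1: exclude item i
            if weights[i - 1] <= w:          # case 2: include item i if it fits
                c[i][w] = max(c[i][w], c[i - 1][w - weights[i - 1]] + values[i - 1])
    return c[n][W]

# Figure 17.2 instance: greedy (by value per pound) packs items 1 and 2 for $160,
# while DP finds the optimum, items 2 and 3, worth $220.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```

Filling the table takes O(nW) time, which is why DP is practical here even though the number of subsets is exponential.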



CHAPTER 17: GREEDY ALGORITHMS

Algorithms for optimization problems typically go through a sequence of steps, with a set of choices at each step. For many optimization problems, using dynamic programming to determine the best choices is overkill; simpler, more efficient algorithms will do. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. This chapter explores optimization problems that are solvable by greedy algorithms.



Greedy algorithms do not always yield optimal solutions, but for many problems they do. We shall first examine in Section 17.1 a simple but nontrivial problem, the activity-selection problem, for which a greedy algorithm efficiently computes a solution. Next, Section 17.2 reviews some of the basic elements of the greedy approach. Section 17.3 presents an important application of greedy techniques: the design of data-compression (Huffman) codes. In Section 17.4, we investigate some of the theory underlying combinatorial structures called "matroids" for which a greedy algorithm always produces an optimal solution. Finally, Section 17.5 illustrates the application of matroids using the problem of scheduling unit-time tasks with deadlines and penalties.

The greedy method is quite powerful and works well for a wide range of problems. Later chapters will present many algorithms that can be viewed as applications of the greedy method, including minimum-spanning-tree algorithms (Chapter 24), Dijkstra's algorithm for shortest paths from a single source (Chapter 25), and Chvátal's greedy set-covering heuristic (Chapter 37). Minimum spanning trees form a classic example of the greedy method. Although this chapter and Chapter 24 can be read independently of each other, you may find it useful to read them together.

Greedy algorithms do not always produce optimal solutions. However, GREEDY-ACTIVITY-SELECTOR always finds an optimal solution to an instance of the activity-selection problem.

Theorem 17.1

Algorithm GREEDY-ACTIVITY-SELECTOR produces solutions of maximum size for the activity-selection problem.



How can one tell if a greedy algorithm will solve a particular optimization problem? There is no way in general, but there are two ingredients that are exhibited by most problems that lend themselves to a greedy strategy: the greedy-choice property and optimal substructure.

Greedy-choice property

A globally optimal solution can be arrived at by making a locally optimal (greedy) choice: we make the choice that looks best in the current problem, without considering results from subproblems.


Optimal substructure

A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. This property is a key ingredient of assessing the applicability of dynamic programming as well as greedy algorithms.



Although the problems are similar, the fractional knapsack problem is solvable by a greedy strategy, whereas the 0-1 problem is not.


Greedy versus dynamic programming

Because the optimal-substructure property is exploited by both greedy and dynamic-programming strategies, one might be tempted to generate a dynamic-programming solution to a problem when a greedy solution suffices, or one might mistakenly think that a greedy solution works when in fact a dynamic-programming solution is required. To illustrate the subtleties between the two techniques, let us investigate two variants of a classical optimization problem.




“Dynamic Programming yields optimal solution for 0/1 Knapsack problem while Greedy approach does not. In addition, Greedy approach yields optimal solution for Fractional Knapsack problem”.

The greedy algorithm never wastes capacity in the fractional knapsack problem as it does in the 0-1 knapsack problem; as a result, it always yields an optimal solution.

Dynamic programming can be overkill; greedy algorithms tend to be easier to code.

 

Optimal substructure means that optimal solutions to subproblems can be combined to solve the larger problem. The difference between dynamic programming and greedy algorithms is that in dynamic programming the subproblems overlap. That means that by "memoizing" solutions to some subproblems, you can solve other subproblems more quickly.

 

They then give the example of the 0-1 knapsack vs. the fractional knapsack problem. Both exhibit the optimal-substructure property, but only the second also exhibits the greedy-choice property. Thus the second one can be solved to optimality with a greedy algorithm (or a dynamic programming algorithm, although greedy would be faster), but the first one requires dynamic programming or some other non-greedy approach.
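The greedy strategy for the fractional problem can be sketched as follows (a minimal illustration; the function name is my own). On the Figure 17.2 instance it fills the sack to capacity for a total of $240:

```python
def fractional_knapsack(values, weights, W):
    """Greedy: take items (or fractions of them) in decreasing order of value per pound."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    total = 0.0
    for i in order:
        if W <= 0:
            break
        take = min(weights[i], W)  # the whole item if it fits, else the remaining fraction
        total += values[i] * take / weights[i]
        W -= take
    return total

# Figure 17.2 instance: all of items 1 and 2, plus 20/30 of item 3 -> $240.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0
```

Because fractions are allowed, the sack is always filled to capacity, which is exactly the greedy-choice property the 0-1 variant lacks.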

 

  • No one has ever found a 0/1 knapsack algorithm whose worst case time is better than exponential AND no one has proven that such an algorithm is not possible.


Read more: Graded Discussion Board (GDB) of CS502 from July 16, 2013 to July 1... http://vustudents.ning.com/forum/topics/graded-discussion-board-gdb...

Guys, the material uploaded above contains the answer to the GDB. I made mine from it and submitted it too; you should also try. You know, you only get it by doing it. :(

Don't copy it; use it as an idea.


Salam to all. Read this; it may help in preparing the GDB.

Regards

http://cse.unl.edu/~goddard/Courses/CSCE310J/Lectures/Lecture8-Dyna...

This does not contain the GDB answer. Look at lectures 22 and 27 in the book; Sir has given everything there. I did not see anything about the GDB in this.

This is a typically hard problem. Greedy algorithms are usually more efficient than Dynamic Programming, so the greedy algorithm is preferred where it applies. Because in this problem a fraction of any item may be chosen, the following greedy algorithm provides the optimal benefit: it uses maximum benefit per unit as its selection criterion.

The GDB was very easy; I only now understood it. There is no need to look at the book — you can just form your own idea. Understand the question, then think.

Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm will examine all possible ways to solve the problem and will pick the best solution. Therefore, we can roughly think of dynamic programming as an intelligent, brute-force method that enables us to go through all possible solutions to pick the best one. If the scope of the problem is such that going through all possible solutions is possible and fast enough, dynamic programming guarantees finding the optimal solution. The alternatives are many, such as using a greedy algorithm, which picks the locally best choice at each step. While a greedy algorithm does not guarantee the optimal solution, it is faster. Fortunately, some greedy algorithms (such as minimum-spanning-tree algorithms) are proven to lead to the optimal solution. The purpose of this paper is to analyze several algorithm design paradigms applied to a single problem: the 0/1 Knapsack Problem. The Knapsack Problem is a combinatorial optimization problem where one has to maximize the benefit of objects in a knapsack without exceeding its capacity. It is an NP-complete problem, and both exact and heuristic techniques are used to solve it. The objective is to analyze how the various techniques, such as Dynamic Programming and the Greedy Algorithm, affect performance on the Knapsack Problem.
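The "intelligent brute force" view above can be made concrete: on small instances, a dynamic-programming solver and an explicit enumeration of all 2^n subsets agree on the optimum (a sketch; the helper names are my own):

```python
import random
from itertools import combinations

def knapsack_dp(values, weights, W):
    """Bottom-up 0/1 knapsack DP over a 1-D table, O(n*W) time."""
    best = [0] * (W + 1)
    for v, wt in zip(values, weights):
        for w in range(W, wt - 1, -1):  # iterate downward so each item is used at most once
            best[w] = max(best[w], best[w - wt] + v)
    return best[W]

def knapsack_brute(values, weights, W):
    """Enumerate all 2^n subsets -- exponential, but exact."""
    n = len(values)
    best = 0
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            if sum(weights[i] for i in idx) <= W:
                best = max(best, sum(values[i] for i in idx))
    return best

random.seed(1)
for _ in range(20):
    n = random.randint(1, 8)
    values = [random.randint(1, 50) for _ in range(n)]
    weights = [random.randint(1, 15) for _ in range(n)]
    W = random.randint(5, 40)
    assert knapsack_dp(values, weights, W) == knapsack_brute(values, weights, W)
print("DP matches brute force on all random instances")
```

The DP runs in pseudo-polynomial O(nW) time, while the enumeration grows as 2^n; both return the same optimum, which is the sense in which DP is a brute-force search made intelligent.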

