Radix sort is done ...
Radix sort is the best sorting algorithm when the data comes from millions of users.
Also you have to understand that O(f(n)) really means on the order of K*f(n), where K is some arbitrary constant. For radix sort this K happens to be quite big (at least on the order of the number of bits in the integers sorted); on the other hand, quicksort has one of the lowest K among all sorting algorithms and an average complexity of n*log(n). Thus in real-life scenarios quicksort will very often be faster than radix sort.
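That hidden constant K can be seen directly with a small microbenchmark. The LSD radix sort below is only a sketch (it assumes non-negative integer keys, and the exact timings will vary by machine), but it makes the point that each O(n) pass is repeated once per byte of key width:

```python
import random
import timeit

def radix_sort(a, shift_step=8):
    """LSD radix sort on non-negative integers, one byte per pass.
    Each pass is O(n), but the number of passes (the hidden constant K)
    grows with the key width in bits."""
    if not a:
        return []
    max_val = max(a)
    mask = (1 << shift_step) - 1
    shift = 0
    while (max_val >> shift) > 0:
        buckets = [[] for _ in range(1 << shift_step)]
        for x in a:
            buckets[(x >> shift) & mask].append(x)
        a = [x for b in buckets for x in b]
        shift += shift_step
    return a

data = [random.randrange(1 << 32) for _ in range(50_000)]
print("radix :", timeit.timeit(lambda: radix_sort(data), number=1))
print("sorted:", timeit.timeit(lambda: sorted(data), number=1))
```

On 32-bit keys this radix sort does four full passes over the data, so even though both costs are "linear-ish", the constant in front is large compared to a tuned comparison sort.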
The Heap Sort algorithm is the right answer.
I will choose the quicksort algorithm. Quicksort is fast compared with other sorting algorithms due to its cache-friendliness. When quicksort processes a segment of an array, it accesses elements at the beginning and end of the segment and moves towards the center of the segment.
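A minimal sketch of that access pattern, assuming a Hoare-style partition (the two index scans run sequentially from the ends toward the middle, which is what makes it cache-friendly):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort with Hoare-style partitioning.
    i scans forward from the left end and j scans backward from the
    right end, so both scans touch memory sequentially."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    quicksort(a, lo, j)   # left segment
    quicksort(a, i, hi)   # right segment
```

Compare this with the radix-sort bucket scatter, which jumps between up to 256 bucket lists on every element and so has much worse locality.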
Radix sort is slower for real-world use cases because of the complexity of the algorithm: if items are unique, k >= log(n), and even with duplicate items, the set of problems where k < log(n) is small. The second reason is the additional memory requirement.
To summarize concisely, there are two main reasons:
1. Its average time complexity is O(n log n).
2. It is an in-place sorting algorithm.
The test is being attempted by millions of users from all over the globe each time, so speed is of great importance, and we know quicksort is very fast. After conducting the test, the company has to publish the results in sorted (descending) order to rank its users, which falls under the category of comparison-based sorting.
Radix sort is slower for (most) real world use cases.
One reason is the complexity of the algorithm:
If items are unique, k >= log(n). Even with duplicate items, the set of problems where k < log(n) is small.
Another is the implementation:
The additional memory requirement (which in itself is a disadvantage) affects cache performance negatively.
I think it is safe to say that many libraries, like the standard library, use Quicksort because it performs better in the majority of cases. I don't think that "difficult implementation" or "less intuitive" are major factors.
1. Quicksort/Introsort is more flexible:
Quicksort and Introsort work well with all kinds of data. All you need for sorting is the ability to compare items. This is trivial with numbers, but you can sort other data as well.
Radix sort on the other hand just sorts things by their binary representation. It never compares items against each other.
2. Radix sort needs more memory:
All radix sort implementations that I've seen use a secondary buffer to store partial sorting results. This increases the memory requirements of the sorting algorithm. That may not be a problem if you only sort a couple of kilobytes, but if you go into the gigabyte range it makes a huge difference.
If I remember correctly, an in-place radix sort algorithm exists on paper, though.
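As a sketch of that secondary-buffer pattern (assuming fixed-width, non-negative 32-bit integer keys; the buffer name and byte width here are illustrative choices, not from any particular library):

```python
def radix_sort_lsd(a, key_bytes=4):
    """LSD radix sort over fixed-width non-negative integer keys.
    Each counting pass scatters into `out`, a secondary buffer the
    same size as the input -- the O(n) extra memory discussed above."""
    n = len(a)
    out = [0] * n                      # secondary buffer: O(n) extra memory
    for byte in range(key_bytes):
        shift = 8 * byte
        count = [0] * 257
        for x in a:                    # histogram of this byte
            count[((x >> shift) & 0xFF) + 1] += 1
        for i in range(256):           # prefix sums -> start offsets
            count[i + 1] += count[i]
        for x in a:                    # stable scatter into the buffer
            d = (x >> shift) & 0xFF
            out[count[d]] = x
            count[d] += 1
        a, out = out, a                # swap roles for the next pass
    return a
```

With an even number of passes the final result lands back in the original list, but the peak memory use is still twice the input size, which is exactly what hurts once the data approaches the gigabyte range.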
The topic of the efficiency of radix sort compared to other sorting algorithms is somewhat tricky and subject to quite a lot of misunderstandings. Whether radix sort is equally efficient, less efficient or more efficient than the best comparison-based algorithms depends on the details of the assumptions made.

Radix sort efficiency is O(d·n) for n keys which have d or fewer digits. Sometimes d is presented as a constant, which would make radix sort better (for sufficiently large n) than the best comparison-based sorting algorithms, which all need O(n·log(n)) comparisons.

However, in general d cannot be considered a constant. In particular, under the common (but sometimes implicit) assumption that all keys are distinct, d must be at least of the order of log(n), which gives at best (with densely packed keys) a time complexity of O(n·log(n)). That would seem to make radix sort at most equally efficient as the best comparison-based sorts (and worse if keys are much longer than log(n)).
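The "d must grow like log(n)" point can be checked with a trivial back-of-the-envelope script (an illustration only, assuming n distinct decimal keys drawn from 0..n-1):

```python
# With n distinct keys there must be at least n different digit strings,
# so in base b the key width d must satisfy b**d >= n, i.e. d >= log_b(n).
for n in (10**3, 10**6, 10**9):
    d = len(str(n - 1))  # decimal digits of the widest key in 0..n-1
    print(f"n={n}: d={d} digits, so the radix-sort cost d*n grows like n*log(n)")
```

Multiplying n by 1000 adds three more digits to the widest key, so the per-key work of radix sort grows with n just as log(n) does for comparison sorts.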
Quicksort is one of the fastest sorting algorithms.
This is mainly due to two reasons:
1. Its average time complexity is O(n log n).
2. It is an in-place sorting algorithm.
Quicksort-Heapsort in-place hybrids are really interesting, too, since most of them need only n*log n comparisons in the worst case (they are optimal with respect to the first term of the asymptotics, so they avoid the worst-case scenarios of quicksort), use O(log n) extra space, and preserve at least "half" of the good behaviour of quicksort on already-ordered data sets. One such hybrid works as follows:
Select a pivot p as the median of a random sample of sqrt(n) elements (this can be done in at most 24 sqrt(n) comparisons through the algorithm of Tarjan & co., or 5 sqrt(n) comparisons through the much more convoluted spider-factory algorithm of Schönhage);
Partition your array in two parts as in the first step of Quicksort;
Heapify the smallest part and use O(log n) extra bits to encode a heap in which every left child has a value greater than its sibling;
Recursively extract the root of the heap, sift the gap left by the root down until it reaches a leaf of the heap, then fill the gap with an appropriate element taken from the other part of the array;
Recur over the remaining non-ordered part of the array (if p is chosen as the exact median, there is no recursion at all).
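The construction above is intricate; as a much simpler sketch of the same quicksort/heapsort-hybrid idea, here is a toy introsort. Note this is the standard depth-capped fallback used by many library sorts, not the sqrt(n)-median algorithm described in the steps above:

```python
import heapq
import math
import random

def introsort(a):
    """Toy introsort: quicksort with a random pivot, falling back to
    heapsort once the recursion depth exceeds ~2*log2(n). The fallback
    caps the worst case at O(n log n), which is the whole point of
    quicksort/heapsort hybrids."""
    def sort(lo, hi, depth):            # sorts the slice a[lo:hi]
        if hi - lo <= 1:
            return
        if depth == 0:
            # heapsort fallback for this segment
            seg = a[lo:hi]
            heapq.heapify(seg)
            a[lo:hi] = [heapq.heappop(seg) for _ in range(len(seg))]
            return
        pivot = a[random.randrange(lo, hi)]
        i, j = lo, hi - 1
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        sort(lo, j + 1, depth - 1)
        sort(i, hi, depth - 1)
    if a:
        sort(0, len(a), 2 * max(1, int(math.log2(len(a)))))
```

The depth cap means no adversarial input can push the sort into quicksort's quadratic behaviour; the price is an occasional heapsort pass, which is slower in practice but still O(n log n).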
For situations where you DO want max speed in most cases, QuickSort may be preferred over HeapSort, but neither may be the right answer. For speed-critical situations, it is worth examining closely the details of the situation. For example, in some of my speed-critical code, it is very common that the data is already sorted or near-sorted (it is indexing multiple related fields that often either move up and down together OR move up and down opposite each other, so once you sort by one, the others are either sorted or reverse-sorted or close... either of which can kill QuickSort). For that case, I implemented neither... instead, I implemented Dijkstra's SmoothSort... a HeapSort variant that is O(N) when already sorted or near-sorted... it is not so elegant, not too easy to understand, but fast...
Where is the solution in this thread?