1. The quicksort algorithm has a worst-case running time of θ(n^2) on an input array of n numbers. Despite this slow worst-case running time, quicksort is often the best practical choice for sorting because it is remarkably efficient on the average: its expected running time is θ(n lg n), and the constant factors hidden in the θ(n lg n) notation are quite small. It also has the advantage of sorting in place, and it works well even in virtual-memory environments.
2. Quicksort, like merge sort, applies the divide-and-conquer paradigm. The following is the three-step divide-and-conquer process for sorting a typical subarray A[p...r]:
1) Divide: Partition (rearrange) the array A[p...r] into two (possibly empty) subarrays A[p...q-1] and A[q+1...r] such that each element of A[p...q-1] is
less than or equal to A[q], which is, in turn, less than or equal to each element of A[q+1...r]. Compute the index q as part of this partitioning method.
2) Conquer: Sort the two subarrays A[p...q-1] and A[q+1...r] by recursive calls to quicksort.
3) Combine: Because the subarrays are already sorted, no work is needed to combine them: the entire array A[p...r] is now sorted.
3. Implementation of quicksort:
// sort arr[start...end-1]
void quickSort(int[] arr, int start, int end) {
    if (end - start < 2) {
        return; // base case: a subarray of length 0 or 1 is already sorted
    }
    int q = partition(arr, start, end);
    quickSort(arr, start, q);      // sort arr[start...q-1]
    quickSort(arr, q + 1, end);    // sort arr[q+1...end-1]
}
To sort an entire array A, the initial call is quickSort(A, 0, A.length).
4. Implementation of the partition method:
// use the last element to partition the array arr[start...end-1]
int partition(int[] arr, int start, int end) {
    int pivot = arr[--end];   // pivot is the last element; end now indexes it
    int i = start - 1;        // arr[start...i] holds the elements smaller than pivot
    for (int j = start; j < end; j++) {
        if (arr[j] < pivot) {
            // swap i + 1 with j
            int temp = arr[++i];
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }
    // swap i + 1 with end, placing the pivot in its final position
    int temp = arr[++i];
    arr[i] = arr[end];
    arr[end] = temp;
    return i;
}
partition always selects the last element as the pivot around which to partition the subarray arr[start...end-1]. During the loop, j traverses the subarray while i marks the boundary, among the traversed elements, between those smaller than the pivot and those greater than or equal to it. The running time is θ(n), where n = end - start.
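Putting the two methods together gives a self-contained sketch (the class wrapper, demo arrays, and main method are ours for illustration). It first shows a single partition step on {2, 8, 7, 1, 3, 5, 6, 4}, where the pivot 4 ends up at index 3, then runs a full sort:

```java
import java.util.Arrays;

public class QuickSortDemo {
    // sort arr[start...end-1]
    static void quickSort(int[] arr, int start, int end) {
        if (end - start < 2) return; // length 0 or 1: already sorted
        int q = partition(arr, start, end);
        quickSort(arr, start, q);
        quickSort(arr, q + 1, end);
    }

    // use the last element to partition arr[start...end-1]
    static int partition(int[] arr, int start, int end) {
        int pivot = arr[--end];
        int i = start - 1;
        for (int j = start; j < end; j++) {
            if (arr[j] < pivot) {
                int t = arr[++i]; arr[i] = arr[j]; arr[j] = t;
            }
        }
        int t = arr[++i]; arr[i] = arr[end]; arr[end] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {2, 8, 7, 1, 3, 5, 6, 4};
        int q = partition(a, 0, a.length);
        // pivot 4 lands at index 3; everything to its left is smaller
        System.out.println(q + " " + Arrays.toString(a)); // prints 3 [2, 1, 3, 4, 7, 5, 6, 8]

        int[] b = {5, 3, 9, 6, 4, 7, 8, 1, 2};
        quickSort(b, 0, b.length);
        System.out.println(Arrays.toString(b)); // prints [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```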
5. The running time of quicksort depends on whether the partitioning is balanced or unbalanced, which in turn depends on which elements are used for partitioning.
6. The worst-case behavior for quicksort occurs when the partitioning routine produces one subproblem with n-1 elements and one with 0 elements. Let us assume that this unbalanced partitioning arises in each recursive call. Then the running time T(n) = T(n-1) + T(0) + θ(n) = T(n-1) + θ(n) = θ(n^2). The worst case occurs when the input array is already completely sorted.
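The worst case is easy to observe by instrumenting the comparison inside partition (the counting harness is ours): on an already-sorted array, each call places all remaining elements in the left subarray, so the total number of arr[j] < pivot tests reaches exactly n(n-1)/2.

```java
public class WorstCaseDemo {
    static long comparisons;

    static void quickSort(int[] arr, int start, int end) {
        if (end - start < 2) return;
        int q = partition(arr, start, end);
        quickSort(arr, start, q);
        quickSort(arr, q + 1, end);
    }

    static int partition(int[] arr, int start, int end) {
        int pivot = arr[--end];
        int i = start - 1;
        for (int j = start; j < end; j++) {
            comparisons++; // count every arr[j] < pivot test
            if (arr[j] < pivot) {
                int t = arr[++i]; arr[i] = arr[j]; arr[j] = t;
            }
        }
        int t = arr[++i]; arr[i] = arr[end]; arr[end] = t;
        return i;
    }

    public static void main(String[] args) {
        int n = 100;
        int[] a = new int[n];
        for (int k = 0; k < n; k++) a[k] = k; // already sorted: the worst case
        comparisons = 0;
        quickSort(a, 0, a.length);
        // (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons
        System.out.println(comparisons == (long) n * (n - 1) / 2); // prints true
    }
}
```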
7. By equally balancing the two sides of the partition at every level of the recursion, we get an asymptotically faster algorithm. In the most even possible split, partition produces two subproblems, each of size no more than n/2, since one is of size ⌊n/2⌋ and one of size ⌈n/2⌉-1. The recurrence for the running time is then T(n) = 2T(n/2) + θ(n) = θ(n lg n).
8. Any split of constant proportionality yields a recursion tree of depth θ(lg n), where the cost at each level is O(n). The running time is therefore O(n lg n) whenever the split has constant proportionality.
9. Suppose, for the sake of intuition, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits:
The combination of the bad split followed by the good split produces three subarrays of sizes 0, (n-1)/2-1 and (n-1)/2 at a combined partitioning cost
of θ(n)+θ(n-1)=θ(n). Certainly, this situation is no worse than a single level of partitioning that produces two subarrays of size (n-1)/2, at a cost of θ(n). Intuitively, the θ(n-1) cost of the bad split can be absorbed into the θ(n) cost of the good split, and the resulting split is good. Thus, the running time of quicksort, when levels alternate between good and bad splits, is like the running time for good splits alone: still O(n lg n), but with a slightly larger constant hidden by the O-notation.
10. For the randomized version of quicksort, instead of always using the last element as the pivot, we select a randomly chosen element from the subarray. We do so by first exchanging the last element with an element chosen at random from the subarray. By randomly sampling the subarray, we ensure that the pivot is equally likely to be any of the elements in the subarray. Because we randomly choose the pivot element, we expect the split of the input array to be reasonably well balanced on average.
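A minimal sketch of that idea (the class wrapper, method name randomizedPartition, and the Random instance are ours; the partition logic is the same last-element scheme as before, reproduced so the class compiles on its own):

```java
import java.util.Arrays;
import java.util.Random;

public class RandomizedQuickSort {
    static final Random rnd = new Random();

    // exchange a randomly chosen element with the last one, then partition as before
    static int randomizedPartition(int[] arr, int start, int end) {
        int k = start + rnd.nextInt(end - start); // uniform index in [start, end-1]
        int temp = arr[k];
        arr[k] = arr[end - 1];
        arr[end - 1] = temp;
        return partition(arr, start, end);
    }

    static void quickSort(int[] arr, int start, int end) {
        if (end - start < 2) return;
        int q = randomizedPartition(arr, start, end);
        quickSort(arr, start, q);
        quickSort(arr, q + 1, end);
    }

    // same last-element partition as in the deterministic version
    static int partition(int[] arr, int start, int end) {
        int pivot = arr[--end];
        int i = start - 1;
        for (int j = start; j < end; j++) {
            if (arr[j] < pivot) {
                int t = arr[++i]; arr[i] = arr[j]; arr[j] = t;
            }
        }
        int t = arr[++i]; arr[i] = arr[end]; arr[end] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {9, 5, 1, 7, 3, 8, 2};
        quickSort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 5, 7, 8, 9]
    }
}
```

The sorted output is the same as before; only the pivot choice, and therefore the expected shape of the recursion tree, changes.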
11. The running time of quicksort is dominated by the time spent in the partition
method. Each time the partition method is called, it selects a pivot element, and this element is never included in any future recursive calls to quickSort and partition. Thus, there can be at most n calls to partition over the entire execution of the quicksort algorithm. One call to partition takes O(1) time plus an amount of time proportional to the number of comparisons it performs. Rather than analyze how many comparisons are made in each call to partition, we derive an overall bound on the total number of comparisons. For ease of analysis, we
rename the elements of the array A as z1, z2, ..., zn, with zi being the ith smallest element. We also define the set Zij = {zi, zi+1, ..., zj} to be the set of elements between zi and zj, inclusive. Each pair of elements is compared at most once, because elements are compared only to the pivot element and, after a particular call of partition finishes, the pivot used in that call is never again compared to any other element. We define the indicator random variable:
Xij = I {zi is compared to zj }
So the total number of comparisons performed by the algorithm is:
X = ∑(i=1 to n-1) { ∑(j=i+1 to n) { Xij } }
By linearity of expectation:
E(X) = ∑(i=1 to n-1) { ∑(j=i+1 to n) { E(Xij) } } = ∑(i=1 to n-1) { ∑(j=i+1 to n) { Pr{zi is compared to zj } } }
Once a pivot x is chosen with zi < x < zj, we know that zi and zj fall into different subarrays and cannot be compared at any subsequent time. If, on the other hand, zi is chosen as a pivot before any other item in Zij, then zi will be compared to each item in Zij, except for itself. Similarly, if zj is chosen as a pivot before any other item in Zij, then zj will be compared to each item in Zij, except for itself. Since each of the j - i + 1 elements of Zij is equally likely to be the first one chosen as a pivot, and the two events are mutually exclusive, we have:
Pr{ zi is compared to zj }
= Pr{ zi or zj is the first pivot chosen from Zij }
= Pr{ zi is the first pivot chosen from Zij } + Pr{ zj is the first pivot chosen from Zij }
= 1/(j - i + 1) + 1/(j - i + 1) = 2/(j - i + 1)
So we have:
E(X) = ∑(i=1 to n-1) { ∑(j=i+1 to n) { 2/(j - i + 1) } }
= ∑(i=1 to n-1) { ∑(z=1 to n-i) { 2/(z + 1) } } (substituting z = j - i)
< ∑(i=1 to n-1) { ∑(z=1 to n) { 2/z } }
= ∑(i=1 to n-1) O(lg n) = O(n lg n)
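The bound can be sanity-checked empirically (the counting harness and the chosen n, trial count, and threshold are ours): averaging the number of arr[j] < pivot comparisons over many random inputs should land well below the n(n-1)/2 worst case, and under the generous cutoff 2n ln n.

```java
import java.util.Random;

public class ComparisonCount {
    static long comparisons;
    static final Random rnd = new Random();

    static void quickSort(int[] arr, int start, int end) {
        if (end - start < 2) return;
        int q = partition(arr, start, end);
        quickSort(arr, start, q);
        quickSort(arr, q + 1, end);
    }

    // last-element partition, instrumented to count comparisons
    static int partition(int[] arr, int start, int end) {
        int pivot = arr[--end];
        int i = start - 1;
        for (int j = start; j < end; j++) {
            comparisons++;
            if (arr[j] < pivot) {
                int t = arr[++i]; arr[i] = arr[j]; arr[j] = t;
            }
        }
        int t = arr[++i]; arr[i] = arr[end]; arr[end] = t;
        return i;
    }

    public static void main(String[] args) {
        int n = 200, trials = 200;
        comparisons = 0;
        for (int t = 0; t < trials; t++) {
            int[] a = new int[n];
            // random input makes the last element a uniformly random pivot
            for (int k = 0; k < n; k++) a[k] = rnd.nextInt();
            quickSort(a, 0, a.length);
        }
        double avg = (double) comparisons / trials;
        // average is far below n(n-1)/2 = 19900 and below 2n ln n ≈ 2119
        System.out.println(avg < 2 * n * Math.log(n)); // prints true
    }
}
```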