Views: 42745 | Replies: 26

[Original] Bit Sort -- a special case derived from counting sort and radix sort

Posted on 2009-10-30 11:21:18

Original post: http://blog.csdn.net/Silitex/archive/2009/10/30/4746697.aspx

Although I understood many sorting algorithms on paper, I still made plenty of mistakes whenever I actually implemented them, so I recently made up my mind to study sorting algorithms properly. While working through counting sort and radix sort, I came up with a special case of the two -- bit sort.

Bit sort
Time complexity: O(n*logm), the same in both the best and the worst case (the algorithm always assumes m <= n)
Space complexity: O(logm); if m is less than 4294967296, the maximum temporary space is a small multiple of 32 (at most 32 levels of recursion)
Running time: over a large number of test data sets, bit sort ran second only to counting sort and quick sort
Advantage: it achieves O(n*logm) time using only O(logm) space, which beats the worst case of quick sort and heap sort (both of which, in the worst case, need O(n) space)
Disadvantage: requires m <= n

Source code:
void BitSortRecursive(ELEM_TYPE *pData, int nLow, int nHigh, UNIQUE_HASH_CALLBACK pUniqueHashFunc, int nBit)
{
    int i;
    int j;
    int nAndMask;
    if (nLow >= nHigh)
    {
        return;
    }
    else if (nLow+1 == nHigh) // only two elements left, so a direct comparison is enough
    {
        if (pUniqueHashFunc(pData[nLow].eKey) > pUniqueHashFunc(pData[nHigh].eKey))
            DsSwapElem(&pData[nLow], &pData[nHigh]);
        return;
    }
    for (i = nLow, j = nHigh, nAndMask = 1<<(nBit-1); ; )
    {
        for (; i <= nHigh; i++)
        {
            if ((pUniqueHashFunc(pData[i].eKey)&nAndMask) != 0)
                break;
        }
        for (; j > i; j--)
        {
            if ((pUniqueHashFunc(pData[j].eKey)&nAndMask) == 0)
                break;
        }
        if (j > i) // another such pair has been found
        {
            DsSwapElem(&pData[i++], &pData[j--]);
            if (j < i)
                break;
        }
        else
        {
            break;
        }
    }
    if (nBit > 1)
    {
        nBit--;
        BitSortRecursive(pData, nLow, i-1, pUniqueHashFunc, nBit);
        BitSortRecursive(pData, i, nHigh, pUniqueHashFunc, nBit);
    }
}

// Bit sort -- a sort derived from counting sort and radix sort
// When m ranges over 0~65535, counting sort needs only a modest amount of memory and works very well
// When m ranges over 0~4294967295, however, the required memory becomes far too large; borrowing the idea of radix sort,
// we can counting-sort on the high word first and then counting-sort on the low word -- that is exactly radix sort
// If the 64K of memory for counting sort still seems too large, shrink it to 2^8 = 256 (an 8-bit counting sort), which is tiny
// If you want even less memory: 2^4, 2^2, 2^1, ..., and in the limit you get the special case of 2^0 -- bit sort
// The sorting procedure:
// If m ranges over 0~65535, scan from the left for the first number whose highest bit is 1, then scan from the right for a
// number whose highest bit is 0, and swap them; keep doing this until no such pair remains, and the highest bit is in order
// Then repeat for the next-highest bit, ..., down to the last bit
// From the description above, the space complexity is O(logm) and the time complexity is O(nlogm) (identical in the best and worst case)
// Over a large number of test data sets, the running time of bit sort is second only to counting sort and quick sort
// E-mail: silitex@yeah.net
void BitSort(ELEM_TYPE *pData, int nLen, UNIQUE_HASH_CALLBACK pUniqueHashFunc, int nUniqueHashRange)
{
    int m = nUniqueHashRange-1;
    int nBit;
    // Get log2(m)
    for (nBit = 0; m > 0; nBit++)
        m >>= 1;
    if (nBit == 0)
        return;
    BitSortRecursive(pData, 0, nLen-1, pUniqueHashFunc, nBit);
}
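To make the bit-partition idea easy to try outside the ELEM_TYPE / hash-callback framework above, here is a minimal standalone sketch on plain unsigned integers. The function name BitSortUint and the little driver are my own illustration, not part of the original post:

#include <stdio.h>

// Partition a[lo..hi] by the given bit: zeros to the front, ones to the back,
// then recurse on each half with the next lower bit (same idea as BitSortRecursive above).
static void BitSortUint(unsigned int *a, int lo, int hi, int bit)
{
    if (lo >= hi || bit < 0)
        return;
    unsigned int mask = 1u << bit;
    int i = lo, j = hi;
    while (i < j) {
        while (i < j && (a[i] & mask) == 0) i++;   // find a 1-bit element from the left
        while (i < j && (a[j] & mask) != 0) j--;   // find a 0-bit element from the right
        if (i < j) { unsigned int t = a[i]; a[i] = a[j]; a[j] = t; }
    }
    int mid = ((a[i] & mask) == 0) ? i : i - 1;    // last index of the 0-bit half
    BitSortUint(a, lo, mid, bit - 1);
    BitSortUint(a, mid + 1, hi, bit - 1);
}

int main(void)
{
    unsigned int a[] = {5, 3, 7, 0, 6, 2, 4, 1};
    int n = sizeof(a)/sizeof(a[0]);
    BitSortUint(a, 0, n - 1, 2);                   // keys are in 0..7, so 3 bits suffice
    for (int k = 0; k < n; k++) printf("%u ", a[k]);
    printf("\n");
    return 0;
}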
================================================================ The complete test code is given below ================================================================
// Compiler: VC6.0
// MFC support must be enabled
// E-mail: silitex@yeah.net
#include <afxtempl.h>
#include <stdio.h>

/* The macro definition */
#define _DEBUG_OUT

typedef int ELEM_KEY_TYPE;   // type of the key field
typedef int ELEM_REST_TYPE;  // type of the remaining fields
typedef unsigned char byte;
typedef unsigned short int word;
typedef unsigned long int dword;

#define DS_BUBBLE_SORT_DATA_MAX 100000
#define DS_BUBBLE_SORT_DATA_RANGE (DS_BUBBLE_SORT_DATA_MAX)

// Structure of a single record
typedef struct {
    ELEM_KEY_TYPE eKey;       // primary key field of the record
    ELEM_REST_TYPE eRest[7];  // remaining fields of the record; the key-to-rest ratio is set to 1:7 for now
} ELEM_TYPE;

typedef int (*UNIQUE_HASH_CALLBACK)(ELEM_KEY_TYPE eKey);
typedef int (*SHELL_GAP_CALLBACK)(int nGap);

// Swap the data of two records
void DsSwapElem(ELEM_TYPE *pElem1, ELEM_TYPE *pElem2)
{
    ELEM_TYPE eTemp;
    eTemp = *pElem2;
    *pElem2 = *pElem1;
    *pElem1 = eTemp;
}

// Copy record data
void DsCopyElem(ELEM_TYPE *pElemDst, ELEM_TYPE *pElemSrc)
{
    *pElemDst = *pElemSrc;
}

// A true-random function for use on Windows
// By Silitex, at 2009/09/17
DWORD WinRand(void)
{
    LARGE_INTEGER Counter;
    QueryPerformanceCounter(&Counter);
    return Counter.LowPart;
}

// One-to-one hash function used for testing
int UniqueHashTest(ELEM_KEY_TYPE eKey)
{
    return eKey;
}

// Gap-selection test function for Shell sort
// The best known nGap sequence is: 1, 4, 10, 23, 57, 132, 301, 701, 1750, ...
// Values after 1750 are computed as: next_step = round(step * 2.3)
int ShellGapTest(int nGap)
{
    const int GAP_SEQ[] = {0, 1, 4, 10, 23, 57, 132, 301, 701, 1750, 4025, 9257, 21291, 48969, 112628, 259044};
    int i;
    int j;
    for (i = 0, j = 0; GAP_SEQ[i] < nGap; i++)
        j = i;
    return GAP_SEQ[j];
}

void DebugOut(ELEM_TYPE *pData, int nLen)
{
    int i;
    printf("\nLength:%d\n Data:", nLen);
    for (i = 0; i < nLen; i++)
        printf("%d ", pData[i].eKey);
    printf("\n");
}

// Counting sort
// Counting sort is suited to cases where the keys take only m distinct values with m far smaller than n
// (for example, 10000 student scores ranging from 0 to 100)
// In the general case the records contain other non-key fields and the m key values are not consecutive
// (e.g. irregular values such as 0, 10, 100, 101, ...); counting sort then costs O(nlogm),
// while if the array to sort has only a single field the cost is simply O(n)
// To study the algorithm in its general form, with m distinct values, we map keys through a unique-value hash
// (e.g. 0->0, 10->1, 100->2, 101->3); for irregular values the fastest such hash is a binary search, i.e. O(logm)
// So a standard counting sort takes, besides the array to sort and its length, a pointer to the hash function
// for that array and the size of the hashed sample space
// (the hashed values are required to lie in the range 0~m-1)
// Counting sort needs O(m) auxiliary space
// One more observation: even when m = n, counting sort wastes space but needs fewer swaps than quick sort
// E-mail: silitex@yeah.net
void CountSort(ELEM_TYPE *pData, int nLen, UNIQUE_HASH_CALLBACK pUniqueHashFunc, int nUniqueHashRange)
{
    int *ElemCount = new int[nUniqueHashRange];     // occurrence count of each value
    int *ElemStartPosi = new int[nUniqueHashRange]; // start position of each value
    int *ElemIncPosi = new int[nUniqueHashRange];   // running offset within each value's block
    int ElemPosi;
    int i;
    int nPosi;
    int nNewPosi;
    int nTemp;
    // Initialize the counting arrays
    for (i = 0; i < nUniqueHashRange; i++)
    {
        ElemCount[i] = 0;
        ElemStartPosi[i] = 0;
        ElemIncPosi[i] = 0;
    }
    // Count how often each value occurs
    for (i = 0; i < nLen; i++)
        ElemCount[pUniqueHashFunc(pData[i].eKey)]++;
    // Accumulate the counts to obtain the actual target positions
    ElemPosi = 0;
    for (i = 1; i < nUniqueHashRange; i++)
    {
        ElemPosi += ElemCount[i-1];
        ElemStartPosi[i] = ElemPosi;
    }
    // Swap the records into place according to the counted frequencies
    for (nPosi = 0; nPosi < nLen; nPosi++)
    {
        while (TRUE)
        {
            nTemp = pUniqueHashFunc(pData[nPosi].eKey);
            if (nPosi >= ElemStartPosi[nTemp] && nPosi < ElemStartPosi[nTemp]+ElemCount[nTemp])
                break;
            while (ElemIncPosi[nTemp] < ElemCount[nTemp])
            {
                nNewPosi = ElemStartPosi[nTemp] + ElemIncPosi[nTemp];
                ElemIncPosi[nTemp]++;
                if (pUniqueHashFunc(pData[nNewPosi].eKey) != nTemp)
                {
                    DsSwapElem(&pData[nPosi], &pData[nNewPosi]);
                    break;
                }
            }
        }
    }
    delete []ElemCount;
    delete []ElemStartPosi;
    delete []ElemIncPosi;
}

// An improvement on the algorithm in 严蔚敏's book; that version saves one variable at the cost of wasted running time, which is not worth it
// This variant uses the fewest comparisons and also the fewest moves
static int QuickSortPartition(ELEM_TYPE *pData, int nLow, int nHigh)
{
    ELEM_TYPE eTemp;
    int nPrivot;
    DsCopyElem(&eTemp, &pData[nLow]);
    nPrivot = nLow++;
    while (nLow <= nHigh)
    {
        if (nLow > nPrivot) // (nHigh > nPrivot) would work just as well
        {
            while (nLow <= nHigh)
            {
                if (pData[nHigh].eKey < eTemp.eKey)
                {
                    DsCopyElem(&pData[nPrivot], &pData[nHigh]);
                    nPrivot = nHigh--;
                    break;
                }
                nHigh--;
            }
        }
        else
        {
            while (nLow <= nHigh)
            {
                if (pData[nLow].eKey > eTemp.eKey)
                {
                    DsCopyElem(&pData[nPrivot], &pData[nLow]);
                    nPrivot = nLow++;
                    break;
                }
                nLow++;
            }
        }
    }
    DsCopyElem(&pData[nPrivot], &eTemp);
    return nPrivot;
}

static void QuickSortRecursive(ELEM_TYPE *pData, int nLow, int nHigh)
{
    int nPrivot;
    if (nLow < nHigh)
    {
        nPrivot = QuickSortPartition(pData, nLow, nHigh);
        QuickSortRecursive(pData, nLow, nPrivot-1);
        QuickSortRecursive(pData, nPrivot+1, nHigh);
    }
}

// Quick sort
// E-mail: silitex@yeah.net
void QuickSort(ELEM_TYPE *pData, int nLen)
{
    QuickSortRecursive(pData, 0, nLen-1);
}

void BitSortRecursive(ELEM_TYPE *pData, int nLow, int nHigh, UNIQUE_HASH_CALLBACK pUniqueHashFunc, int nBit)
{
    int i;
    int j;
    int nAndMask;
    if (nLow >= nHigh)
    {
        return;
    }
    else if (nLow+1 == nHigh) // only two elements left, so a direct comparison is enough
    {
        if (pUniqueHashFunc(pData[nLow].eKey) > pUniqueHashFunc(pData[nHigh].eKey))
            DsSwapElem(&pData[nLow], &pData[nHigh]);
        return;
    }
    for (i = nLow, j = nHigh, nAndMask = 1<<(nBit-1); ; )
    {
        for (; i <= nHigh; i++)
        {
            if ((pUniqueHashFunc(pData[i].eKey)&nAndMask) != 0)
                break;
        }
        for (; j > i; j--)
        {
            if ((pUniqueHashFunc(pData[j].eKey)&nAndMask) == 0)
                break;
        }
        if (j > i) // another such pair has been found
        {
            DsSwapElem(&pData[i++], &pData[j--]);
            if (j < i)
                break;
        }
        else
        {
            break;
        }
    }
    if (nBit > 1)
    {
        nBit--;
        BitSortRecursive(pData, nLow, i-1, pUniqueHashFunc, nBit);
        BitSortRecursive(pData, i, nHigh, pUniqueHashFunc, nBit);
    }
}

// Bit sort -- a sort derived from counting sort and radix sort
// When m ranges over 0~65535, counting sort needs only a modest amount of memory and works very well
// When m ranges over 0~4294967295, however, the required memory becomes far too large; borrowing the idea of radix sort,
// we can counting-sort on the high word first and then counting-sort on the low word -- that is exactly radix sort
// If the 64K of memory for counting sort still seems too large, shrink it to 2^8 = 256 (an 8-bit counting sort), which is tiny
// If you want even less memory: 2^4, 2^2, 2^1, ..., and in the limit you get the special case of 2^0 -- bit sort
// The sorting procedure:
// If m ranges over 0~65535, scan from the left for the first number whose highest bit is 1, then scan from the right for a
// number whose highest bit is 0, and swap them; keep doing this until no such pair remains, and the highest bit is in order
// Then repeat for the next-highest bit, ..., down to the last bit
// From the description above, the space complexity is O(logm) and the time complexity is O(nlogm) (identical in the best and worst case)
// Over a large number of test data sets, the running time of bit sort is second only to counting sort and quick sort
// E-mail: silitex@yeah.net
void BitSort(ELEM_TYPE *pData, int nLen, UNIQUE_HASH_CALLBACK pUniqueHashFunc, int nUniqueHashRange)
{
    int m = nUniqueHashRange-1;
    int nBit;
    // Get log2(m)
    for (nBit = 0; m > 0; nBit++)
        m >>= 1;
    if (nBit == 0)
        return;
    BitSortRecursive(pData, 0, nLen-1, pUniqueHashFunc, nBit);
}

// Test the in-memory sorts
void TestEmsSort(void)
{
    CArray<ELEM_TYPE, ELEM_TYPE> aTest;
    CArray<ELEM_TYPE, ELEM_TYPE> aCopy;
    ELEM_TYPE eTemp;
    UINT i;
    for (i = 0; i < DS_BUBBLE_SORT_DATA_MAX; i++)
    {
        eTemp.eKey = WinRand()%DS_BUBBLE_SORT_DATA_RANGE;
        aCopy.Add(eTemp);
    }
#if 0
    aTest.Copy(aCopy);
    BubbleSort(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    BubbleSortDouble(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    GnomeSort(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    OddEvenSort(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    SelectSort(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    InsertSort(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    BinInsertSort(aTest.GetData(), aTest.GetSize());
#endif
#if 0
    aTest.Copy(aCopy);
    ShellSort(aTest.GetData(), aTest.GetSize(), ShellGapTest);
    aTest.Copy(aCopy);
    MergeSort(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    CombSort(aTest.GetData(), aTest.GetSize());
#endif
    aTest.Copy(aCopy);
    DebugOut(aTest.GetData(), aTest.GetSize());
    CountSort(aTest.GetData(), aTest.GetSize(), UniqueHashTest, DS_BUBBLE_SORT_DATA_RANGE);
    DebugOut(aTest.GetData(), aTest.GetSize());
#if 0
    aTest.Copy(aCopy);
    CountSortSimp(aTest.GetData(), aTest.GetSize(), DS_BUBBLE_SORT_DATA_RANGE);
#endif
    aTest.Copy(aCopy);
    DebugOut(aTest.GetData(), aTest.GetSize());
    QuickSort(aTest.GetData(), aTest.GetSize());
    DebugOut(aTest.GetData(), aTest.GetSize());
    aTest.Copy(aCopy);
    DebugOut(aTest.GetData(), aTest.GetSize());
    BitSort(aTest.GetData(), aTest.GetSize(), UniqueHashTest, DS_BUBBLE_SORT_DATA_RANGE);
    DebugOut(aTest.GetData(), aTest.GetSize());
#if 0
    aTest.Copy(aCopy);
    BitSortSimp(aTest.GetData(), aTest.GetSize(), DS_BUBBLE_SORT_DATA_RANGE);
#endif
}

int main(void)
{
    TestEmsSort();
    return 0;
}
Posted on 2009-10-30 14:01:04
VC6 itself already ships with qsort.
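For reference, a minimal sketch of how the CRT qsort could be applied to the ELEM_TYPE records from the first post (the comparator name is my own; qsort itself is the standard C library routine):

#include <stdlib.h>

// Comparison callback for the CRT qsort; compares two ELEM_TYPE records by their key field.
// ELEM_TYPE is the record type defined in the test code of the first post.
static int CompareElem(const void *p1, const void *p2)
{
    const ELEM_TYPE *e1 = (const ELEM_TYPE *)p1;
    const ELEM_TYPE *e2 = (const ELEM_TYPE *)p2;
    if (e1->eKey < e2->eKey) return -1;
    if (e1->eKey > e2->eKey) return 1;
    return 0;
}

// Usage: sort nLen records in place with the library routine.
//   qsort(pData, nLen, sizeof(ELEM_TYPE), CompareElem);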
Posted on 2009-10-30 14:15:39
Posted on 2009-10-30 14:24:46
It even uses goto:
/***
*qsort.c - quicksort algorithm; qsort() library function for sorting arrays
*
*       Copyright (c) 1985-1997, Microsoft Corporation. All rights reserved.
*
*Purpose:
*       To implement the qsort() routine for sorting arrays.
*
*******************************************************************************/

#include <cruntime.h>
#include <stdlib.h>
#include <search.h>

/* prototypes for local routines */
static void __cdecl shortsort(char *lo, char *hi, unsigned width,
                int (__cdecl *comp)(const void *, const void *));
static void __cdecl swap(char *p, char *q, unsigned int width);

/* this parameter defines the cutoff between using quick sort and
   insertion sort for arrays; arrays with lengths shorter or equal to the
   below value use insertion sort */

#define CUTOFF 8            /* testing shows that this is good value */

/***
*qsort(base, num, wid, comp) - quicksort function for sorting arrays
*
*Purpose:
*       quicksort the array of elements
*       side effects:  sorts in place
*
*Entry:
*       char *base = pointer to base of array
*       unsigned num = number of elements in the array
*       unsigned width = width in bytes of each array element
*       int (*comp)() = pointer to function returning analog of strcmp for
*               strings, but supplied by user for comparing the array elements.
*               it accepts 2 pointers to elements and returns neg if 1<2, 0 if
*               1=2, pos if 1>2.
*
*Exit:
*       returns void
*
*Exceptions:
*
*******************************************************************************/

/* sort the array between lo and hi (inclusive) */

void __cdecl qsort (
    void *base,
    unsigned num,
    unsigned width,
    int (__cdecl *comp)(const void *, const void *)
    )
{
    char *lo, *hi;              /* ends of sub-array currently sorting */
    char *mid;                  /* points to middle of subarray */
    char *loguy, *higuy;        /* traveling pointers for partition step */
    unsigned size;              /* size of the sub-array */
    char *lostk[30], *histk[30];
    int stkptr;                 /* stack for saving sub-array to be processed */

    /* Note: the number of stack entries required is no more than
       1 + log2(size), so 30 is sufficient for any array */

    if (num < 2 || width == 0)
        return;                 /* nothing to do */

    stkptr = 0;                 /* initialize stack */

    lo = base;
    hi = (char *)base + width * (num-1);        /* initialize limits */

    /* this entry point is for pseudo-recursion calling: setting
       lo and hi and jumping to here is like recursion, but stkptr is
       prserved, locals aren't, so we preserve stuff on the stack */
recurse:

    size = (hi - lo) / width + 1;        /* number of el's to sort */

    /* below a certain size, it is faster to use a O(n^2) sorting method */
    if (size <= CUTOFF) {
        shortsort(lo, hi, width, comp);
    }
    else {
        /* First we pick a partititioning element.  The efficiency of the
           algorithm demands that we find one that is approximately the
           median of the values, but also that we select one fast.  Using
           the first one produces bad performace if the array is already
           sorted, so we use the middle one, which would require a very
           wierdly arranged array for worst case performance.  Testing shows
           that a median-of-three algorithm does not, in general, increase
           performance. */

        mid = lo + (size / 2) * width;      /* find middle element */
        swap(mid, lo, width);               /* swap it to beginning of array */

        /* We now wish to partition the array into three pieces, one
           consisiting of elements <= partition element, one of elements
           equal to the parition element, and one of element >= to it.  This
           is done below; comments indicate conditions established at every
           step. */

        loguy = lo;
        higuy = hi + width;

        /* Note that higuy decreases and loguy increases on every iteration,
           so loop must terminate. */
        for (;;) {
            /* lo <= loguy < hi, lo < higuy <= hi + 1,
               A[i] <= A[lo] for lo <= i <= loguy,
               A[i] >= A[lo] for higuy <= i <= hi */

            do {
                loguy += width;
            } while (loguy <= hi && comp(loguy, lo) <= 0);

            /* lo < loguy <= hi+1, A[i] <= A[lo] for lo <= i < loguy,
               either loguy > hi or A[loguy] > A[lo] */

            do {
                higuy -= width;
            } while (higuy > lo && comp(higuy, lo) >= 0);

            /* lo-1 <= higuy <= hi, A[i] >= A[lo] for higuy < i <= hi,
               either higuy <= lo or A[higuy] < A[lo] */

            if (higuy < loguy)
                break;

            /* if loguy > hi or higuy <= lo, then we would have exited, so
               A[loguy] > A[lo], A[higuy] < A[lo],
               loguy < hi, highy > lo */

            swap(loguy, higuy, width);

            /* A[loguy] < A[lo], A[higuy] > A[lo]; so condition at top
               of loop is re-established */
        }

        /*     A[i] >= A[lo] for higuy < i <= hi,
               A[i] <= A[lo] for lo <= i < loguy,
               higuy < loguy, lo <= higuy <= hi
           implying:
               A[i] >= A[lo] for loguy <= i <= hi,
               A[i] <= A[lo] for lo <= i <= higuy,
               A[i] = A[lo] for higuy < i < loguy */

        swap(lo, higuy, width);     /* put partition element in place */

        /* OK, now we have the following:
              A[i] >= A[higuy] for loguy <= i <= hi,
              A[i] <= A[higuy] for lo <= i < higuy
              A[i] = A[lo] for higuy <= i < loguy */

        /* We've finished the partition, now we want to sort the subarrays
           [lo, higuy-1] and [loguy, hi].
           We do the smaller one first to minimize stack usage.
           We only sort arrays of length 2 or more.*/

        if ( higuy - 1 - lo >= hi - loguy ) {
            if (lo + width < higuy) {
                lostk[stkptr] = lo;
                histk[stkptr] = higuy - width;
                ++stkptr;
            }                           /* save big recursion for later */

            if (loguy < hi) {
                lo = loguy;
                goto recurse;           /* do small recursion */
            }
        }
        else {
            if (loguy < hi) {
                lostk[stkptr] = loguy;
                histk[stkptr] = hi;
                ++stkptr;               /* save big recursion for later */
            }

            if (lo + width < higuy) {
                hi = higuy - width;
                goto recurse;           /* do small recursion */
            }
        }
    }

    /* We have sorted the array, except for any pending sorts on the stack.
       Check if there are any, and do them. */

    --stkptr;
    if (stkptr >= 0) {
        lo = lostk[stkptr];
        hi = histk[stkptr];
        goto recurse;           /* pop subarray from stack */
    }
    else
        return;                 /* all subarrays done */
}

/***
*shortsort(hi, lo, width, comp) - insertion sort for sorting short arrays
*
*Purpose:
*       sorts the sub-array of elements between lo and hi (inclusive)
*       side effects:  sorts in place
*       assumes that lo < hi
*
*Entry:
*       char *lo = pointer to low element to sort
*       char *hi = pointer to high element to sort
*       unsigned width = width in bytes of each array element
*       int (*comp)() = pointer to function returning analog of strcmp for
*               strings, but supplied by user for comparing the array elements.
*               it accepts 2 pointers to elements and returns neg if 1<2, 0 if
*               1=2, pos if 1>2.
*
*Exit:
*       returns void
*
*Exceptions:
*
*******************************************************************************/

static void __cdecl shortsort (
    char *lo,
    char *hi,
    unsigned width,
    int (__cdecl *comp)(const void *, const void *)
    )
{
    char *p, *max;

    /* Note: in assertions below, i and j are alway inside original bound of
       array to sort. */

    while (hi > lo) {
        /* A[i] <= A[j] for i <= j, j > hi */
        max = lo;
        for (p = lo+width; p <= hi; p += width) {
            /* A[i] <= A[max] for lo <= i < p */
            if (comp(p, max) > 0) {
                max = p;
            }
            /* A[i] <= A[max] for lo <= i <= p */
        }

        /* A[i] <= A[max] for lo <= i <= hi */

        swap(max, hi, width);

        /* A[i] <= A[hi] for i <= hi, so A[i] <= A[j] for i <= j, j >= hi */

        hi -= width;

        /* A[i] <= A[j] for i <= j, j > hi, loop top condition established */
    }
    /* A[i] <= A[j] for i <= j, j > lo, which implies A[i] <= A[j] for i < j,
       so array is sorted */
}

/***
*swap(a, b, width) - swap two elements
*
*Purpose:
*       swaps the two array elements of size width
*
*Entry:
*       char *a, *b = pointer to two elements to swap
*       unsigned width = width in bytes of each array element
*
*Exit:
*       returns void
*
*Exceptions:
*
*******************************************************************************/

static void __cdecl swap (
    char *a,
    char *b,
    unsigned width
    )
{
    char tmp;

    if ( a != b )
        /* Do the swap one character at a time to avoid potential alignment
           problems. */
        while ( width-- ) {
            tmp = *a;
            *a++ = *b;
            *b++ = tmp;
        }
}
OP | Posted on 2009-10-30 15:07:17
Which directory is this Microsoft source file in? I don't seem to be able to find it. Thanks!
OP | Posted on 2009-10-30 15:18:50
Found it under VC9.0, thanks! I had completely forgotten about the source shipped with Microsoft's CRT.
OP | Posted on 2009-10-30 16:11:40
(Last edited by silitex on 2009-10-30 16:42) Funny how this worked out. I had intended to get familiar with every sorting algorithm on Wikipedia before looking at 周建钦's "ultra quick sort", but while reading about counting sort and radix sort I suddenly hit on this bit sort algorithm. When I later went back to 周建钦's ultra quick sort, I found that the underlying sorting idea is exactly the same! Clearly, though, 周建钦's ultra quick sort is not as fast as this BitSort. Even so, after verification on large amounts of data (the timing is based on the CPU's performance counter and is accurate to the nanosecond), when n is roughly equal to m its speed does not beat quick sort; when n >= 200000 their running times are nearly identical; and when n is far greater than m, plain counting sort is faster still. It is also clear that 周's statements of the time and space complexity are wrong.
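For illustration, a sketch of this kind of performance-counter timing on Windows (my own example, not the thread's actual measurement harness; it assumes the BitSort, UniqueHashTest and DS_BUBBLE_SORT_DATA_RANGE definitions from the first post):

#include <windows.h>
#include <stdio.h>

// Time a single sort call with the Windows high-resolution performance counter.
// The counter frequency converts raw ticks into seconds (sub-microsecond resolution on most machines).
double TimeSortSeconds(ELEM_TYPE *pData, int nLen)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    BitSort(pData, nLen, UniqueHashTest, DS_BUBBLE_SORT_DATA_RANGE);  // the routine under test
    QueryPerformanceCounter(&t1);
    return (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
}

// Example: printf("BitSort took %.6f s\n", TimeSortSeconds(aTest.GetData(), aTest.GetSize()));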
OP | Posted on 2009-10-31 08:46:24
(Last edited by silitex on 2009-10-31 10:12) Microsoft's non-recursive quick sort keeps its space complexity down to O(logn) even in the worst case, which is genuinely surprising. To study it, I translated Microsoft's non-recursive quick sort into code written in the same style as my own quick sort, and found that it actually runs slower than the recursive version (over a large number of tests). The reason appears to be that Microsoft swaps elements with a scheme similar to the one in 殷人昆's book, whereas the scheme in 严蔚敏's book actually yields the lowest cost. Below are the timing results for n = 100000, m = n. The first column is an exact tick count; a value such as 262648.1 is a "quasi-swap" count, i.e. swaps + moves/3, with moves%3 after the decimal point.

Ticks     | Compares | Swaps / quasi-swaps | Algorithm                | Space complexity
398067668 | 3966754  | 768735              | Comb sort                | O(1)
139068285 | 2106347  | 262648.1            | Quick sort (mine)        | O(n) worst, O(logn) average
163932502 | 2102328  | 397655              | Quick sort (Microsoft)   | O(logn)
167993880 | 2106347  | 301311.1            | Quick sort (严蔚敏)       | O(n) worst, O(logn) average
380433367 | 2120692  | 1158540.0           | Quick sort (殷人昆)       | O(n) worst, O(logn) average
188575853 | 299982   | 99990               | Counting sort            | O(m), m = number of possible key values
195871200 | 299982   | 99990               | Simplified counting sort | O(m), m = number of possible key values
320452718 | 1341610  | 447203.1            | Shell sort               | O(1)
593682330 | 1466030  | 1251155             | Merge sort               | O(n)
601350990 | 1466030  | 987002              | Plain merge sort         | O(n)
177502995 | 1679679  | 380504              | Bit sort                 | O(logn)
154381163 | 1679679  | 380504              | Simplified bit sort      | O(logn)
OP | Posted on 2009-10-31 09:07:44
Attached are the three quick sort versions used in the tests: 严蔚敏's, 殷人昆's, and the Microsoft version rewritten in my code style:
// The algorithm from 严蔚敏's book
// This version performs fewer moves than 殷人昆's, but needs more comparisons
static int QuickSortPartition2(ELEM_TYPE *pData, int nLow, int nHigh)
{
    ELEM_TYPE eTemp;
    DsCopyElem(&eTemp, &pData[nLow]);
    while (nLow < nHigh)
    {
        while (nLow < nHigh && pData[nHigh].eKey >= eTemp.eKey)
            nHigh--;
        DsCopyElem(&pData[nLow], &pData[nHigh]);
        while (nLow < nHigh && pData[nLow].eKey <= eTemp.eKey)
            nLow++;
        DsCopyElem(&pData[nHigh], &pData[nLow]);
    }
    DsCopyElem(&pData[nLow], &eTemp);
    return nLow;
}

static void QuickSortRecursive2(ELEM_TYPE *pData, int nLow, int nHigh)
{
    int nPrivot;
    if (nLow < nHigh)
    {
        nPrivot = QuickSortPartition2(pData, nLow, nHigh);
        QuickSortRecursive2(pData, nLow, nPrivot-1);
        QuickSortRecursive2(pData, nPrivot+1, nHigh);
    }
}

// Quick sort
void QuickSort2(ELEM_TYPE *pData, int nLen)
{
    QuickSortRecursive2(pData, 0, nLen-1);
}

// The algorithm from 殷人昆's book, with minor improvements
// This version performs more moves than 严蔚敏's, although it needs fewer comparisons
static int QuickSortPartition3(ELEM_TYPE *pData, int nLow, int nHigh)
{
    int nPrivot = nLow;
    int i;
    for (i = nLow+1; i <= nHigh; i++)
    {
        if (pData[i].eKey < pData[nLow].eKey)
        {
            nPrivot++;
            if (nPrivot != i)
                DsSwapElem(&pData[nPrivot], &pData[i]);
        }
    }
    DsSwapElem(&pData[nLow], &pData[nPrivot]);
    return nPrivot;
}

template <class TYPE>
static void QuickSortRecursive3(TYPE *pData, int nLow, int nHigh)
{
    int nPrivot;
    if (nLow < nHigh)
    {
        nPrivot = QuickSortPartition3(pData, nLow, nHigh);
        QuickSortRecursive3(pData, nLow, nPrivot-1);
        QuickSortRecursive3(pData, nPrivot+1, nHigh);
    }
}

// Quick sort
void QuickSort3(ELEM_TYPE *pData, int nLen)
{
    QuickSortRecursive3(pData, 0, nLen-1);
}

#define CUTOFF 8            /* testing shows that this is good value */

static void shortsort (ELEM_TYPE *pData, int lo, int hi)
{
    int i;
    int max;
    while (hi > lo) {
        max = lo;
        for (i = lo+1; i <= hi; i++) {
            if (pData[i].eKey > pData[max].eKey) {
                max = i;
            }
        }
        if (max != hi) {
            DsSwapElem(&pData[max], &pData[hi]);
        }
        hi--;
    }
}

// A translation of Microsoft's quick sort; this makes it easier to debug
// and to understand the ideas behind Microsoft's implementation
void QuickSortMs(ELEM_TYPE *pData, int nLen)
{
#define STKSIZ (8*sizeof(nLen)-2)
    int lo, hi;             /* ends of sub-array currently sorting */
    int mid;                /* points to middle of subarray */
    int loguy, higuy;       /* traveling pointers for partition step */
    int size;               /* size of the sub-array */
    int lostk[STKSIZ], histk[STKSIZ];
    int stkptr;             /* stack for saving sub-array to be processed */

    if (nLen < 2)
        return;             /* nothing to do */

    stkptr = 0;             /* initialize stack */

    lo = 0;
    hi = nLen-1;            /* initialize limits */

    /* this entry point is for pseudo-recursion calling: setting
       lo and hi and jumping to here is like recursion, but stkptr is
       prserved, locals aren't, so we preserve stuff on the stack */
recurse:

    size = hi - lo + 1;     /* number of el's to sort */

    /* below a certain size, it is faster to use a O(n^2) sorting method */
    if (size <= CUTOFF) {
        shortsort(pData, lo, hi);
    }
    else {
        /* First we pick a partititioning element.  The efficiency of the
           algorithm demands that we find one that is approximately the
           median of the values, but also that we select one fast.  Using
           the first one produces bad performace if the array is already
           sorted, so we use the middle one, which would require a very
           wierdly arranged array for worst case performance.  Testing shows
           that a median-of-three algorithm does not, in general, increase
           performance. */

        mid = lo + size / 2;
        DsSwapElem(&pData[mid], &pData[lo]);

        /* We now wish to partition the array into three pieces, one
           consisiting of elements <= partition element, one of elements
           equal to the parition element, and one of element >= to it.  This
           is done below; comments indicate conditions established at every
           step. */

        loguy = lo;
        higuy = hi + 1;

        /* Note that higuy decreases and loguy increases on every iteration,
           so loop must terminate. */
        for (;;) {
            /* lo <= loguy < hi, lo < higuy <= hi + 1,
               A[i] <= A[lo] for lo <= i <= loguy,
               A[i] >= A[lo] for higuy <= i <= hi */

            do {
                loguy++;
            } while (loguy <= hi && pData[loguy].eKey <= pData[lo].eKey);

            /* lo < loguy <= hi+1, A[i] <= A[lo] for lo <= i < loguy,
               either loguy > hi or A[loguy] > A[lo] */

            do {
                higuy--;
            } while (higuy > lo && pData[higuy].eKey >= pData[lo].eKey);

            /* lo-1 <= higuy <= hi, A[i] >= A[lo] for higuy < i <= hi,
               either higuy <= lo or A[higuy] < A[lo] */

            if (higuy < loguy)
                break;

            /* if loguy > hi or higuy <= lo, then we would have exited, so
               A[loguy] > A[lo], A[higuy] < A[lo],
               loguy < hi, highy > lo */

            DsSwapElem(&pData[loguy], &pData[higuy]);

            /* A[loguy] < A[lo], A[higuy] > A[lo]; so condition at top
               of loop is re-established */
        }

        /*     A[i] >= A[lo] for higuy < i <= hi,
               A[i] <= A[lo] for lo <= i < loguy,
               higuy < loguy, lo <= higuy <= hi
           implying:
               A[i] >= A[lo] for loguy <= i <= hi,
               A[i] <= A[lo] for lo <= i <= higuy,
               A[i] = A[lo] for higuy < i < loguy */

        DsSwapElem(&pData[lo], &pData[higuy]);      /* put partition element in place */

        /* OK, now we have the following:
              A[i] >= A[higuy] for loguy <= i <= hi,
              A[i] <= A[higuy] for lo <= i < higuy
              A[i] = A[lo] for higuy <= i < loguy */

        /* We've finished the partition, now we want to sort the subarrays
           [lo, higuy-1] and [loguy, hi].
           We do the smaller one first to minimize stack usage.
           We only sort arrays of length 2 or more.*/

        if ( higuy - 1 - lo >= hi - loguy ) {
            if (lo + 1 < higuy) {
                lostk[stkptr] = lo;
                histk[stkptr] = higuy - 1;
                ++stkptr;
            }                       /* save big recursion for later */

            if (loguy < hi) {
                lo = loguy;
                goto recurse;       /* do small recursion */
            }
        }
        else {
            if (loguy < hi) {
                lostk[stkptr] = loguy;
                histk[stkptr] = hi;
                ++stkptr;           /* save big recursion for later */
            }

            if (lo + 1 < higuy) {
                hi = higuy - 1;
                goto recurse;       /* do small recursion */
            }
        }
    }

    /* We have sorted the array, except for any pending sorts on the stack.
       Check if there are any, and do them. */

    --stkptr;
    if (stkptr >= 0) {
        lo = lostk[stkptr];
        hi = histk[stkptr];
        goto recurse;       /* pop subarray from stack */
    }
    else {
        return;             /* all subarrays done */
    }
}
Posted on 2009-11-3 15:02:15
(Last edited by liangbch on 2009-11-3 15:06) Bit sorting has been studied here before; see http://bbs.emath.ac.cn/viewthrea ... &fromuid=25#pid9829. I only skimmed the comments in your code, but this implementation does not look ideal to me: it needs to allocate extra space.

The basic idea of the "bitwise quick sort for unsigned integers" is:
1. Any integer is an m-bit binary number; the bits of a number n, from high to low, are $B_m$, $B_{m-1}$, ......, $B_1$, $B_0$.
2. For a sequence whose first element has index begin and whose last element has index end, the algorithm is:

    // Sort all elements of the array x[begin] to x[end] by bit k
    void sqortByBit( x, begin, end, k)
    {
        i = begin; j = end;
        while (i <= j)
        {
            while (i <= j && bit(x[i], k) == 0) i++;    // bit k of x[i] is already 0
            while (i <= j && bit(x[j], k) == 1) j--;    // bit k of x[j] is already 1
            if (i < j)
            {
                x[i] <--> x[j];                         // swap x[i] and x[j]
                i++; j--;
            }
        }
        // The result of the swaps is that the elements in the front part of the sequence have bit k = 0
        // and the elements in the back part have bit k = 1
        mid = i-1;
        if (k > 1)
        {
            sqortByBit( x, begin, mid, k-1);    // sort the front sub-sequence by bit k-1
            sqortByBit( x, mid+1, end, k-1);    // sort the back sub-sequence by bit k-1
        }
    }

This is a recursive algorithm, very similar to quick sort. It differs from quick sort in two ways:
1. Each comparison looks only at bit k.
2. Memory is accessed sequentially, whereas quick sort does random access within a range, so this sort has an additional advantage on cache-sensitive machines.

Time complexity: quick sort costs n*log(n), while bitwise sorting costs n*log(m), where m is the largest key. So if m <= n, this algorithm is faster than quick sort.

Improvement: if the keys are uniformly distributed, bit sort and quick sort can be combined to make it even faster. The method: when m > n, call quick sort once the length of a sub-sequence falls below some threshold (a sketch of this hybrid follows below).
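A minimal sketch of the hybrid suggested above, assuming plain unsigned integer keys; the name sortByBitHybrid and the cutoff value are my own choices, not from the referenced thread:

#include <stdlib.h>

#define HYBRID_CUTOFF 32   /* assumed threshold; tune experimentally */

static int CompareUint(const void *a, const void *b)
{
    unsigned int x = *(const unsigned int *)a, y = *(const unsigned int *)b;
    return (x > y) - (x < y);
}

// Partition x[begin..end] by bit k, but fall back to the CRT qsort on short sub-sequences.
void sortByBitHybrid(unsigned int *x, int begin, int end, int k)
{
    unsigned int mask, t;
    int i, j;
    if (end - begin + 1 < HYBRID_CUTOFF || k < 0) {
        if (end > begin)
            qsort(x + begin, (size_t)(end - begin + 1), sizeof(unsigned int), CompareUint);
        return;
    }
    mask = 1u << k;
    i = begin; j = end;
    while (i <= j) {
        while (i <= j && (x[i] & mask) == 0) i++;
        while (i <= j && (x[j] & mask) != 0) j--;
        if (i < j) { t = x[i]; x[i] = x[j]; x[j] = t; i++; j--; }
    }
    sortByBitHybrid(x, begin, i - 1, k - 1);   // front part: bit k is 0
    sortByBitHybrid(x, i, end, k - 1);         // back part: bit k is 1
}

// Usage for full 32-bit keys: sortByBitHybrid(keys, 0, count - 1, 31);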