
In my own tests cubesort is 2 times slower than mergesort for random integers and 2.5 times faster for sorted integers. This is with the latest version, which I uploaded today and which improves performance by about 25%.
I'm not sure this gap can be closed, as mergesort has superior cache performance on random data. I haven't been able to find a decent quicksort implementation.
Cubesort seems best suited for cases where a data set is more than 50% in order.
When I have a couple of hours I'll make a string-based version of cubesort (very easy) and see how fast it sorts the file.
Does the file contain duplicates?
Edit:
It appears the file is in reverse order. It takes about 3.5 seconds to load the file and 5 seconds to sort it using cubesort.
modified 29-Jun-14 19:08.





Having benchmarked Cubesort with Knight-Tours, my amateurish evaluation comes down to two words: Mutsi! Mutsunka!
My Sandokan (r3fix) is inferior to Cubesort: 10,000,000 lines (128 bytes each) were sorted, and then the already-sorted file was sorted once more:

 sorter \ file-dataset | KT10000000.txt                      | KT10000000_SORTED.txt
 Cubesort              |  80 seconds                         |  45 seconds
 Windows' sort.exe     | 118 seconds                         | 115 seconds
 Sandokan              | 131 seconds; swappings: 36,826,242+ | 114 seconds; swappings: 0

The benchmark is downloadable at: www.sanmayce.com/Downloads/TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit.zip
The full log follows:
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>dir
Volume in drive E is SSD_Sanmayce
Volume Serial Number is 9CF6FEA3
Directory of E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit
06/29/2014 11:45 PM <DIR> .
06/29/2014 11:45 PM <DIR> ..
06/29/2014 11:45 PM 12,566 cubesort1.0_nonoriginal.c
06/29/2014 11:45 PM 86,016 cubesort1.0_nonoriginal.exe
06/29/2014 11:45 PM 202,880 Cubesort.zip
06/29/2014 11:45 PM 24,490 Knighttour_r8dump.c
06/29/2014 11:45 PM 79,872 Knighttour_r8dump.exe
06/29/2014 11:45 PM 116 Make_EXEs.bat
06/29/2014 11:45 PM 1,604 MokujIN prompt.lnk
06/29/2014 11:45 PM 301,406 Sandokan_Logo.pdf
06/29/2014 11:45 PM 170,433 Sandokan_QuickSortExternal_4+GB_r3fix.c
06/29/2014 11:45 PM 122,368 Sandokan_QuickSortExternal_4+GB_r3fix.exe
06/29/2014 11:45 PM 6,144 timer64.exe
06/29/2014 11:45 PM 1,111 TriShowdown_Sandokan_vs_Windows_vs_Cubesort.bat
12 File(s) 1,009,006 bytes
2 Dir(s) 27,537,633,280 bytes free
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>move ..\KT10000000.txt
1 file(s) moved.
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>TriShowdown_Sandokan_vs_Windows_vs_Cubesort.bat
Sorting 10,000,000 KT...
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>timer64.exe cubesort1.0_nonoriginal.exe KT10000000.txt
Cubesort: sorted 10000000 elements in 66503 clocks.
Kernel Time = 6.754 = 8%
User Time = 54.756 = 68%
Process Time = 61.511 = 76% Virtual Memory = 2676 MB
Global Time = 80.355 = 100% Physical Memory = 2447 MB
Sorting 10,000,000 KT...
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>timer64.exe sort /M 1048576 /T D: KT10000000.txt /O WindowsSORT_ResultantFile.txt
Kernel Time = 1.466 = 1%
User Time = 88.249 = 74%
Process Time = 89.716 = 75% Virtual Memory = 1028 MB
Global Time = 118.248 = 100% Physical Memory = 1021 MB
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>fc Cubesort_ResultantFile.txt WindowsSORT_ResultantFile.txt /b
Comparing files Cubesort_ResultantFile.txt and WINDOWSSORT_RESULTANTFILE.TXT
FC: no differences encountered
Sorting 10,000,000 KT...
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>timer64.exe "Sandokan_QuickSortExternal_4+GB_r3fix.exe" KT10000000.txt /fast /ascend 512
Sandokan_QuickSortExternal_4+GB, revision 3fix, written by Kaze, using Bill Durango's Quicksort source.
Size of input file: 1,300,000,000
Counting lines ...
Lines encountered: 10,000,000
Longest line (including CR if present): 129
Allocated memory for pointerstolines in MB: 76
Assigning pointers ...
sizeof(int), sizeof(void*): 4, 8
Trying to allocate memory for the file itself in MB: 1239 ... OK! Get on with fast internal accesses.
Uploading ...
Sorting 10,000,000 Pointers ...
Quicksort (Insertionsort for small blocks) commenced ...
\ RightEnd: 000,001,500,225; NumberOfSplittings: 0,001,676,176; Done: 100% ...
NumberOfSwappings: 36,826,242
NumberOfComparisons: 255,105,484
The time to sort 10,000,000 items via Quicksort+Insertionsort was 106,704 clocks.
Dumping the sorted data ...
\ Done 100% ...
Dumped 10,000,000 lines.
OK! Incoming and resultant file's sizes match.
Dump time: 6,739 clocks.
Total time: 131,258 clocks.
Performance: 9,904 bytes/clock.
Done successfully.
Kernel Time = 5.397 = 4%
User Time = 116.173 = 88%
Process Time = 121.571 = 92% Virtual Memory = 1320 MB
Global Time = 131.461 = 100% Physical Memory = 1321 MB
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>fc QuickSortExternal_4+GB.txt WindowsSORT_ResultantFile.txt /b
Comparing files QuickSortExternal_4+GB.txt and WINDOWSSORT_RESULTANTFILE.TXT
FC: no differences encountered
Sorting sorted 10,000,000 KT...
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>timer64.exe cubesort1.0_nonoriginal.exe WindowsSORT_ResultantFile.txt
Cubesort: sorted 10000000 elements in 28064 clocks.
Kernel Time = 6.723 = 14%
User Time = 27.284 = 60%
Process Time = 34.008 = 74% Virtual Memory = 2661 MB
Global Time = 45.474 = 100% Physical Memory = 2560 MB
Sorting sorted 10,000,000 KT...
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>timer64.exe sort /M 1048576 /T D: WindowsSORT_ResultantFile.txt /O WindowsSORT_ResultantFile_.txt
Kernel Time = 1.170 = 1%
User Time = 76.237 = 66%
Process Time = 77.407 = 67% Virtual Memory = 1028 MB
Global Time = 115.081 = 100% Physical Memory = 1021 MB
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>fc Cubesort_ResultantFile.txt WindowsSORT_ResultantFile_.txt /b
Comparing files Cubesort_ResultantFile.txt and WINDOWSSORT_RESULTANTFILE_.TXT
FC: no differences encountered
Sorting sorted 10,000,000 KT...
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>timer64.exe "Sandokan_QuickSortExternal_4+GB_r3fix.exe" WindowsSORT_ResultantFile.txt /fast /ascend 512
Sandokan_QuickSortExternal_4+GB, revision 3fix, written by Kaze, using Bill Durango's Quicksort source.
Size of input file: 1,300,000,000
Counting lines ...
Lines encountered: 10,000,000
Longest line (including CR if present): 129
Allocated memory for pointerstolines in MB: 76
Assigning pointers ...
sizeof(int), sizeof(void*): 4, 8
Trying to allocate memory for the file itself in MB: 1239 ... OK! Get on with fast internal accesses.
Uploading ...
Sorting 10,000,000 Pointers ...
Quicksort (Insertionsort for small blocks) commenced ...
\ RightEnd: 000,010,000,000; NumberOfSplittings: 0,001,611,392; Done: 100% ...
NumberOfSwappings: 0
NumberOfComparisons: 214,016,797
The time to sort 10,000,000 items via Quicksort+Insertionsort was 90,278 clocks.
Dumping the sorted data ...
\ Done 100% ...
Dumped 10,000,000 lines.
OK! Incoming and resultant file's sizes match.
Dump time: 5,429 clocks.
Total time: 114,488 clocks.
Performance: 11,354 bytes/clock.
Done successfully.
Kernel Time = 6.364 = 5%
User Time = 99.372 = 86%
Process Time = 105.737 = 92% Virtual Memory = 1320 MB
Global Time = 114.691 = 100% Physical Memory = 1321 MB
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>fc QuickSortExternal_4+GB.txt WindowsSORT_ResultantFile_.txt /b
Comparing files QuickSortExternal_4+GB.txt and WINDOWSSORT_RESULTANTFILE_.TXT
FC: no differences encountered
E:\TriShowdown_Sandokan_vs_Windows_vs_Cubesort_Intel_64bit>
Cubesort is one lovely piece of software; Mr. Hoven, I salute you with Beth Ditto's Heavy Cross:
It's a cruel cruel world to face on your own
A heavy cross to carry alone
The lights are on but everyone's gone
And it's cruel
It's a funny way to make ends meet
When the lights are out on every street
It feels alright but never complete
without you
I chose you
If it's already have been done
Undo it
It takes two
It is up to me and you to proove it
On the rainy nights even the coldest days
you're moments ago but seconds away
The principle of Nature it's true but it's a cruel world
See we can play it safe or play it cool
Follow the leader or make up all the rules
Whatever you want the choice is yours
So choose me
I chose you
"If it's already have been done... undo it."  I wish Beth was into programming.
The most decent Quicksort I know is this:
https://code.google.com/p/libdivsufsort/





libdivsufsort implements a suffix array, which according to some sources is about as fast as quicksort.
I made some improvements to Cubesort's memory handling and it is now 1.21 times slower than mergesort for random data, and 5.75 times faster for sorted data.
I might see further gains by setting BSC_L to 1 when comparing strings.





Cubesort is awesome, with big potential on top of that.
The thing that impresses me most is its construction speed, not to mention the 'free' dump.
I couldn't resist trying my 'secret weapon': a frozen/unfinished subproject called 'Tcheburaschkasort'.
It is entirely based on the titanium-strong Bayer-McCreight algorithm, which I implemented specifically and only as order 3 (three children maximum).
It's been some time since I wrote it, enough to forget this and that, but the initial idea was to use the next dimension, i.e. to use millions of B-trees.
My unfinished Tcheburaschkasort currently uses a 24-bit pseudo-hash (in fact the first 1..3 byte(s) ARE the slot), giving a forest of 16,777,216 B-trees of order 3:
UINT PseudoHash(const char *str, unsigned int wrdlen)
{
    UINT hash32 = 0;
    const char *p = str;
    // Use a 256-based system for the first 3 bytes i.e. left is more significant:
    // Currently 3 bytes long i.e. 24-bit:
    if (wrdlen >= 1) {
        hash32 = hash32 | ((*p) << 16);
        p += 1;
    }
    if (wrdlen >= 2) {
        hash32 = hash32 | ((*p) << 8);
        p += 1;
    }
    if (wrdlen >= 3) {
        hash32 = hash32 | ((*p) << 0);
        p += 1;
    }
    return hash32;
}
Each slot houses the root of a tree.
Not using any hash, Tcheburaschkasort resembles a hypercube of order 0; using the hash makes it 1D, i.e. of order 1: the lookup hashtable is the axis.
In short, the randomness of those prefixes (1..3 bytes long) decides the speed.
For example, with KT10000000.txt the first 3 bytes of every line are equal, 'A8C', so there is only one tree and performance is poor,
because all 10,000,000 lines start from board position A8.
Knight-Tours: each square represents a board position, from A1 (bottom-left) to H8 (top-right):

a8 b8 ... h8
a7 ...
... ...
a1 ... h1

When the data is made more diverse, e.g. by reversing the order of the tours, KT10000000.txt yields 50 trees; see below.
Let us see what boost comes from this diversity.
So, Tcheburaschkasort vs Cubesort results for the KT10M file-dataset (10,000,000 Knight-Tours from A8, reversed):
NOTE:
Tcheburaschkasort is based on my superfast multipurpose text ripper Leprechaun.
Tcheburaschkasort still DOES NOT make an in-order dump/traversal (the resultant file is unsorted; to be done). This DOES NOT affect the timings, though.
The full log is as follows:
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>dir
Volume in drive E is SSD_Sanmayce
Volume Serial Number is 9CF6FEA3
Directory of E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit
07/03/2014 04:19 AM <DIR> .
07/03/2014 04:19 AM <DIR> ..
07/03/2014 04:20 AM 823 BiShowdown_Cubesort_Tcheburaschkasort.bat
07/03/2014 04:20 AM 12,566 cubesort1.0_nonoriginal.c
07/03/2014 04:20 AM 86,016 cubesort1.0_nonoriginal.exe
07/03/2014 04:20 AM 202,880 Cubesort.zip
07/03/2014 04:20 AM 24,490 Knighttour_r8dump.c
07/03/2014 04:20 AM 79,872 Knighttour_r8dump.exe
07/03/2014 04:20 AM 25,071 Knighttour_r8dump_REVERSE.c
07/03/2014 04:20 AM 79,360 Knighttour_r8dump_REVERSE.exe
07/03/2014 04:20 AM 216 Make_EXEs.bat
07/03/2014 04:20 AM 1,604 MokujIN prompt.lnk
07/03/2014 04:20 AM 170,433 Sandokan_QuickSortExternal_4+GB_r3fix.c
07/03/2014 04:20 AM 122,368 Sandokan_QuickSortExternal_4+GB_r3fix.exe
07/03/2014 04:20 AM 327,819 TcheburaschkaSort.c
07/03/2014 04:20 AM 1,771,134 TcheburaschkaSort.cod
07/03/2014 04:20 AM 140,288 TcheburaschkaSort.exe
07/03/2014 04:20 AM 6,144 timer64.exe
16 File(s) 3,051,084 bytes
2 Dir(s) 29,523,390,464 bytes free
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>BiShowdown_Cubesort_Tcheburaschkasort.bat
Dumping 10,000,000 reversed KT...
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>Knighttour_r8dump_REVERSE A8 10000000 1>KT10000000_fromendtostart.txt
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>sort /M 1048576 /T D: /R KT10000000_fromendtostart.txt /O KT10000000_fromendtostartSORTEDINREVERSE.txt
Sorting 10,000,000 KT...
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>timer64.exe cubesort1.0_nonoriginal.exe KT10000000_fromendtostart.txt
Cubesort: sorted 10000000 elements in 94349 clocks.
Kernel Time = 8.283 = 7%
User Time = 59.592 = 55%
Process Time = 67.876 = 62% Virtual Memory = 2689 MB
Global Time = 108.217 = 100% Physical Memory = 2497 MB
Sorting 10,000,000 KT...
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>timer64.exe sort /M 1048576 /T D: KT10000000_fromendtostart.txt /O KT10000000_fromendtostartSORTED.txt
Kernel Time = 1.778 = 1%
User Time = 83.132 = 48%
Process Time = 84.911 = 49% Virtual Memory = 1028 MB
Global Time = 173.113 = 100% Physical Memory = 1021 MB
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>fc Cubesort_ResultantFile.txt KT10000000_fromendtostartSORTED.txt /b
Comparing files Cubesort_ResultantFile.txt and KT10000000_FROMENDTOSTARTSORTED.TXT
FC: no differences encountered
Sorting 10,000,000 KT...
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>dir KT10000000_fromendtostart.txt/b 1>TcheburaschkaSort.lst
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>timer64.exe TcheburaschkaSort TcheburaschkaSort.lst TcheburaschkaSort.txt 2900123 y
Leprechaun_singleton (FastInFuture Greedy ngramRipper), rev. 16FIXFIX, written by Svalqyatchx.
Purpose: Rips all distinct 1grams (1word phrases) with length 1..128 chars from incoming texts.
Feature1: All words within xlets/ngrams are in range 1..31 chars inclusive.
Feature2: In this revision 128MB 1way hash is used which results in 16,777,216 external BTrees of order 3.
Feature3: In this revision, 1 pass is to be made.
Feature4: If the external memory has latency 99+microseconds then !(look no further), IOPS(seektime) rules.
Pass #1 of 1:
Size of input file with files for Leprechauning: 34
Allocating HASH memory 134,217,793 bytes ... OK
Allocating memory 2833MB ... OK
Size of Input TEXTual file: 1,300,000,000
\; 00,222,222P/s; Phrase count: 10,000,000 of them 10,000,000 distinct; Done: 64/64
Bytes per second performance: 28,888,888B/s
Phrases per second performance: 222,222P/s
Time for putting phrases into trees: 45 second(s)
Flushing UNsorted phrases: 100%; Shaking trees performance: 00,952,380P/s
Time for shaking phrases from trees: 21 second(s)
Leprechaun: Current pass done.
Total memory needed for one pass: 2,133,638KB
Total distinct phrases: 10,000,000
Total time: 67 second(s)
Total performance: 149,253P/s i.e. phrases per second
Leprechaun: Done.
Kernel Time = 6.193 = 9%
User Time = 59.732 = 89%
Process Time = 65.926 = 99% Virtual Memory = 2967 MB
Global Time = 66.580 = 100% Physical Memory = 2218 MB
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>type Leprechaun.LOG
Leprechaun report:
Number Of Hash Collisions (Distinct WORDs - Number Of Trees): 9,999,950
Number Of Trees(GREATER THE BETTER): 50
Number Of LEAFs(littler THE BETTER) not counting ROOT LEAFs: 7,533,898
Highest Tree not counting ROOT Level i.e. CORONA levels(littler THE BETTER): 15
Used value for third parameter in KB: 2,900,123
Use next time as third parameter: 2,133,638
Total Attempts to Find/Put WORDs into Btrees order 3: 164,504,421
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>type KT10000000_fromendtostart.txt|more
E3D1C3E4F6D5B6A4C5E6F4D3B2C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
F6E4C3D1E3D5B6A4C5E6F4D3B2C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
E3D1C3A4B6D5F6E4C5E6F4D3B2C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
B6A4C3D1E3D5F6E4C5E6F4D3B2C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
E3D1C3E4F6D5F4E6C5D3B2A4B6C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
F6E4C3D1E3D5F4E6C5D3B2A4B6C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
E3D1C3D5F6E4C5E6F4D3B2A4B6C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
C3D1E3D5F6E4C5E6F4D3B2A4B6C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
F6D5E3D1C3E4C5E6F4D3B2A4B6C4E5F3D4F5D6B5A3B1D2F1H2G4H6G8E7C8A7C6D8B7A5B3A1C2E1G2H4G6H8F7G5H7F8D7B8A6B4A2C1E2G1H3F2H1G3H5G7E8C7A8
...
Oh, and for those who are interested, here are the stats (memory requirement and number of attempts, i.e. depth of the B-tree):
You simply need 15 attempts (in the worst case) in order to find a key among 10,000,000 keys.
Nothing impressive, 15 tries, yet with no unpleasant surprises like unbalancing.
Also, for 1,240 MB of data (the 130 x 10,000,000) the tree is 2,133,638 KB = 2,083 MB in size.
There are three reasons for Tcheburaschkasort's poor performance. The first is the slow dumping of the sorted data:
the 66 seconds divide into:
Sorting: Time for putting phrases into trees: 45 second(s)
Dumping: Time for shaking phrases from trees: 21 second(s)
The second is the way I insert new keys: for a speed boost (when the incoming data does not consist of unique keys, as here) a fast search is performed first, and if it fails a new search-insert is enforced, grmbl.
In simple words, insertion is slow, especially for order 3, double-grmbl.
The third is the parsing of keys; in Cubesort I wrote it synthetically:
fread(&z_array[cnt].key, sizeof(keyType), 1, ifp);
While in Tcheburaschkasort the parsing is byte-by-byte, as in real-world rippers:
if ( workbyte < '0' ) // Most characters are under the alphabet - only one 'if' // Cheburashka
{
ElStupido:
    // This fragment is MIRRORed: #1 copy [
    if (workbyte == 10) { NumberOfLines++; }
    ...
// Cheburashka [
else if ( workbyte <= '9' )
{
    //if ( wrdlen < 31 )
    if ( wrdlen < 128 ) // Cheburashka
    //if ( wrdlen < LongestLineInclusive )
        { wrd[ wrdlen ] = workbyte; }
    wrdlen++;
}
else if ( workbyte >= 'A' && workbyte <= 'Z' )
{
    //if ( wrdlen < 31 )
    if ( wrdlen < 128 ) // Cheburashka
    //if ( wrdlen < LongestLineInclusive )
        { wrd[ wrdlen ] = workbyte; }
    wrdlen++;
}
// Cheburashka ]
else if ( workbyte >= 'a' && workbyte <= 'z' )
{
    //if ( wrdlen < 31 )
    if ( wrdlen < 128 ) // Cheburashka
    //if ( wrdlen < LongestLineInclusive )
        { wrd[ wrdlen ] = workbyte; }
    wrdlen++;
}
else
{
    // This fragment is MIRRORed: #2 copy [
    goto ElStupido;
    // This fragment is MIRRORed: #2 copy ]
}
Scared? Well, the above fragment handles 'words' of alphanumerical type, 1..128 bytes in length.
Having an old Samsung 470 SSD in my laptop (Core 2 T7500 @ 2200 MHz), I ran the scariest sort of all, with all-external memory operations, i.e. neither physical nor virtual memory used:
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>timer64.exe TcheburaschkaSort TcheburaschkaSort.lst TcheburaschkaSort.txt 2900123 z
Leprechaun_singleton (FastInFuture Greedy ngramRipper), rev. 16FIXFIX, written by Svalqyatchx.
Purpose: Rips all distinct 1grams (1word phrases) with length 1..128 chars from incoming texts.
Feature1: All words within xlets/ngrams are in range 1..31 chars inclusive.
Feature2: In this revision 128MB 1way hash is used which results in 16,777,216 external BTrees of order 3.
Feature3: In this revision, 1 pass is to be made.
Feature4: If the external memory has latency 99+microseconds then !(look no further), IOPS(seektime) rules.
Pass #1 of 1:
Size of input file with files for Leprechauning: 34
Allocating HASH memory 134,217,793 bytes ... OK
Allocating/ZEROing 2,969,725,966 bytes swap file ... OK
Size of Input TEXTual file: 1,300,000,000
\; 00,008,643P/s; Phrase count: 10,000,000 of them 10,000,000 distinct; Done: 64/64
Bytes per second performance: 1,123,595B/s
Phrases per second performance: 8,643P/s
Time for putting phrases into trees: 1157 second(s)
Flushing UNsorted phrases: 100%; Shaking trees performance: 00,056,179P/s
Time for shaking phrases from trees: 356 second(s)
Leprechaun: Current pass done.
Total memory needed for one pass: 2,133,638KB
Total distinct phrases: 10,000,000
Total time: 1528 second(s)
Total performance: 6,544P/s i.e. phrases per second
Leprechaun: Done.
Kernel Time = 1096.609 = 71%
User Time = 217.449 = 14%
Process Time = 1314.058 = 86% Virtual Memory = 130 MB
Global Time = 1527.601 = 100% Physical Memory = 131 MB
E:\BiShowdown_Cubesort_Tcheburaschkasort_Intel_64bit>
Only 1527/108 = 14 times slower than Cubesort; not bad in my eyes.
As for Tcheburaschka, a few things remain to be done (the in-order traversal dump and a few more raw ideas), however I lost my momentum; now I am interested in textual decompression.
Ah, and the name: Чебурашка (Cheburashka) is a little super-calm soul-let (English lacks double diminutives), in fact from the golden Russian animatronics movie; love it.





How does one reduce a quantified 2-SAT formula to a quantified Horn formula?





Guys,
for 25 or so days I have been wrestling with the simplest approach to LZSS.
Seeing how demanding (critical) some fellow members are, I would like to hear from all programmers interested in this topic what one good article about LZSS should feature.
So please share your view on what has to be covered, what should be emphasized, and other must-haves for the article.
Maybe I will write an article; the thing that stops me is the unappreciativeness. My way of doing things is all about enjoying the speed of small etudes in C, which is why I ask preemptively:
Does CodeProject need the article 'Fastest textual decompression in C'?
Here you can see my etude heavily benchmarked against the best in the world:
http://www.sanmayce.com/Nakamichi/





Sanmayce wrote: Does CODEPROJECT need the article 'Fastest textual decompression in C'? If you think your article covers a subject better than existing articles, or is the first one to describe the problem, then go ahead. But before you start, please spend time reading A Guide To Writing Articles For Code Project and the Posting Guidelines. You should also look at some of the articles by people such as Pete O'Hanlon, Sacha Barber etc., to see the sort of submission that is likely to get good reviews.





Thanks, I checked all the links you gave me, and also searched for LZ, LZSS and compression in article names.
Strange: not a single article on LZ/LZSS, not to mention a fast one, let alone the fastest.
You see, Mr. MacCutchan, I want to share a standalone file-to-file compressor written in C featuring the fastest decompression of textual data. I don't see it as a utility but rather as an illustration/sample of how to decompress some needed text-like data at 18%-40% of memcpy() speed.
I want the focus to be precisely on DECOMPRESSION; I don't want to be dragged into mumbo-jumbo such as "How do I compress? The LZSS is not explained. This is not an article. Poor description." and similar complaints.
Seeing some fellow members' articles on LZW and Huffman, I see my approach in a different light: I like benchmarks (real-world ones), disassembly listings, and comparisons with the best decompressors of today.
That's why I ask for some preemptive feedback/directions.
Personally, it is annoying to want to share some code etude and, instead of finding an interesting place/topic for sharing ideas and benchmarking with fellow coders, to receive complaints like "THIS ARTICLE IS NOT GOOD". I am not a journalist who writes articles; after all, the CP logo says 'For those who code', not 'for those who write articles'.
My point: new ideas/improvements should be easily shareable, I think.





If you are going to write, then write, rather than discussing the what and how in these forums. If your article does not meet the requirements of the site then you have to accept the feedback, and either improve or change it as necessary.





Sanmayce wrote: I don't want to be dragged into mumbojumbisms as "How do I compress?
it's unlikely that many people are going to be familiar with the LZSS algorithm. and people are going to ask you for the compression side of it. so, i'd suggest including a simpler compressor with a description of the algorithm. you have to at least get the readers up to the point where they can follow your discussion of the decompressor. otherwise, you're starting the story in the middle.





Mr. Losinger,
have you seen Wikipedia's article on LZSS: http://en.wikipedia.org/wiki/LZSS
Now check my amateurish attempt to shed light on LZSS: http://www.sanmayce.com/Nakamichi/
LZSS is simple enough, yet writing an article is not easy; my wish is to contribute to the decompression part with one of the fastest (top-3) etudes, benchmarks and C/assembly snippets, not to define/explain the algorithm.
For a start, I need a modern machine to test on; sadly I own only a Core 2, and that's a nasty brake for me since I use DWORD fetching (which sucks on Core 2), XMM and even ZMM, which would be so interesting to see what it can bring to the table.
My naive expectation was that some fellow coders would help me at least with the benchmarking; sadly, I see no coders interested in this topic.
One of my wishes is to show an etude, Nakamichi 'Sanagi' to be exact, targeting 512-bit registers; luckily the Intel C optimizer v14 (which I used) supports them, but still no computer within my reach can run the executable.
"Tianhe2 the world's fastest supercomputer according to the TOP500 list for June and November 2013 utilizes Xeon Phi accelerators based on Knights Corner."
/Wikipedia/
In my view such a benchmark is both important and interesting; the incoming Knights Corner/Landing processors will show the power of ZMMWORD moves.
"... Far more interesting is that Intel's Knights Landing not only will be an expansion card but also will serve as a standalone platform or processor ..."
/http://www.adminmagazine.com/Articles/ExploringtheXeonPhi/
Simply, the near future will bring monstrous bandwidths; not utilizing them is just... lame. I see how some 30-year-old algorithms are (or have to be) 'pimped' and pumped.
>it's unlikely that many people are going to be familiar with the LZSS algorithm.
Agreed, but it is like that with everything, isn't it?
>i'd suggest including a simpler compressor with a description of the algorithm.
Sure.
>you have to at least get the readers up to the point where they can follow your discussion of the decompressor. otherwise, you're starting the story in the middle.
Of course, I am not some villain wanting to obscure information, but as I said my focus is entirely on textual decompression speed boosts; they alone took a lot of my time to wrestle with.
And for those who want to see what one of the best in the world has done:
http://fastcompression.blogspot.com/p/lz4.html
Comparing an eventual etude with two of the fastest (LzTurbo being the other) is a must; I see no better way to evaluate its worth.





The formula that I'm using comes from the book "Mathematical Tools in Computer Graphics with C# Implementations"; however, it only had the formula and no code. You can find the same kind of formulation available online here, and skip back and forward through the slides to see a bit more.
My implementation looks like this:
Public Function PointOnBesselOverhauserCurve(ByVal p0 As System.Windows.Point, ByVal p1 As System.Windows.Point, ByVal p2 As System.Windows.Point, ByVal p3 As System.Windows.Point, ByVal t() As Double, ByVal u As Double) As System.Windows.Point
    Dim result As New System.Windows.Point()
    Dim ViXPlusHalf, VixMinusHalf, ViYPlusHalf, ViYMinusHalf, ViX, ViY As Double
    ViXPlusHalf = (p2.X - p1.X) / (t(2) - t(1))
    VixMinusHalf = (p1.X - p0.X) / (t(1) - t(0))
    ViYPlusHalf = (p2.Y - p1.Y) / (t(2) - t(1))
    ViYMinusHalf = (p1.Y - p0.Y) / (t(1) - t(0))
    ViX = ((t(2) - t(1)) * VixMinusHalf + (t(1) - t(0)) * ViXPlusHalf) / (t(2) - t(0))
    ViY = ((t(2) - t(1)) * ViYMinusHalf + (t(1) - t(0)) * ViYPlusHalf) / (t(2) - t(0))
    Dim PointList As New PointCollection
    PointList.Add(p1)
    PointList.Add(New Point(p1.X + (1 / 3) * (t(2) - t(1)) * ViX, p1.Y + (1 / 3) * (t(2) - t(1)) * ViY))
    ViXPlusHalf = (p3.X - p2.X) / (t(3) - t(2))
    VixMinusHalf = (p2.X - p1.X) / (t(2) - t(1))
    ViYPlusHalf = (p3.Y - p2.Y) / (t(3) - t(2))
    ViYMinusHalf = (p2.Y - p1.Y) / (t(2) - t(1))
    ViX = ((t(3) - t(2)) * VixMinusHalf + (t(2) - t(1)) * ViXPlusHalf) / (t(3) - t(1))
    ViY = ((t(3) - t(2)) * ViYMinusHalf + (t(2) - t(1)) * ViYPlusHalf) / (t(3) - t(1))
    PointList.Add(New Point(p2.X - (1 / 3) * (t(3) - t(2)) * ViX, p2.Y - (1 / 3) * (t(3) - t(2)) * ViY))
    PointList.Add(p2)
    Return PointBezierFunction(PointList, u)
End Function
I assumed that t(), an array of length equal to the number of points, would be the same in both X and Y directions. Now, how should I adjust my t() values, and should they depend on the X and Y values, meaning one value for X and one value for Y? And is the implementation correct?






Does anyone have any comment on my factoring algo below?
A new method to factor large semi-primes.
The plan is to factor large semi-prime numbers. Start by picking 2 prime numbers of
appropriate digit size. Choose one as a 200-digit prime number, and the other as a 300-digit
prime number, so that their product is a 500-digit semi-prime. Then convert
the two decimal prime numbers to binary values, and multiply them to form the binary
semi-prime product. Calculate the bit size of the binary product. Save a copy of this
semi-prime into another value whose bit size is the size of the semi-prime rounded up to a
DWORD (mod 32 bit) bound, but left-shifted until it fills the largest DWORD. Then save
another copy of this left-shifted semi-prime, but this time right-shifted by 1 bit. These
copies of the semi-prime will be used for comparison against the products of the highest
test DWORDs calculated in the algorithm below (see below for the explanation of why two
copies are used).
The semi-prime product would then be factored using a new algorithm.
It has been suggested (by S. C. Coutinho in "The Mathematics of Ciphers", Chapter 11) that
to select the prime numbers for a key of digit size r, one should be selected between a
digit size of ((4 / 10) * r) and ((45 / 100) * r). The starting point for the factoring
will be chosen as the middle value between these limits, and will step alternately higher
and lower until the factors are discovered. The first starting factor will then be p (the
smaller number), and the other starting factor will be q (the larger number), which will
be set to ((10^r) / p).
With an r of 500, the lowest limit (in digits) will be:
p = ((4 / 10) * r)
p = ((4 / 10) * 500)
p = (4 * 50)
p = 200 digits
The upper limit will be:
p = ((45 / 100) * r)
p = ((45 / 100) * 500)
p = (45 * 5)
p = 225 digits
The midpoint will be:
p = ((200 + 225) / 2)
p = 212 digits (rounding down)
Thus the starting p is a 212 digit number. The other starting factor will be:
q = ((10^r) / p)
q = ((10^500) / (10^212))
q = 288 digits
The p value is then the calculated digit size (converted to bits and then DWORDS), but
the q value must be calculated by subtracting the selected p value bit size from the
selected semi prime bit size (and then rounded up to DWORDS). These bit sizes define bit
fields that will be filled in from both ends with actual values during the factoring
process.
The bit size, digit size, and DWORD sizes of these values are as follows (all of these
powers of 2 are the largest exponent that would not exceed the associated power of 10,
and the DWORD count is the minimum count of DWORDS to contain that binary bit count):
2^704 ≈ 10^212 = 23 DWORDS starting low
2^956 ≈ 10^288 = 30 DWORDS starting high
2^996 ≈ 10^300 = 32 DWORDS max high
2^1660 ≈ 10^500 = 52 DWORDS max product
Three sets of precomputed tables must be constructed to aid (actually enable) the
factoring of this semi prime.
The first table is saved as 32 bit odd factors (DWORDS), which, when multiplied and
masked to a DWORD, give the same masked product value. This table is saved as multiple
files with names matching the masked product value.
A second, similar, table is constructed for factors which all have the most significant
bit set and saved as files with names matching the highest 32 bits of the product value
(masked to 32 bits).
The third table is just a list of the prime numbers to be considered (all prime numbers
less than 300 decimal digits), but saved in a special way as a multilevel directory
structure as described below.
The entries for this third table consist of DWORD pairs where one of the DWORDS is the
lowest DWORD of the prime number and the other DWORD is the highest 32 bits of the prime
number. All prime numbers that share both of these end values and have the same bit size
will be saved in a directory whose name matches the two DWORDS (high DWORD value and low
DWORD value). The directory sets will be saved under a directory that relates to the bit
size.
One bit of explanation about the information that follows. When a string is enclosed in
quotation marks (""), it is an encoded alphanumeric string. When the characters are not
enclosed in quotes, then the characters are treated as independent decoded DWORD values.
At this point, a diagram may be appropriate. Consider a large prime number with DWORDS
indicated as Hx and Lx, and bit size as a 3 BYTE value indicated by Sx:
H1 H2 H3 H4 ... L4 L3 L2 L1 length Sx
The lowest level directory structure would be (where "Sx" are the different prime number
bit sizes from 2 to 1664):
"Sx"
"Sy"
...
"Sz"
Beneath each of these size directories would be a layer of directories named "H1x L1x"
(whose prime number bit size is "Sx" and where "H1x" and "L1x" are the first level
different highest and lowest DWORD values of the semi prime):
"H1x L1x"
"H1y L1y"
...
"H1z L1z"
Beneath each of these directories would be another layer of directories named "H2x L2x":
"H2x L2x"
"H2y L2y"
...
"H2z L2z"
Essentially, you are creating piecemeal path names on some drive or drives (at some
directory level) as a relative path name:
".\Sx\H1x L1x\H2x L2x\H3x L3x\H4x L4x\ ..."
".\Sx\H1y L1y\H2y L2y\H3y L3y\H4y L4y\ ...
...
".\Sx\H1z L1z\H2z L2z\H3z L3z\H4z L4z\ ..."
".\Sy\H1x L1x\H2x L2x\H3x L3x\H4x L4x\ ..."
".\Sy\H1y L1y\H2y L2y\H3y L3y\H4y L4y\ ...
...
".\Sy\H1z L1z\H2z L2z\H3z L3z\H4z L4z\ ..."
".\Sz\H1x L1x\H2x L2x\H3x L3x\H4x L4x\ ..."
".\Sz\H1y L1y\H2y L2y\H3y L3y\H4y L4y\ ...
...
".\Sz\H1z L1z\H2z L2z\H3z L3z\H4z L4z\ ..."
One other thing to note is that this method greatly reduces the duplication of some of
the DWORD values that would otherwise occur if the actual prime numbers were given as
just a list of full N digit values, i.e., consider 32 DWORD prime numbers of the
following form (where each prime number contained the same low 4 DWORDS and same high 4
DWORDS):
H1 H2 H3 H4 ... ... ... L4 L3 L2 L1
The "... ... ..." represent 24 DWORDS, some of which must be different (remember, this
is from a list of prime numbers that are all unique), yet H1 H2 H3 H4 and L4 L3 L2 L1
DWORDS would all be the same and thus not duplicated in this level structure. In the
maximum duplication form, the H1 through H15 and L1 through L15 DWORDS could all be the
same, and only the H16 and L16 DWORDS would be different.
Note that the data is not really binary data, but directory names which must be formed
with alphanumeric or special characters. What will be done to reduce the size of the data
is that the numeric DWORD values will be converted to encoded character values. Several
things were considered in this quest. If decimal conversion were done (0-9), then the
DWORDS would need 10 characters for each DWORD or 20 characters total. If 4 bit hex
conversions were used (0-9 and A-F), then the DWORDS would need 8 characters for each
DWORD or 16 characters total. If 6 bit Base64-style encoding were used, then the DWORDS
would need 6 characters for each DWORD or 12 characters total, but the resulting mix of
upper and lower case letters is a problem for Windows directory names, which are compared
case insensitively, so names differing only in letter case would collide. Using a character
set of upper case and numeric characters
and allowed special characters (26 + 10 + 16 or a 52 character conversion set) would allow
6 character conversions for each DWORD or 12 characters total, but would require divisions
to be used to extract the actual character indexes. Using a 32 character conversion set
(A-Z and 0-5) would result in 7 character conversions for each DWORD (splitting each
DWORD into seven 5 bit fields and then encoding each 5 bits) giving 14 characters for
both DWORDS, but since it would be better to treat both DWORDS as a single QWORD and
split the 64 bits into 13 parts saving even one more character, that method will be
selected. A multilevel table lookup will be used to convert the encoded 13 characters
back into a binary value (a column of 32 QWORD values indexed by the character value, and
13 columns of these QWORD values indexed by which character of the encoded value was
being decoded, all 13 resulting values would be accumulated by adding to yield the QWORD
binary value, then split to two DWORDS).
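A sketch of the 13-character QWORD encoding and the table-driven decode described above (the alphabet shown is an assumption; any 32 distinct filename-safe characters would do, and here 'A' maps to 0):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"   # 32 filename-safe characters

def encode_qword(q):
    """Split a 64-bit value into 13 five-bit fields (13 * 5 = 65 >= 64)
    and map each field to one character, most significant field first."""
    return "".join(ALPHABET[(q >> (5 * i)) & 31] for i in range(12, -1, -1))

# Multilevel decode table: one column per character position; each entry
# already carries its positional weight, so decoding is 13 lookups and an
# accumulating add -- no multiplies or divides.
TABLE = [{ch: v << (5 * (12 - pos)) for v, ch in enumerate(ALPHABET)}
         for pos in range(13)]

def decode_qword(name):
    return sum(TABLE[pos][ch] for pos, ch in enumerate(name)) & 0xFFFFFFFFFFFFFFFF

name = encode_qword(0x123456789ABCDEF0)
print(name, len(name), decode_qword(name) == 0x123456789ABCDEF0)
```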
The size values in the first (lowest level) directory (three BYTE size values) also need
to be converted to character values. For small prime number size values (2, 3, 4, ...)
this would create character strings such as "AAAAB", and there would also be no need for
the High DWORD, thus such small prime numbers would be saved as only a truncated size
field (delete any leading 'A' characters) and would only contain a single DWORD value in
the next lower directory level (second level) such as:
First Second
Level Level
"B" for 2 bits
"AAAAAAB" for prime number 2
"AAAAAAC" for prime number 3
"C" for 3 bits
"AAAAAAE" for prime number 5
"AAAAAAG" for prime number 7
"D" for 4 bits
"AAAAAAK" for prime number 11
...
...
No attempt will be made to delete the leading 'A' characters in these short prime numbers.
When the bit size exceeds 32 bits, the second DWORD would be created to precede the first
DWORD in the directory name. This second (most significant) DWORD would consist of the
most significant 32 bits of the prime number and would always have the most significant
bit set (always be at least an encoded "B..."), and would result in an encoded 13
character directory name.
The max size of a prime number of 300 digits is 32 DWORDS or 16 QWORDS which would convert
to a directory name of 232 characters (((13 + 1) * 16) + (5 + 1) + (1 + 1)) (max directory
name size plus a separator times 16 sub directory levels plus size directory name plus a
separator plus the drive name plus a separator). This is well within the MAX_PATH for a
directory entry which is 260 BYTES so unicode path names are not required.
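Putting the pieces together, building a prime's directory path might look like the following sketch ('A' = 0 and the helper names are my assumptions; the post's small examples suggest a slightly different character mapping):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"   # 32 filename-safe characters

def enc(value, nchars):
    """value as nchars base-32 characters, most significant first ('A' = 0)."""
    return "".join(ALPHABET[(value >> (5 * i)) & 31]
                   for i in range(nchars - 1, -1, -1))

def prime_path(p, root="."):
    """Relative directory path for prime p: a bit-size directory, then one
    13-character directory per (high, low) DWORD pair taken from both ends
    of the number; a lone middle or low DWORD gets a 7-character name."""
    size = p.bit_length()
    n = -(-size // 32)                                    # DWORD count
    w = [(p >> (32 * i)) & 0xFFFFFFFF for i in range(n)]  # w[0] = L1 (lowest)
    parts = [root, enc(size, 5).lstrip("A") or "A"]       # truncated size name
    lo, hi = 0, n - 1
    while lo <= hi:
        if lo == hi:
            parts.append(enc(w[lo], 7))                   # single DWORD level
        else:
            parts.append(enc((w[hi] << 32) | w[lo], 13))  # Hn..Ln pair as a QWORD
        lo, hi = lo + 1, hi - 1
    return "\\".join(parts)

print(prime_path(11))                # a 4-bit prime
print(len(prime_path(2**996 + 3)))   # a ~300-digit value: well under MAX_PATH (260)
```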
The factoring algorithm starts out by factoring the semi prime's lowest DWORD and highest
32 bits into lists of DWORD pairs (masked to a low DWORD for the low DWORD pair and to a
high DWORD for a high 32 bit pair) whose DWORD products match the appropriate end values
of the semi prime. Prime numbers whose ends match these pairs are the only prime numbers
that need to be considered in the factoring process, all other prime numbers cannot
possibly be the correct values. All four of the possibilities must be considered (one at
a time) when considering which value of a pair belongs to the larger prime number and
which value belongs to the smaller prime number. Assume that the end values for the first
(odd number) table pair values were A and B and the second (MSD bit set) table values were
C and D. The possible test primes would then be:
Small and Large
C ... A and D ... B
C ... B and D ... A
D ... A and C ... B
D ... B and C ... A
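A toy illustration of why this end-matching filter pays off, shrunk from 32-bit DWORDs to 8-bit "DWORDs": only odd factor pairs whose product matches the semi prime's low byte survive the first table lookup.

```python
def low_end_pairs(n_low, width=8):
    """All odd factor pairs (a, b), a <= b, whose product matches the semi
    prime's lowest `width` bits -- the analogue of the first precomputed
    table, scaled down to bytes so it can be enumerated by brute force."""
    mask = (1 << width) - 1
    return [(a, b)
            for a in range(1, 1 << width, 2)
            for b in range(a, 1 << width, 2)
            if (a * b) & mask == n_low]

p, q = 241, 251                        # two small primes standing in for the factors
pairs = low_end_pairs((p * q) & 0xFF)
assert (p, q) in pairs                 # the true factor ends always survive
print(len(pairs), "surviving pairs out of", (128 * 129) // 2, "odd pairs")
```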
Start the building process by creating four number buffers (bit fields) whose bit size is
the bit size of the semi prime extended to a mod 32 bit size. Now process each of the
above mentioned 4 possible pairs, let us say, starting with the C ... A and D ... B pair.
Multiply them as follows:
C ... A
* D ... B
-------
RS = B * A (R ignored)
TU = D * C (U ignored)
One word about my selection of variable names R, S, T, U, W, X, Y, and Z (in the
example above and in the example below). I did not use the letter "V" because in my
text editor (PFE), while using the OEM fixed pitch font, the "U" (Uniform) and "V"
(Victor) letters appear almost the same and are easily confused.
Now, S would be equal to the lowest DWORD of the semi prime because the A and B characters
were entries in the factor table for the lowest DWORD in the semi prime, but R needs to be
calculated and saved. T would be equal to the highest 32 bits of the semi prime for the
same reason and U needs to be calculated. S and T will always match the end DWORDS of the
left shifted semi prime (also, see below). The R must be calculated as ((A * B) >> 32) and
U must be calculated as ((C * D) masked to a DWORD). This will be done once as the factor pair lists are
read and the R values and U values (1 for each pair) saved in two arrays for later use.
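In code, this once-per-pair bookkeeping might look like the following sketch (variable names follow the post's A, B, C, D and R, S, T, U):

```python
M32 = 0xFFFFFFFF

def end_pair_products(A, B, C, D):
    """For a low-end odd factor pair (A, B) and a high-end MSB-set pair
    (C, D): S and T already match the semi prime's end DWORDs (that is
    how the pairs were selected), while R and U are the other halves of
    the two products, computed once and cached for the level tests."""
    S = (A * B) & M32      # low half: matches the semi prime's lowest DWORD
    R = (A * B) >> 32      # high half: cached for the next-level check
    T = (C * D) >> 32      # high half: matches the semi prime's top 32 bits
    U = (C * D) & M32      # low half: cached for the next-level check
    return R, S, T, U

print(end_pair_products(3, 7, 0x80000001, 0x80000003))  # (0, 21, 1073741826, 3)
```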
Look up the directory ".\Ss\C A" and the directory ".\Sl\D B" (where "Ss" is the small
test prime number size and "Sl" is the large prime number size), and the lowest of their next
level subdirectories (treat the subdirectories as a list of possible useful matches). If
either such directory or subdirectory entry does not exist, then skip to the next low or
high list entry and retry.
Access the first of the ".\Ss\C A" subdirectory entries and assume that it contains the 2
DWORDS as E F and that the first of the ".\Sl\D B" subdirectory entries contains the 2
DWORDS as G H. Combine the sub directory values with the parent directory values to
continue building the prime numbers as:
C E ... F A
* D G ... H B
-------
Multiply F A and H B and mask the product to a QWORD and check if the result matches the
least significant QWORD of the semi prime:
F A
* H B
-------
RS = B * A (already computed above)
TU = B * F (T ignored)
WX = H * A (W ignored)
YZ = H * F (YZ ignored)
-------
IJKL (IJ ignored)
K ((R + U + X) masked to a DWORD) should match the second lowest DWORD of the semi prime.
If not, select the next low pair and try again.
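With arbitrary-precision integers the same two-DWORD check can be verified end to end; the K accumulation below follows the partial products in the layout above (helper name is mine):

```python
M32 = 0xFFFFFFFF

def low_qword_check(F, A, H, B, semiprime):
    """True if the two lowest DWORDs of (F*2^32 + A) * (H*2^32 + B) match
    the two lowest DWORDs of the semi prime.  K = (R + U + X) mod 2^32:
    no higher partial product can reach the second DWORD position."""
    R = (B * A) >> 32       # carry out of the lowest DWORD
    S = (B * A) & M32       # must equal the semi prime's lowest DWORD
    U = (B * F) & M32
    X = (H * A) & M32
    K = (R + U + X) & M32
    return ((K << 32) | S) == (semiprime & 0xFFFFFFFFFFFFFFFF)

p, q = 0x123456789ABCDEF, 0xFEDCBA987654321   # stand-in candidate primes
A, F = p & M32, (p >> 32) & M32               # F A = two lowest DWORDs of p
B, H = q & M32, (q >> 32) & M32               # H B = two lowest DWORDs of q
print(low_qword_check(F, A, H, B, p * q))     # True
```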
Multiply C E and D G and check if the most significant 64 bits of the product matches the
most significant 64 bits of the semi prime:
C E
* D G
-------
RS = G * E
TU = G * C
WX = D * E
YZ = D * C (already computed above)
-------
MNOP (OP ignored)
MNOP may possibly need to be adjusted because the product may just be 127 bits and not 128
bits. If the two most significant bits of both C and D are set, then the
product is guaranteed to be 128 bits; otherwise it may be only 127 bits and require a shift to be
correct for a compare with the high semi prime value (the semi prime high end would be
already left shifted to set the most significant bit of a DWORD). Because of this anomaly,
all products starting with these pairs (for all 16 levels) must be left shifted by one
bit. This is time consuming. To avoid this, the left shifted semi prime has been also
saved as a right shifted by one bit value. When the high factor pair is first selected, a
check will be made and the correct pointer to the semi prime check value will be
established for that pair, and product shifting will not be required.
M and N should match the highest 64 bits of the check semi prime. If not, select a new
high pair and reset the low pair to the beginning (working with a new high pair) and start
again until both pairs match. Note that O and P cannot be checked at this time because it
is unknown how much the carry DWORDS will be until the next level testing can select the
third DWORD pair.
If the end of either factor list is reached before the double match, change which of the
four initial pairs is being used, and if all four pairs have been checked, then change the
test number bit sizes (alternate lower and higher around the smaller value midpoint that
was selected and then recalculate the larger number bit count), and reset the lists and
start again until the actual factors are found.
If both end points match then check if you are at the lowest level. Do this by maintaining
the bit sizes by decrementing by 64 as you go to each next level; when the smaller test
number lower size is less than 64, then hold the smaller test number and just process the
larger test number for subsequent levels.
If both sizes are less than 64 then you have the final test primes. Just multiply out the
DWORD values (to get all middle carries) to verify whether this is a valid solution. To
get these values (they are split into 4 parts during this testing), the low DWORD values
are correct, it is only the high values that need to be adjusted and concatenated to the
top of the low values and then multiplied. To do this for the smaller test value, subtract
the bit size of the smaller test value lower test DWORDS (DWORD count * 32) from the total
bit size of the smaller test value. Divide the result by 32 (a shift works really well)
and what remains is the bit count to right shift the smaller test value high DWORD values
deleting low bits that are shifted out of the lowest DWORD value in the high piece (use
right shift double on pairs of adjacent DWORDS), then concatenate the result to the low
DWORDS. Do the same thing to adjust the larger test value high DWORDS. Then multiply and
check the product with the semi prime itself.
If both sizes are > 63 then drop down both of the directory levels and get the next two
pair of DWORDS for the test prime numbers and extend the multiplies by a DWORD each and
check again until you have the final test primes.
Obviously, quit when you find the two final prime numbers that, when multiplied, result in
the semi prime product.
Dave.





That's bigger than the common homework questions here. Is it for your thesis?





Bernhard,
No, this is not for any homework assignment. It is just an attempt to find another algorithm for factoring numbers, primarily to factor a semi prime. I know that this will be Big Data, but before I start implementing, is there any problem within the algorithm itself?
Dave.






Member 4194593 wrote: The third table is just a list of the prime numbers to be considered (all prime numbers less than 300 decimal digits), but saved in a special way as a multilevel directory structure as described below.
A quick consultation with the revered Dr Riemann indicates that there are over 10^297 primes less than 10^300. Since there are somewhere on the order of 10^80 atoms in the known universe, your compression scheme must achieve a minimum compression ratio in excess of 10^217 : 1. Can't see that happening in a hurry... Sometimes "big data" is just too big.
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





Peter,
I was waiting for someone to figure that out. Check with Ravi; I proposed this to him by email in March. The following is the second half of the original email to him. Too bad this would not work; I think that the actual algorithm would work except for the fact that the file requirements are insane:
<pre>

The following discussion is why this algorithm (the part above this point) should be
published in Algorithms on April 1st without this lower section; let the wrecking crew of
experts in CP determine what is so wrong with the solution (the algorithm is correct but
just will not work).

Now, the number of primes with 300 digits is:
((10^300) / ln(10^300)) = (3.33 * (10^297))
The max size of a prime number in that range is 32 DWORDS or 16 QWORDS which would convert
to 232 characters so the total size of all path names describing all of these prime
numbers will be:
(232 * 3.33 * (10^297)) = (7.7 * (10^299)) BYTES
The number of terabytes to contain this data would be:
(7.7 * (10^299) / (10^12)) = (7.7 * (10^287)) TB
The number of 4 TB drives to contain this total size would thus be:
((1/4) * 7.7 * (10^287)) = (1.9 * (10^287)) drives
I don't think that Seagate has enough material to create that many drives. Penrose, in his
book "The Emperor's New Mind", estimated that the number of particles in the visible
universe was 10^80. And even if that problem were solved, you would still have the
problem of how much time it would take to write out these prime number directories to
the drives. So much for having the prime numbers available to use as trial multipliers to
see if the product matches the semi prime.
Lest you think it would make a difference (in the count and total size of the prime
numbers) if you only saved the prime numbers that lie between 10^200 and 10^300:
((10^300) / ln(10^300) - (10^200) / ln(10^200)) =
(1.447 * (10^297) - 2.171 * (10^197)) =
(1.447 * (10^297))
or
(1.447 * (10^297)) = (1.447 * (10^297))
Saving the number of prime numbers in a range of digits (from 200 to 300) would not even
make a visible dent in the count because the actual calculation is subtracting a 197 digit
number from a 297 digit number, leaving a 297 digit number with a diminished lowest value
(the last 196 digits if a borrow was needed else 197 digits), but the most significant 100
(or at least 99) digits would be unchanged. Only if you displayed the result in its actual
297 digit decimal form (instead of in truncated scientific notation) would you be able to
see the difference.
Another way to visualize this is to see how big a block of (1.9 * (10^287)) 4 TB drives
would be. Now my drives are (in inches) about 1.75 * 4.75 * 7. The diagonal from the
center of the drive to any corner is:
(((1.75 / 2)^2 + (4.75 / 2)^2 + (7 / 2)^2)^(1 / 2)) = 4.32 inches
A slightly smaller version of the block of drives would be to block the drives as a 4 wide
and 2 deep block, almost making them a cube. This would group the drives in blocks of 8
drives giving a total number of blocks as:
((1.9 / 8 * (10^287))) = (2.375 * (10^286)) blocks
The diagonal from the center of the block to any corner is:
(((1.75 * 4 / 2)^2 + (4.75 * 2 / 2)^2 + (7 / 2)^2)^(1 / 2)) = 6.86 inches
The dimensions (in the number of blocks) of a block with the same number of blocks in each
row, column, and layer would be:
((2.375 * (10^286))^(1/3)) = (2.874 * (10^95))
The diagonal from the center of this block to any corner would be
(6.86 * 2.874 * (10^95)) = (1.97 * (10^96)) inches
((6.86 * 2.874 / 12) * (10^95)) = (1.64 * (10^95)) feet
((6.86 * 2.874 / (12 * 5280)) * (10^95)) = (3.11 * (10^91)) miles
The radius of a sphere that would enclose this block of drives would then be:
(3.11 * (10^91)) miles.
The radius of the sphere that would enclose our solar system would be equal to the
orbit size of the planetoid Pluto (I guess I should be politically correct and not call
it a planet anymore) which is:
4.583 billion miles
or
(4.583 * (10^9)) miles
How much bigger would the radius of the sphere enclosing the block of drives be relative
to the radius of the sphere enclosing the solar system? To compare two spheres for "which
one is bigger, and by how much", consider comparing a baseball with a basketball. You
compare their height or width differences and not their circumference or volumes
differences (whether you are comparing diameters or radii), so try:
((radius of drives enclosing sphere) / (radius of solar system enclosing sphere))
((3.11 * (10^91)) / (4.583 * (10^9))) = (6.79 * (10^81))
It would be (6.79 * (10^81)) times as big as the radius of our solar system, and,
furthermore, you would be filling that space with 4 TB drives instead of the almost
complete vacuum that exists there today.
Where would you get that much material to build the drives? Maybe we could be seeing some
quantum sharing of particles between (10^287) different drives, all at the same time,
without dropping any recorded bits??? Let's not even consider the power requirements, or
the write time to populate the directory entries before the factoring algorithm starts.
</pre>
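For anyone who wants to reproduce the arithmetic, a few lines of Python cover it (the leading constants shift a little depending on how the ln term is taken, but the drive count lands near 10^287 either way):

```python
import math

DIGITS = 300
PATH_BYTES = 232        # max path-name length per prime, from the text above
DRIVE_BYTES = 4e12      # one 4 TB drive

primes = 10.0 ** DIGITS / math.log(10.0 ** DIGITS)   # pi(x) ~ x / ln(x)
total_bytes = primes * PATH_BYTES
drives = total_bytes / DRIVE_BYTES
print(f"{primes:.3e} primes, {total_bytes:.2e} bytes, {drives:.2e} drives")
```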
Dave.





Hi,
I want to make a simple algorithm for F1 Challenge. Basically, I want to make an external program that takes my race results, and at the end of the season I get "contract offers" depending on how well I did. I already know how I want it to work; I just want to know whether there's already an algorithm similar to this.





This hardly needs an algorithm. It is just a matter of adding up all the points and sorting the drivers and teams into descending order by points.
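For example, with a 10-8-6-5-4-3-2-1 points table (the drivers and results below are made up):

```python
from collections import defaultdict

# Points per finishing position (2003-2009 F1-style scoring, illustrative).
POINTS = {1: 10, 2: 8, 3: 6, 4: 5, 5: 4, 6: 3, 7: 2, 8: 1}

def standings(results):
    """Sum each driver's points over the season; best total first."""
    totals = defaultdict(int)
    for race in results:
        for driver, position in race:
            totals[driver] += POINTS.get(position, 0)
    return sorted(totals.items(), key=lambda kv: -kv[1])

season = [
    [("Alonso", 1), ("Hamilton", 2), ("Massa", 3)],
    [("Hamilton", 1), ("Massa", 2), ("Alonso", 3)],
]
print(standings(season))  # [('Hamilton', 18), ('Alonso', 16), ('Massa', 14)]
```

The "contract offers" could then be keyed off each driver's final rank or points total.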





I use Carbonite backup service on my main machine. Carbonite claims that they are able to transmit only changes in a file to their servers so that the whole file needn't be transmitted again when there are changes to that file.
How are they doing this? Wouldn't they need to compare the changed file against a complete copy of the original? And to do this across the net would require transmitting the entire file anyway, right?
The difficult we do right away...
...the impossible takes slightly longer.





Well, I guess only Carbonite could give a definitive answer. But it is possible they are keeping a copy of the file's directory information so they can just compare disk segments, backing up by copying the segments directly from disk. Then, on the next incremental backup, they only need to copy any segments that have been added or changed since the previous backup.
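One well-known way to do this without keeping a full copy of the old file is per-block checksums (this is the idea behind rsync; whether Carbonite does exactly this is speculation): store a small hash per fixed-size block, and on the next backup transmit only the blocks whose hash changed.

```python
import hashlib

BLOCK = 4096   # fixed block size; real tools often add rolling checksums
               # so insertions do not invalidate every later block

def block_hashes(data):
    """One SHA-256 digest per fixed-size block."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old_hashes, new_data):
    """(index, bytes) for every block whose digest differs from the stored
    list -- only these need to cross the wire, plus any new trailing blocks."""
    return [(i, new_data[i * BLOCK:(i + 1) * BLOCK])
            for i, h in enumerate(block_hashes(new_data))
            if i >= len(old_hashes) or h != old_hashes[i]]

old = b"a" * 10_000
new = old[:5_000] + b"B" + old[5_001:]     # flip one byte inside block 1
delta = changed_blocks(block_hashes(old), new)
print([i for i, _ in delta])               # [1] -- only one block is re-sent
```

Fixed blocks do miss insertions (everything after the insertion point shifts); rsync's rolling checksum exists precisely to resynchronize block boundaries in that case.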





That is likely to miss changes to files that do updates in place with mapped files...





Which is why I added the initial sentence to my response.




