For my old computer I need an assembler which is able to take assembled code from a library and link it together in the smallest possible combination.
One 'speciality' of the old CDP1802 processor will force me to write the assembler and linker myself. There are two types of branching instructions: long branches and short branches. Long branches use full 16 bit addresses, but will cause timing issues with the graphics chip. This is an ancient hardware bug.
This is the reason why I must use short branches with short 8-bit addresses. The upper 8 bits are simply assumed to be the same as in the instruction's address, which effectively segments memory into 256-byte blocks. It's not a very strict segmentation, as code can run across the boundaries without any consequences; you just can't loop back across a boundary with a short branch, and long branches can't be used.
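To make the page rule concrete, here is a toy Python model of how a short-branch target resolves. This is a simplification I'm adding for illustration: the page is taken from the branch address itself, whereas on the real chip it is the page of the instruction's operand byte, a detail glossed over here.

```python
def short_branch_target(pc, low_byte):
    """Short branches replace only the low byte of the program counter."""
    return (pc & 0xFF00) | low_byte

def in_same_page(addr_a, addr_b):
    """A short branch can only reach targets in its own 256-byte page."""
    return (addr_a & 0xFF00) == (addr_b & 0xFF00)
```

So a short branch at 0x1234 with operand byte 0x80 lands at 0x1280, and a target at 0x1300 is unreachable from anywhere in the 0x12xx page.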
The linker will have to puzzle together snippets of code and data with this in mind. At the same time I must be sure that memory usage is as low as possible in the end. My old computer has only 4k RAM, and more than 16k is quite unusual.
The only thing I can think of is to make a memory map of each possible combination and take the one which needs the least amount of memory. There are easily hundreds of small code snippets to be linked and blindly testing every combination will be very slow and inefficient.
First thought: Build a tree with only valid options and then find the branch with the lowest byte count. This is already better than brute force, but I hope there is still a more elegant algorithm for this.
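As an illustration of that idea, here is a minimal Python sketch under a made-up cost model: each snippet ends with one branch to another snippet, costing 2 bytes if the target starts in the same 256-byte page and 3 bytes otherwise. The snippet names, sizes, and jump targets are invented for the example; a real linker would also iterate the address calculation until it stabilizes, and would prune partial orderings against the best total found so far instead of enumerating every permutation.

```python
from itertools import permutations

def layout_cost(order, sizes, jumps):
    """Approximate total bytes when snippets are placed back to back.
    Addresses are estimated with every jump in short form; a real
    linker would iterate until the addresses stop moving."""
    start, addr = {}, 0
    for s in order:
        start[s] = addr
        addr += sizes[s] + 2              # body + short-form jump
    total = sum(sizes.values())
    for s in order:
        jump_at = start[s] + sizes[s]     # where the snippet's jump sits
        same_page = (jump_at >> 8) == (start[jumps[s]] >> 8)
        total += 2 if same_page else 3
    return total

def best_order(sizes, jumps):
    """Exhaustive search; a real implementation would prune branches
    whose partial cost already exceeds best_cost."""
    best, best_cost = None, float("inf")
    for order in permutations(sizes):
        cost = layout_cost(order, sizes, jumps)
        if cost < best_cost:
            best, best_cost = order, cost
    return best, best_cost

# invented example: three snippets, each ending in one jump
sizes = {"a": 10, "b": 20, "c": 300}
jumps = {"a": "b", "b": "a", "c": "a"}
order, cost = best_order(sizes, jumps)
```

In this example, placing the big snippet "c" first pushes "a" and "b" into the same page as all three jump targets, so every branch can stay short.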
Assuming that you subdivide the code into N snippets, each terminated by an unconditional jump (e.g. procedures), you could test each possible sequence out of the N! possibilities.
Note that the maximum savings in bytes that you could achieve are the number of jumps that may be converted from 16-bit form to 8-bit form. If this number is smaller than the length of the smallest code snippet, you would not be able to use any sort of pruning of the search tree, but would be forced to evaluate all N! leaves of the tree, which might take a long time...
Borland's Turbo Assembler (for x86 processors) had an option whereby it attempted to optimize (conditional) jumps:
1. All jumps were written without qualifiers.
2. The assembler would make multiple passes through the code, applying the following algorithm:
a. If a jump target was within +127/-128 bytes, output a short (2-byte) jump.
b. If an unconditional jump target was outside that range, output a 3-byte jump.
c. If a conditional jump target was outside that range, output a 5-byte sequence: a short jump on the inverted condition over the following jump (2 bytes), then an unconditional jump to the target (3 bytes).
This was applied in a loop until either no more jumps could be optimized or a predetermined number of loops was reached. Typically, only 2-3 loops were necessary.
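That loop can be sketched in Python under a simplified model. The item encoding, the size constants, and the `size_jumps` helper are all invented for illustration: every jump starts in its long form, and each pass shrinks any jump whose target has come within short range, which can in turn bring other jumps into range on the next pass.

```python
SHORT = 2      # short jump: opcode + signed 8-bit displacement
LONG_JMP = 3   # long unconditional jump
LONG_JCC = 5   # inverted-condition short jump over a long jump

def size_jumps(items, max_passes=10):
    """items: a list of ("code", n_bytes), ("label", name),
    ("jmp", target) or ("jcc", target) entries."""
    sizes = {i: (LONG_JMP if kind == "jmp" else LONG_JCC)
             for i, (kind, _) in enumerate(items) if kind in ("jmp", "jcc")}
    for _ in range(max_passes):
        addr, labels, pos = 0, {}, {}
        for i, (kind, arg) in enumerate(items):   # lay out addresses
            pos[i] = addr
            if kind == "label":
                labels[arg] = addr
            elif kind == "code":
                addr += arg
            else:
                addr += sizes[i]
        changed = False
        for i, (kind, arg) in enumerate(items):   # shrink in-range jumps
            if kind in ("jmp", "jcc") and sizes[i] != SHORT:
                disp = labels[arg] - (pos[i] + SHORT)
                if -128 <= disp <= 127:
                    sizes[i] = SHORT
                    changed = True
        if not changed:   # fixpoint reached: no jump changed size
            break
    return sizes
```

Because jumps only ever shrink, the process is guaranteed to converge, which matches the observation that only 2-3 loops are typically needed.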
In addition to the automatic method given above, I would try to write each procedure so that the jumps are all 8-bit forms. Optimizing a procedure by hand is likely to be much easier than attempting global optimization.
You have a misunderstanding about this -- the CLR doesn't understand any languages -- it's more like the languages understand the CLR, but even that is a misleading description.
Furthermore, most members here on CP (me anyway) are just ordinary developers who do not know (or care) anything about how the deep internals work. You would probably need to contact the developers at Microsoft to gain the level of detail you desire.
The things you are asking about are way beyond what is required for day-to-day development of commercial and enterprise applications. If you truly want to understand how it all works, you will likely need a doctorate degree.
I don't see anything wrong with trying to understand things that have already been developed successfully, and I need your help. If you already know this, then please share it; that would increase your own knowledge level too, Mr PIEBALDconsult. And I need to tell you one thing: Sir Isaac Newton didn't have a doctorate degree when he discovered the gravitational force, which means a doctorate degree is not necessary at all to become as great in knowledge as Sir Isaac Newton.
Hi friends, I need help!! I am an absolute beginner and need advice. The algorithm below is an extract from a textbook, but when I try to apply it and solve the problem on paper, I see that it fails: I get a remainder of 0 for the numbers I key in, up to 8 (I took the number 8 as an example and applied the steps below). Please advise. I also found that applying this algorithm to the number 2 results in 0 as well, and if the remainder is 0 then the number is not prime; so how is this algorithm correct?
2) read the number num
3) i <-- 2, flag <-- 1
4) repeat steps 4 through 6 until i >= num or flag = 0
5) rem <-- num mod i
6) if rem = 0 then flag <-- 0 else i <-- i + 1
7) if flag = 0 then print "number is not prime" else print "number is prime"
8) stop
In this step, if I use the number 2 as an example, the remainder comes out 0, which results in the number being reported as non-prime.
Ok, now it makes more sense. The problem here is that 2 is a special case, and they did not handle it. Simply add a step: if the number is two, it is prime.
So in short, yes, the book is wrong and you are right.
Edit: OK, now it makes less sense; why are most of the steps gone again? What I wrote above should still apply, though.
Thank you, I was editing so the steps could be read properly. At least I know that I was not wrong. I appreciate your help; it gives me confidence that people online can help me with my problems while I am learning.
So, assuming the number 2 is not properly handled in these steps: if we move through the sequence and find a number that divides evenly (remainder 0), we should consider the algorithm's answer correct, and therefore 99 would not show up as prime when the algorithm runs. Is that right?
I found the solution: the flag value can be either 1 or 0 in the course of this program; flag is just a variable. It is tested in step 7 to determine whether the number is prime or not prime. The flag value is initially set to 1 (during initialization). It may or may not change at step 6, depending on whether rem = 0. The value of rem is 0 when the calculation at step 5 produces no remainder. As you know, by doing num mod i we divide num by the current value of i and check the remainder. If the remainder is 0, that means num is divisible by i, so num cannot be a prime. Whenever we find that the remainder is 0, we set flag = 0, which means the number is not prime.
The algorithm in plain English is:
(0) initially set i=2 and flag=1
(1) take a number (num) which we want to test for being prime.
(2) Then we start dividing the number num successively by i = 2, 3, 4, ... up to (num-1) and check the remainder. At each iteration we increase i by 1.
(3) At any stage, if we find that num is divisible by i then (num is not prime) set flag=0
(4) If we reach i = (num-1) and the remainder is never 0 at any stage, then the flag value remains unchanged at 1 and the number is prime.
We test the flag value at the end of the program to decide whether num is prime or not.
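Putting the steps above together with the fix suggested earlier in the thread (treat 2 as a special case, since the book's loop body runs at least once and 2 would otherwise divide itself and be reported not prime), a runnable Python version might look like this:

```python
def is_prime(num):
    if num == 2:
        return True                # special case the book misses
    if num < 2:
        return False               # 0 and 1 are not prime (not covered above)
    i, flag = 2, 1
    while True:                    # repeat steps 5 and 6 ...
        rem = num % i              # step 5
        if rem == 0:               # step 6
            flag = 0
        else:
            i += 1
        if i >= num or flag == 0:  # ... until i >= num or flag = 0
            break
    return flag == 1               # step 7: test the flag

for n in (2, 8, 97, 99):
    print(n, "is prime" if is_prime(n) else "is not prime")
# 2 and 97 are prime; 8 and 99 are not
```

Note how 99 correctly comes out non-prime (it is divisible by 3), which answers the question raised earlier in the thread.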
As to the missing code, you need to use the Encode button so your angle brackets don't get interpreted as XML/HTML.
2) read the number num
3) i <-- 2, flag <-- 1
4) repeat steps 4 through 6 until i >= num or flag = 0
5) rem <-- num mod i
6) if rem = 0 then flag <-- 0 else i <-- i + 1
7) if flag = 0 then print "number is not prime" else print "number is prime"
8) stop
I've prototyped a way to do pattern discovery using SQL, but I still have a poor understanding of where this method fits in the data mining vernacular.
Being set-based, I'm not building a tree, although a functional tree does arise in the result set.
1) Build a "look up" table, by doing a cross join, yielding a combinatorial "dictionary" (a rainbow table) of n-gram "words."
2) Get the n-grams with COUNT(*) > n, using SQL GROUP BY, matched against a large table of items.
3) Further look for equivalent longer self-matches within the result set.
My seed table is 177 items, allowed to cross-join itself 3x, for a final table 2.8 million 3-gram words (takes about 35 seconds to build this table in Postgres).
The actual itemset table is 10 million rows in series (serially numbered), although the actual number of itemsets might be considered smaller.
I've recorded 35** seconds on the join between the two original tables, yielding all the simple repeating 3-grams meeting the GROUP BY's COUNT(*) > x (that's the dictionary joined to the itemsets).
That's the Q&D discovery step, and then subsequent steps simply apply a self-join for longer-chained repeating series. These have been pretty quick, in the 50 millisecond range.
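A self-contained sketch of steps 1) and 2), using SQLite from Python instead of Postgres purely so the example runs anywhere; the table and column names are invented. Rather than materializing the full cross-joined dictionary, this version extracts each 3-gram with a positional self-join and filters on the group count, which amounts to the same discovery step.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (pos INTEGER PRIMARY KEY, val INTEGER)")

# toy serially numbered data containing a repeating pattern (1, 2, 3)
data = [1, 2, 3, 7, 1, 2, 3, 9, 1, 2, 3]
conn.executemany("INSERT INTO items VALUES (?, ?)", list(enumerate(data)))

# every 3-gram via a positional self-join, grouped and filtered on count
rows = conn.execute("""
    SELECT a.val, b.val, c.val, COUNT(*) AS freq
    FROM items a
    JOIN items b ON b.pos = a.pos + 1
    JOIN items c ON c.pos = a.pos + 2
    GROUP BY a.val, b.val, c.val
    HAVING COUNT(*) > 1
    ORDER BY freq DESC
""").fetchall()

print(rows)   # -> [(1, 2, 3, 3)]: the 3-gram (1, 2, 3) occurs three times
```

The longer-chain self-matching of step 3) would then join this result set against itself on overlapping positions, which is the quick follow-up step described above.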
My questions are:
1) What's the best way to describe this algorithm? Frequent pattern? Motif?
2) It's a simple enough method, but is it fast enough for general use? I.e. other data mining apps where performance requirements are different from my own?
3) I've wondered whether SQL could be convinced to pattern-match like an LCS dynamic-programming algorithm, by matching across gaps in the sequence, perhaps with a lookup table of allowable variances and distances between matching values?
**Right now I'm seeing 50 seconds after the buffers load, but I reinstalled Postgres and my postgres.conf file is apparently all defaults now (the postgres process is back to only 16 MB of buffers, so it's suffering more I/O to the SSD drive).
seed table is 177 items, allowed to cross-join itself 3x, for a final table 2.8 million 3-gram words
actual itemset table is 10 million rows
Ehm, 2.8 million is about half of 177*177*177 - that's quite a lot, but does still fit into memory (RAM). With 10 million rows, your trigram table will have some 10^20 rows, and that's beyond the memory of any machine nowadays, even beyond the capacity of any hard disk.
It won't work (already that factor of 10^14 applied to the present 35 seconds should tell you that).
Oh, I forgot to mention that the trigram dictionary is trimmed by abs(val1+val2+val3) <= 88 (it's a vector dataset of small ints). But even the full 5.5M-row trigram dictionary might not slow things much, given the use of covering indices (the access is all via b-tree indices, obviating the need for as much memory).
I looked into using a 4-gram dictionary but it presented a very large table, much larger than the 3-gram dictionary, and worse it made for more overlapping duplicates in building the equivalent of an FP-tree (at least 1 extra overlap, whereas w/ the 3-gram matches I'm always overlapping by n+2). Also I sense a trigram-based tree innately reflects the smallest useful vector of from-and-to applicable.
One problem might be that in high-frequency datasets I could see an explosion of 3-gram noise that doesn't always support better (longer) matches, bloating the output. I understand that in FP-Tree algo's there's a minimum support criterion that perhaps works around this. There may be a way to ameliorate this in SQL, such as checking for matching adjacencies in a manner that'll optimize via a correlated subquery, using ANY or [NOT] IN.
I haven't had enough time to fully experiment with various datasets; I've been going through an application language selection process** and am contemplating looking into PostGIS's geometric data and index features (R-tree indices) as a way to get better, longer string matches, perhaps even supporting approximate or intermittent matches akin to LCS/cosine match algorithms (but on larger sets with more expressive syntax).
I'm prepared to start coding in C or Julia, but I'll avoid it if Postgres proves "fast enough." That's b/c as new data are imported to the DBMS I'll want to rerun pattern discoveries in the background against the main data store. My current 10 million rows are an exorbitant sample, out-scaling anything I expect to encounter in the actual data (MIDI note vectors).
It's a very old paper (circa 2001?), but I'll probably be following their methodology. Maybe they gave up b/c DB/2 was too slow vs. algorithmic FP-Growth in C++.
Also, from a 2006 paper: http://webdocs.cs.ualberta.ca/~zaiane/postscript/adma05.pdf
"...In this work we presented COFI-Closed, an algorithm for mining frequent
closed patterns. This novel algorithm is based on existing data structures FP-tree
and COFI-tree. Our contribution is a new way to mine those existing structures
using a novel traversal approach. Using this algorithm, we mine extremely large
datasets, our performance studies showed that the COFI-Closed was able to
mine efficiently 100 million transactions in less than 3500 seconds on a small
desktop while other known approaches failed..."
I know that's old hat by now (my 2008-era Thinkpad T400 Celeron laptop w/ its 4GB RAM & SSD drive vs. his "small desktop"), but I'm in the ballpark.