Ok, now it makes more sense. The problem here is that 2 is a special case, and they did not handle it. Simply add a step: if the number is two, it is prime.
So in short, yes, the book is wrong and you are right.
edit: OK, now it makes less sense; why are most of the steps gone again? What I wrote above should still apply, though.
Thank you, I was editing so the steps could be read properly... at least I know that I was not wrong. I appreciate your help; it gives me confidence that people online can help me with my problems while I am learning.
So, assuming the case of the number 2 is not properly handled in these steps: if, as we move through the sequence, we find a divisor with remainder 0, we should consider the number not prime, and therefore 99 would not show up as prime when the algorithm runs. Is that correct?
I found the solution: the flag value can be either 1 or 0 in the course of this program. Flag is just a variable; it is tested in step 7 to determine whether the number is prime or not. The flag value is initially set to 1 (during initialization). It may or may not change at step 6, depending on whether rem = 0. The value of rem is 0 when the calculation at step 5 produces 0 (which happens when the division leaves no remainder). As you know, by doing num mod i we divide num by the current value of i and check the remainder. If the remainder is 0, that means num is divisible by i, so num cannot be prime. Whenever we find that the remainder is 0, we set flag = 0, which means the number is not prime.
The algorithm in plain English is:
(0) initially set i=2 and flag=1
(1) take a number (num) which we want to test for being prime.
(2) Then we divide num successively by i = 2, 3, 4, ... up to (num-1) and check the remainder. At each iteration we increase i by 1.
(3) At any stage, if we find that num is divisible by i (so num is not prime), we set flag = 0.
(4) If we reach i = (num-1) and the remainder was never 0 at any stage, then the flag value remains unchanged at 1 and the number is prime.
We test the flag value at the end of the program to decide whether num is prime or not.
As to the missing code, you need to use the Encode button so your angle brackets don't get interpreted as XML/HTML.
2 read the number num
3 i <-- 2, flag <-- 1
4 repeat steps 4 through 6 until i >= num or flag = 0
5     rem <-- num mod i
6     if rem = 0 then flag <-- 0
      i <-- i + 1
7 if flag = 0 then
      print number is not prime
  else
      print number is prime
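For reference, here is the same algorithm as a small runnable Python function, with the special case for 2 from the top of the thread added explicitly (the function name and the test prints are mine):

def is_prime(num):
    # Special case discussed above: the book's steps miss that 2 is prime.
    if num == 2:
        return True
    if num < 2:
        return False
    i, flag = 2, 1                # step 3: start dividing at 2, assume prime
    while i < num and flag == 1:  # step 4: loop until i reaches num or flag = 0
        rem = num % i             # step 5: remainder of num divided by i
        if rem == 0:              # step 6: divisible, so num is not prime
            flag = 0
        i += 1
    return flag == 1              # step 7: flag still 1 means prime

print(is_prime(2))   # True
print(is_prime(99))  # False, since 99 mod 3 == 0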
I've prototyped a way to do pattern discovery using SQL, but I still have a poor understanding of where this method fits in the data mining vernacular.
Being set-based, I'm not building a tree, although a functional tree does arise in the result set.
1) Build a "look up" table, by doing a cross join, yielding a combinatorial "dictionary" (a rainbow table) of n-gram "words."
2) Get COUNT(*) > n, using SQL GROUP BY, matched against a large table of items (sketched below).
3) Further look for equivalent longer self-matches within the result set.
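To make step 2 concrete, here's a minimal Python sketch of the same counting idea on a toy series (the data and the min_support threshold are illustrative; the actual implementation is the SQL GROUP BY/HAVING described above):

from collections import Counter

# Toy stand-in for the serially numbered itemset table (illustrative data).
series = [3, 7, 2, 3, 7, 2, 5, 3, 7, 2, 9]

min_support = 2  # plays the role of the n in COUNT(*) > n

# Group every consecutive 3-gram and keep the frequent ones, as
# GROUP BY (w1, w2, w3) HAVING COUNT(*) > n does in the SQL version.
trigrams = Counter(tuple(series[k:k + 3]) for k in range(len(series) - 2))
frequent = {gram: cnt for gram, cnt in trigrams.items() if cnt > min_support}

print(frequent)  # {(3, 7, 2): 3}

Step 3 then chains overlapping frequent 3-grams (the self-join) into longer repeating series.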
My seed table is 177 items, allowed to cross-join itself 3x, for a final table of 2.8 million 3-gram words (it takes about 35 seconds to build this table in Postgres).
The actual itemset table is 10 million rows in series (serially numbered), although the actual number of itemsets might be considered smaller.
I've recorded 35** seconds on the join between the two original tables, yielding all the simple repeating 3-grams meeting GROUP BY's COUNT(*) > x (that's the dictionary joined to the itemsets).
That's the Q&D discovery step, and then subsequent steps simply apply a self-join for longer-chained repeating series. These have been pretty quick, in the 50 millisecond range.
My questions are:
1) What's the best way to describe this algorithm? Frequent pattern? Motif?
2) It's a simple enough method, but is it fast enough for general use? I.e. other data mining apps where performance requirements are different from my own?
3) I've wondered if SQL could be convinced to pattern-match like an LCS dynamic-programming algorithm, by matching across gaps in the sequence, perhaps with a lookup table of allowable variances and distances between matching values?
**Right now I'm seeing 50 seconds after the buffers load, but I reinstalled Postgres & my postgres.conf file is apparently all defaults now (the postgres process is back to only 16 MB of buffers, so it's suffering more I/O to the SSD drive).
seed table is 177 items, allowed to cross-join itself 3x, for a final table 2.8 million 3-gram words
actual itemset table is 10 million rows
Ehm, 2.8 million is about half of 177*177*177 - that's quite a lot, but it does still fit into memory (RAM). With 10 million rows, your trigram table will have some 10^21 rows, and that's beyond the memory of any machine nowadays, even beyond the capacity of any hard disk.
It won't work (already that factor of 10^14 applied to the present 35 seconds should tell you that).
Oh, I forgot to mention that the trigram dictionary is trimmed by abs(val1+val2+val3) <= 88 (it's a vector dataset of small int). But the 5.5m trigram dictionary might not slow things much given the use of covering indices (the access is all via b-tree indices, obviating the need for as much memory).
I looked into using a 4-gram dictionary, but it presented a very large table, much larger than the 3-gram dictionary, and worse, it made for more overlapping duplicates in building the equivalent of an FP-tree (at least 1 extra overlap, whereas with the 3-gram matches I'm always overlapping by n+2). Also, I sense a trigram-based tree innately reflects the smallest useful from-and-to vector.
One problem might be that in high-frequency datasets I could see an explosion of 3-gram noise that doesn't always support better (longer) matches, bloating the output. I understand that FP-tree algorithms have a minimum support criterion that perhaps works around this. There may be a way to ameliorate this in SQL, such as checking for matching adjacencies in a manner that'll optimize via a correlated subquery, using ANY or [NOT] IN.
I haven't had enough time to fully experiment with various datasets; I've been going through an application language selection process & am contemplating looking into PostGIS's geometric data & index features (R-Tree indices) as a way to get better, longer string matches, perhaps even supporting approximate or intermittent matches akin to LCS/cosine match algorithms (but on larger sets with more expressive syntax).
I'm prepared to start coding in C or Julia, but I'll avoid it if Postgres proves "fast enough." That's b/c as new data are imported to the DBMS I'll want to rerun pattern discoveries in the background against the main data store. My current 10 million rows are an exorbitant sample, out-scaling anything I expect to encounter in the actual data (MIDI note vectors).
It's a very old paper (circa 2001?), but I'll probably be following their methodology. Maybe they gave up b/c DB/2 was too slow vs. algorithmic FP-Growth in C++.
Also, from a 2006 paper: http://webdocs.cs.ualberta.ca/~zaiane/postscript/adma05.pdf
"...In this work we presented COFI-Closed, an algorithm for mining frequent
closed patterns. This novel algorithm is based on existing data structures FP-tree
and COFI-tree. Our contribution is a new way to mine those existing structures
using a novel traversal approach. Using this algorithm, we mine extremely large
datasets, our performance studies showed that the COFI-Closed was able to
mine efficiently 100 million transactions in less than 3500 seconds on a small
desktop while other known approaches failed..."
I know that's old hat by now (my 2008-era Thinkpad T400 Celeron laptop w/ its 4GB RAM & SSD drive vs. his "small desktop"), but I'm in the ballpark.
Perhaps I'm not thinking about it right (I have a cold so I'm not particularly sharp right now), but it seems to me that if the item is in the second half of the array, it will essentially devolve into a linear search.
Yes, OK, I guess the cold got to me. I was thinking something weird based on the assumption that mid started in the middle, which it obviously doesn't.
So it works; that's good. It seems closely related to the variant which keeps a "midpoint" and a "span" (here it's the next span (mid), and the midpoint plus that next span (i)). It has the same pros and cons too: it needs a fixup at the end, but the inner loop is simple. The "midpoint/span" variant is usually seen (when seen at all) in its "more complicated math in the inner loop" form, which doesn't need the fixup, but then what's the point.
Using a midpoint and a span is slower because it requires 2 assignments per loop as opposed to 1.5 (on average) in my implementation.
I assume it's the fixup and assignment issue that left academics stumped for the past 60 years. There are also caching issues for larger arrays; I've created a variant that is mindful in that regard.
In a quick test, GCC didn't feel like using cmovs for the plain old "left and right bounds" binary search. You can easily measure a huge difference due to that sort of thing, and I'm not sure that's 100% fair; after all, you could implement ye olde binary search with cmovs.
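For reference, a minimal Python sketch of the midpoint/span family of binary searches (names and details are illustrative, not the exact code under discussion):

def span_search(arr, target):
    # Keep one candidate index and a shrinking span instead of
    # left/right bounds; only the candidate moves each pass.
    i = 0
    span = len(arr)
    while span > 1:
        half = span // 2
        if arr[i + half] <= target:  # the single conditional step per pass
            i += half                # (branchless-friendly, cmov-shaped)
        span -= half
    # Fixup at the end: verify the one remaining candidate.
    if arr and arr[i] == target:
        return i
    return -1

assert span_search([1, 3, 5, 7, 9], 7) == 3
assert span_search([1, 3, 5, 7, 9], 4) == -1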
This approach has been done before, e.g. the implementation of qsort in the C runtime library transitions from quicksort to a (linear) insertion sort when the partition size gets below a certain threshold.
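A sketch of that hybrid idea, with an illustrative threshold (real C library qsorts differ in pivot choice and tuning):

THRESHOLD = 16  # illustrative cutoff; real implementations tune this empirically

def insertion_sort(a, lo, hi):
    # Linear insertion sort of a[lo..hi], fast on small runs.
    for j in range(lo + 1, hi + 1):
        v, k = a[j], j
        while k > lo and a[k - 1] > v:
            a[k] = a[k - 1]
            k -= 1
        a[k] = v

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= THRESHOLD:
        insertion_sort(a, lo, hi)  # small partition: switch algorithms
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:  # Hoare-style partition around the pivot value
        while a[i] < pivot: i += 1
        while a[j] > pivot: j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)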
I want to calculate the surface of an area on a sphere including projection effect.
For this I have a scanned image (which results in a 2D disc). On this image I can highlight an area, and I have the index of each pixel of that area relative to the center of that disc (x, y coordinates). I also have the radius (in pixels and in meters).
How can I calculate the correction needed for each pixel in order to take in account the projection effect (larger at the edges of the disc)?
It's a pretty long solution, so I'll try to be as brief as possible.
1. You need to scale down the sphere to a unit sphere (radius=1)
2. With that you know that x²+y²+z² = 1 and thus z = sqrt(1-x²-y²) and the surface (of a full sphere) is 4*pi*r² = 4*pi, but you only have half the sphere so S=2*pi
3. What you need is the cosine of the angle of the pixel with the Z-axis. This is the z calculated in step 2 (the sqrt) divided by the radius which you reduced to one.
4. In the end you get this formula for the area fraction per pixel: 1/r² * 1/(2*pi*sqrt(1-x²-y²)). If you loop through the pixels of the complete area, you just need to sum up these per-pixel contributions.
5. This results in a fraction of the surface compared to the sphere surface which you can then convert to any unit you like.
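Putting steps 1-5 together, a small Python sketch (the function name and the pixel-list format are illustrative):

import math

def hemisphere_fraction(pixels, r):
    # pixels: iterable of (x, y) pixel offsets from the disc centre
    # r:      disc radius in pixels
    # Returns the fraction of the hemisphere's surface; multiply by
    # 2*pi*R**2 (R = radius in meters) for an area in square meters.
    total = 0.0
    for x, y in pixels:
        xs, ys = x / r, y / r                         # step 1: scale to unit sphere
        z = math.sqrt(max(0.0, 1.0 - xs**2 - ys**2))  # step 2/3: z = cos(angle)
        if z == 0.0:
            continue  # pixel exactly on the limb: the correction diverges there
        total += 1.0 / (r**2 * 2.0 * math.pi * z)     # step 4: fraction per pixel
    return total

Summed over the whole disc this should approach 1 (the full hemisphere), up to discretization error near the limb, which is a handy sanity check for step 5.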
On first sight this "looks" correct.