|
Radix is Latin for root -> Radix Sorting!
Cheers!
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian
|
|
|
|
|
Excellent, now it makes sense
|
|
|
|
|
It is really called Radix Sorting[^]. Radix is Latin and means root. So it seems to me someone bungled this up and it got "lost in translation".
Cheers!
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian
|
|
|
|
|
You can read Cormen (Introduction to Algorithms) to understand the basics of algorithms.
|
|
|
|
|
Why are you replying to me?
Are you confused?
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian
|
|
|
|
|
I am doing an introductory course on algorithms, and I've come across this problem which I'm unsure about. I would like to know which of the two is dominant:
f(n): 10 log n or g(n): log(n^2)
Given the definitions of each of: Ω, Θ, O
I assumed f(n), so f(n) = Ω(g(n)).
My reasoning is that log(n^2) is equivalent to 2 log n, so 10 log n is dominant.
Could somebody verify or offer an alternative?
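A quick numeric check of this reasoning (just a sketch): since log(n^2) = 2 log n, the two functions differ only by a constant factor, so f(n) = Ω(g(n)) does hold, and in fact f(n) = Θ(g(n)):

```python
import math

# log(n^2) = 2*log(n), so 10*log(n) / log(n^2) = 10/2 = 5 for every n > 1:
# a constant ratio, which is exactly what f(n) = Theta(g(n)) means.
for n in (10, 1000, 10**6):
    f = 10 * math.log(n)
    g = math.log(n ** 2)
    print(n, f / g)  # ratio is 5.0 (up to floating point) every time
```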
|
|
|
|
|
Why don't you draw the graphs and see which one stays closer to the bottom? Basically, these notations are just a grading system for algorithms (used to classify algorithms by their complexity), to find out which one is better than another. I personally use only Big O notation (instead of lower bounds, both bounds, etc.; instead of all of that, just consider Big O and draw the graph). Go here and do that: https://www.desmos.com/calculator[^]. The faster a function grows with each input, the more time and space it consumes (time and space may be relative to the data source or the algorithm being used). That is why a better algorithm is the one that doesn't take as much time. In my opinion, log(n^2) is the better function; look at the slopes that each of them makes.
Secondly, Quote: which of the 2 are dominant What do you mean by dominant? The better algorithm would have a lower slope compared to the one with a higher slope (the higher-slope one is the worse algorithm, depending on the data source).
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
Thanks for the link.
I assumed that a "dominant" function is equivalent to f = Ω(g), where f is 10 log n.
So we are in agreement, since conversely g is O(f), where g is log(n^2). I was simply attempting to use a mathematical approach to prove or disprove it.
|
|
|
|
|
Hello all,
I found this task:
"
Given an undirected graph G = (V,E), and an integer k, the cycle-elimination problem is to decide if all the cycles of the graph can be eliminated by deleting at most k edges from the graph. Either show that the problem is NP-complete, or describe a polynomial-time algorithm for it
"
Do you have any ideas for the solution?
Shall I reduce the task to the Feedback Arc Set problem?
Thank you very much.
|
|
|
|
|
|
If I want to compute/calculate A*B, then:
In the Russian peasant's algorithm:
Declare and define new helper variables C and P (P for the product), initialized to 0, i.e. all bits are initialized to 0.
NOW:
1. Copy A to C.
2. Select 1 random bit of B whose value/state is 1, and left-shift C by the number of bits to the right of the selected bit, i.e. if B = 0000100, then C should be left-shifted 2 bits, because there are 2 bits to the right of the single 1 bit of B.
The new bits of C after the left shift are set to 0.
3. Add C to P.
4. Turn the selected bit to 0. If B becomes 0, i.e. all bits are 0, then P is the product of A and B, i.e. P = A*B; otherwise repeat all steps, i.e. redo steps 1, 2, 3, 4, etc.
Is that correct/right? If yes, then why is the CPU/ALU designed to use Booth's algorithm instead of the Russian peasant's algorithm? In some cases, the Russian peasant's algorithm can be much easier to perform than Booth's algorithm, when most bits of the multiplier B are 0 and only a few are 1.
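For what it's worth, the steps above can be sketched in a few lines of Python (non-negative integers only; this picks the lowest set bit each time rather than a random one):

```python
def russian_peasant_multiply(a, b):
    """Shift-and-add product of two non-negative integers, as in steps 1-4."""
    p = 0                                  # P: the running product
    while b:                               # step 4: repeat until B is all zeros
        shift = (b & -b).bit_length() - 1  # step 2: position of a set bit of B
        p += a << shift                    # steps 1-3: C = A << shift, P += C
        b &= b - 1                         # step 4: clear the selected bit
    return p
```

For example, `russian_peasant_multiply(13, 11)` returns 143, the same as 13 * 11, after three iterations (one per set bit of 11 = 1011).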
|
|
|
|
|
That is correct (though usually you wouldn't select a random 1 but a particular 1, eg going from low to high or from high to low, you can use some additional tricks that way). Typical hardware implementations don't use this because it takes a lot of steps. They typically don't really use Booth's algorithm either, but some of the ideas are reused.
Yes, it may take very few steps, but it can also take many steps (consider -1 * -1). It's bad enough that it can take many steps, but the variation is really bad as well: it throws a wrench into the usual pipelining and forwarding tricks. I suppose you might pipeline it by unrolling that entire loop, which is not very pleasant in terms of area, and then it has multiple exits, which on top of being generally annoying means you can't do the usual thing of dispatching an instruction the cycle before its inputs will be ready and then relying on forwarding, because you don't know when the multiplication ends until it does (OK, you kind of could, but it sucks). It's horrible.
So other algorithms were invented, such as the Dadda multiplier and the Wallace tree. The idea (for both of them) is to first compute all pairwise single-bit products, then add them up using as many simple adders as possible (not full word adders, but the 3:2 compressor kind). That works out really well, because the first step (the single bit products) are all completely independent, and then you get a ton of product bits that you can reduce to fewer bits with independent full adders. It takes some number of layers of such adders, and the layers are dependent, but within a layer everything is parallel.
And then you still need a multi-bit adder at the end, where you'd use any fast adder design you like. You only need one of them, so go crazy, unlike an unrolled Russian peasant multiplication (RPM) where you'd need a metric ton of big adders (you could use ripple carry, but then it sucks). So it's smaller and faster than unrolled RPM, and pipelinable unlike iterative RPM.
That's not the end though, for example you can use slightly larger modules that add up 5 bits into 3 in a particular way and build the same sort of layered tree out of those, which is a bit faster. And you can shuffle the connections of those modules around a bit so that you connect slow bits (the modules don't calculate their outputs all at the same time) to "fast inputs" (the delay through a module is not the same for every input) and vice versa to compensate the timings.
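To make the layered-reduction idea concrete, here is a toy software model (a sketch only; real designs are gate-level and use the timing tricks described above). Partial products are kept as per-column bit lists, and each pass applies 3:2 compressors (full adders) until every column holds at most two bits; one ordinary addition then finishes the job:

```python
def wallace_multiply(a, b, width=8):
    """Toy column-wise Wallace/Dadda-style reduction for width-bit operands."""
    # 1) all pairwise single-bit products, all independent of each other
    cols = [[] for _ in range(2 * width)]
    for i in range(width):
        for j in range(width):
            cols[i + j].append(((a >> i) & 1) * ((b >> j) & 1))
    # 2) layers of 3:2 compressors: 3 bits of one column -> 1 sum bit in that
    #    column plus 1 carry bit in the next column
    while any(len(c) > 2 for c in cols):
        nxt = [[] for _ in range(len(cols) + 1)]
        for k, c in enumerate(cols):
            while len(c) >= 3:
                x, y, z = c.pop(), c.pop(), c.pop()
                nxt[k].append(x ^ y ^ z)                        # sum bit
                nxt[k + 1].append((x & y) | (x & z) | (y & z))  # carry bit
            nxt[k].extend(c)  # 0-2 leftover bits pass straight through
        cols = nxt
    # 3) a single carry-propagate addition over the two remaining rows
    row0 = sum((c[0] if len(c) > 0 else 0) << k for k, c in enumerate(cols))
    row1 = sum((c[1] if len(c) > 1 else 0) << k for k, c in enumerate(cols))
    return row0 + row1
```

Within each pass, every compressor is independent; only the passes themselves are sequential, which is what makes the hardware version fast.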
Now Booth's encoding, specifically the radix-4 modified Booth encoding (if you go higher the circuits become annoying, not worth it), can be used to improve that even more, because you get fewer partial products.
Anyway, on a typical modern processor, multiplication takes maybe 3 to 5 times as much time as an addition. You can't match that with RPM unless there are very few bits set in the multiplier; on average it would lose by a huge margin.
|
|
|
|
|
Thanks for the quick and detailed answer. Now I am satisfied.
|
|
|
|
|
I am creating a DOORS database script that will be used to get all differences between 2 versions of a particular DOORS table (new vs old). I want to get all differences between the 2 tables (additions, deletions and changes). For anyone who doesn't know about DOORS, when an object is removed from a DB, the program marks the item's attribute as deleted, but you can still see it when you loop through all items. What I want to do is create a function that will display any items that have been changed since the older version.
Here is the algorithm I'm using:
for each newItem in newVers {
    if (newItem.deleted) {
        // look up the matching oldItem and check whether it was already deleted in oldVers
        if (!oldItem.deleted) {
            // report newItem as newly deleted
        }
    } else {
        if (newItem.id exists in oldVers) {
            if (newItem != oldItem) {
                // report newItem as changed
            }
        } else {
            // report newItem as added
        }
    }
}
Does this look like the way I should do this, or is there a better way? Just to reiterate, I am only looking for what, in the newer version has changed since the older one. I am not worried about changes in the opposite direction.
Any ideas would be appreciated.
Chris
modified 26-Sep-16 9:02am.
|
|
|
|
|
Your approach is accurate, but could be slow. I'm guessing that most of the items are unchanged, so
if (newItem.id exists in oldVers) could be an expensive operation, particularly if id is not indexed.
If your database supports fast sequential access on that field, then there are algorithms that "walk" the old and new, much the way text editors do file differencing.
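One cheap way around the expensive lookup, sketched in Python rather than DXL (all field names here are hypothetical): build a hash index of the old version once, then classify each new item in a single pass:

```python
def diff_versions(new_vers, old_vers):
    """Each version is a list of dicts with 'id', 'deleted', 'text' keys
    (illustrative names). Returns only changes seen from the new version."""
    old_by_id = {item["id"]: item for item in old_vers}  # built once, O(n)
    added, removed, changed = [], [], []
    for item in new_vers:
        old = old_by_id.get(item["id"])          # O(1) lookup per item
        if item["deleted"]:
            if old is not None and not old["deleted"]:
                removed.append(item)             # newly deleted since old version
        elif old is None:
            added.append(item)                   # id not present in old version
        elif item["text"] != old["text"]:
            changed.append(item)                 # same id, different content
    return added, removed, changed
```

The index costs one pass over the old version, after which every membership test is constant time instead of a scan.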
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
I have yet another issue with this function that I didn't think of originally. There are a bunch of attributes the user can choose from to check for changes, and I want to list those attributes, if they've changed, in a new column. However, since the text and heading are attributes we always check first, I want to skip over those attributes, since they will always be in the list. What is the best way to ignore those two attributes in the list? My first thought was to include an if-statement within the loop to look for those items. Is this the best option, or is there another way I'm not thinking of?
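For reference, the if-inside-the-loop idea could look like this (a Python sketch, not DXL; the attribute names and the `has_changed` helper are hypothetical):

```python
# The two always-checked attributes live in one set, so the skip is a single
# membership test inside the loop rather than a chain of comparisons.
ALWAYS_CHECKED = {"Object Text", "Object Heading"}  # hypothetical names

def changed_attributes(attr_names, has_changed):
    """Return the user-chosen attributes that differ, skipping the two that
    are always checked first. has_changed is a per-attribute test function."""
    return [name for name in attr_names
            if name not in ALWAYS_CHECKED and has_changed(name)]
```

Keeping the skip list in one set means adding a third always-checked attribute later is a one-line change.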
Chris
|
|
|
|
|
This is part of my last post here, but I've made the question much smaller.
I'm trying to write a function that will compress items measured in inches.
So, say I have 10 of these:
Length = 72
Width = 4
Height = 3
And I package 6 of them into this, leaving the remaining 4 for another package:
Length = 72
Width = 24
Height = 3
I want to compress them further by decreasing the width and increasing the height, like this. I took 2 of them and stacked them taller, increasing the height:
Length = 72
Width = 12
Height = 6
I have this, which works in one pass, but I'm trying to figure out a way to make multiple passes until there are no more passes to make. I can't seem to figure out a method to do this:
If (item.Width > item.Height) Then
If (item.Qty > 2) Then
item.Width -= p.Width * 3
item.Height += p.Height
End If
End If
|
|
|
|
|
Debug the program.
I believe the line below is causing it to work in only one pass:
If (item.Width > item.Height) Then
|
|
|
|
|
Well... I'm pretty sure you have to rethink the design of your program.
Imagine you have a box (container):
Height (3)
_ _ _ _ _ _
/| |
/ | |
/ |_ _ _ _ _ _|
/ / /
/ / /
| / /
| / / Length (72)
|/_ _ _ _ _ _/
Width (24)
In that container 6 smaller boxes(containers {72x4x3}) are there. You want to repack these boxes into container with changed width and height:
Height (6)
_ _ _ _
/| |
/ | |
/ | |
/ | |
/ | |
| |_ _ _ _|
| / /
| / /
| / /
| / / Length (72)
|/_ _ _ _/
Width (12)
Assuming that you want to find out whether it's possible to repack these inner containers, your algorithm has to check that:
- no dimension of an inner container exceeds the size of the outer container,
- the occupied volume inside the box (the sum of the inner containers' volumes) is not larger than the outer container's volume.
As long as the above conditions are met, you can add another box (inner container)!
Note that:
Volume (of box) = Width * Length * Height
Occupied volume = sum of the inner containers' volumes.
And a final note: do not change the size of the outer box; create a new one and repack all boxes.
Try!
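Those two checks could be sketched like so (Python rather than VB; box rotation is ignored, and note the volume test is only a necessary condition, not a guarantee that a real physical packing exists):

```python
def can_add_box(outer, inners, candidate):
    """Boxes are (length, width, height) tuples in inches.
    True if the candidate passes both repacking checks from the post."""
    # check 1: no dimension of the inner box exceeds the outer box
    if any(c > o for c, o in zip(candidate, outer)):
        return False
    # check 2: total occupied volume must stay within the outer box's volume
    volume = lambda box: box[0] * box[1] * box[2]
    used = sum(volume(b) for b in inners)
    return used + volume(candidate) <= volume(outer)
```

With the numbers from the diagrams, a 72x24x3 container accepts a sixth 72x4x3 box (it fills the volume exactly), and rejects a seventh.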
|
|
|
|
|
I thought about that.
It would be easier to ask whether this will fit in the space rather than try to create the space.
Not sure if I mentioned this, but the point of the program is just to get a rate from UPS or FedEx by creating virtual packages based on dimensional weight and gravity weight, and to avoid creating large-package conditions by keeping them smaller.
Maciej Los wrote: Volume (of box) = Width * Length * Height
That's dimensional weight; yes, I could just sum up the dWeight until I hit a target.
So if I were to sum up, say,
3 x 3 x 3 = 27,
and I have 10 of them, for 270,
then 3 x 3 x 30 should equal 270, and it does. Hmm...
My question was how to adjust the size of 3 x 3 x 30 to 15 x 6 x 3 and then try one more time.
I should just hire someone to write this for me.
|
|
|
|
|
I should just hire someone to write this for me.
Jim, never give up! There's a ton of members ready to help you (at least me).
Seems you forgot to use a loop!
If (item.Qty > 2) Then
    Do While (item.Width > item.Height)
        item.Width -= p.Width * 3
        item.Height += p.Height
    Loop
End If
Am I right?
modified 28-Sep-16 2:03am.
|
|
|
|
|
I have no clue how to paste in my LinkedIn link; I've tried many times.
I actually have that sample code above working better now.
If you're curious about the program...
This is the code packaged in a console app in VS2015.
There is a class called sample data; you can remove the remarks (comments) to load extra data.
In the main class, you can uncomment data to load.
It outputs packages in the console. In the real program, those packages will be submitted to UPS for a rate quote, packaged in the packages XML section, and looped around for a single multi-package quote.
The rules are: long items will be shipped separately.
Isolated items are items that will ship in their original box; slap a sticker on them.
Common items are the rest of the stuff that will be tossed in a box.
If items trigger the Large Package Indicator, start a new package to save the customer money.
At last, see if we can further combine packages to keep the count down.
Dropbox - RTST Rev 2016092701.zip[^]
I'm not trying to get someone to do my work for me, I'm just having trouble wrapping my head around this.
I used to just add up all the mass until a package was created, then cube the mass into a square. But that failed for long items from 6 ft to 8 ft long.
And it failed for isolated items as well.
Thin items are tough; I was doing thin separately, but have now combined them into common.
[edit]
This is a clean copy of the program, in which I started again today. I put the thin items back in and will try to streamline the packaging, see Rev5 class comments.
Dropbox - RTSR_Console 3.zip[^]
modified 28-Sep-16 2:03am.
|
|
|
|
|
|
Thank you for looking at it. I'm working on it today with some new ideas.
|
|
|
|
|
After writing it about 12 times, I came up with this. It works pretty well, but could be better.
There's a routine called combine_virtual_packages_common that could use better refining.
I was thinking perhaps I should treat a batch of items like water: take a box that is 24x17x14, treat the item line as a water level, and fill the box, say, 1 or 2 inches at a time.
Dropbox - RTSR Console 2016093001.zip[^]
|
|
|
|
|