|
Leslie Sanford wrote: In fact, I can't think of how to access the items in the hashtable by index at all.
A perfect hash would give you O(1) - the hash would be guaranteed to correspond to only one entry in the table. Whether or not a perfect hash is feasible depends a lot on what you're using for keys, as well as how much time is acceptable for calculating the hash.
But i was thinking more along the lines of chained hash tables[^], which are essentially an array of linked lists - this is the technique used by MFC's various CMap classes. Their performance can be very consistent but iteration tends to be rather poor unless table size and hash functions are chosen carefully.
----
It appears that everybody is under the impression that I approve of the documentation. You probably also blame Ken Burns for supporting slavery.
--Raymond Chen on MSDN
|
|
|
|
|
Shog9 wrote: A perfect hash would give you O(1) - the hash would be guaranteed to correspond to only one entry in the table. Whether or not a perfect hash is feasible depends a lot on what you're using for keys, as well as how much time is acceptable for calculating the hash.
Understood. But just to be clear, what I meant by not knowing how to access items in a hashtable by index was that I don't know how you could treat a hashtable as an array. Say you're using keys of a certain type. To get the value associated with a key in a hashtable, you do this:
Value someValue = hashTable[someKey];
However, say we want the 10th item added to the hashtable. I don't know how we do this:
Value someValue = hashTable[9];
We can't treat the index itself as a key because as items are removed or inserted somewhere at the beginning or middle, we'd have to update all of the subsequent indexes; we're back where we started from.
Shog9 wrote: But i was thinking more along the lines of chained hash tables[^], which are essentially an array of linked lists - this is the technique used by MFC's various CMap classes. Their performance can be very consistent but iteration tends to be rather poor unless table size and hash functions are chosen carefully.
I could see how this would work for simulating an array. Have a linked list with each node containing a linked list. Limit the number of items in each list to some arbitrary number like 16. Then if you need to reach the nth item, you can skip over a large number of items at a time. I can see problems with this as items are deleted; you'd wind up with some nodes not having very many items in their lists. At any rate, the more I think about this, the more I get the feeling that I'm just reinventing the tree, and my brain starts to short-circuit.
|
|
|
|
|
Leslie Sanford wrote: But just to be clear, what I meant by not knowing how to access items in a hashtable by index was that I don't know how you could treat a hashtable as an array.
Of course... i didn't read carefully.
I suppose you could still manage this with a hash table, provided the data was known ahead of time... but that's hardly a general-purpose solution. Realistically, i'd probably go with an array + some sort of associative container (hashtable or tree) and just rebuild the array after adding or removing items. But it'd depend a lot on the situation; if constant-time lookups weren't absolutely essential, i might just fall back on iterating to the nth item; in the case of a sorted hash table, this could be optimized by storing the number of items in each bucket (or even an index offset), reducing the operation to iterating to the correct bucket then through that list to the correct item. Honestly can't say that this has been a requirement for anything i've actually done though.
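The "array + some sort of associative container, rebuild after mutation" idea can be sketched roughly like this (a Python sketch with illustrative names, not a definitive implementation; the same shape works with any list plus hash map):

```python
# Sketch of the "array + associative container" idea: a list gives O(1)
# access by position, a dict gives O(1) average access by key, and the
# key->position map is rebuilt after every removal. All names illustrative.

class IndexedMap:
    def __init__(self):
        self._items = []   # list of (key, value), in insertion order
        self._pos = {}     # key -> current position in _items

    def _rebuild(self):
        # O(n) rebuild of the key->index map after a structural change
        self._pos = {k: i for i, (k, _) in enumerate(self._items)}

    def add(self, key, value):
        self._items.append((key, value))
        self._pos[key] = len(self._items) - 1

    def remove(self, key):
        del self._items[self._pos[key]]
        self._rebuild()

    def by_index(self, i):
        return self._items[i][1]                  # O(1) positional access

    def by_key(self, key):
        return self._items[self._pos[key]][1]     # O(1) average keyed access
```

The trade-off is exactly the one described: lookups in either direction stay constant-time, but every removal pays an O(n) rebuild, so it only wins when mutation is rare relative to lookup.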
----
It appears that everybody is under the impression that I approve of the documentation. You probably also blame Ken Burns for supporting slavery.
--Raymond Chen on MSDN
|
|
|
|
|
Maybe there is scope for improving things with threading; as time goes on we will have more and more parallel processing available. So why not a list-type structure with an associated indexing array (simply of pointers) which updates in a secondary thread? This would be completely transparent to the user, allowing indexed access in O(1) time, unless the primary thread has to block because the user tries indexed access while the indexing array is being updated. There are clearly some threading issues to be resolved, but it should be possible to come up with a simple, clean system. There are probably other types of structures that could benefit from some behind-the-scenes processing; lists that keep themselves sorted come to mind. Maybe we should patent this - someone probably has already.
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
I wrote a small class derived from CollectionBase that uses a hashtable to map keys to indexes into the list.
The benefit is fast keyed access coupled with an array that can be bound to a combo, list, or grid.
This statement was never false.
|
|
|
|
|
using System.Collections;

public class HybridList : System.Collections.CollectionBase
{
    Hashtable _indicesTable = new Hashtable();

    public int IndexOf(object key)
    {
        if (_indicesTable.ContainsKey(key))
            return (int)_indicesTable[key];
        else
            return -1;
    }

    public int Add(object key, object val)
    {
        int index = List.Add(new DictionaryEntry(key, val));
        _indicesTable[key] = index;
        return index;
    }

    public void Remove(object key)
    {
        if (_indicesTable.ContainsKey(key))
        {
            List.RemoveAt((int)_indicesTable[key]);
            _indicesTable.Remove(key);
            // Reindex all remaining entries - an O(n) operation.
            int i = 0;
            foreach (DictionaryEntry entry in List)
                _indicesTable[entry.Key] = i++;
        }
    }

    public DictionaryEntry this[int index]
    {
        get { return (DictionaryEntry)List[index]; }
    }

    public object this[object key]
    {
        get
        {
            if (_indicesTable.ContainsKey(key))
                return ((DictionaryEntry)List[(int)_indicesTable[key]]).Value;
            else
                return null;
        }
    }

    public ICollection AllKeys
    {
        get { return _indicesTable.Keys; }
    }

    protected override void OnClear()
    {
        base.OnClear();
        _indicesTable.Clear();
    }
}
This statement was never false.
|
|
|
|
|
Thanks for the source code. The Remove operation makes me a little nervous in that it appears to be an O(n) operation, and that's what I'm trying to get away from. However, if items are rarely removed from the collection, then this looks like an interesting approach.
|
|
|
|
|
Yeah, it's definitely not perfect. That remove operation was a hasty addition too, but I rarely use it. Having the indices in the table, and having them change upon removal of one, demands it. It might even be worse than O(n) - maybe even O(n²).
This statement was never false.
|
|
|
|
|
Sounds like you're describing the good old hash table.
Steve
|
|
|
|
|
Ternary Search Trees[^] are an interesting solution. Boost's Spirit library uses them in its symbol tables.
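For anyone curious what that looks like, here is a minimal ternary search tree sketch (in Python for brevity; Spirit's actual implementation is in C++ and differs in detail). Each node holds one character and three children: less-than, equal (advance to the next character), and greater-than.

```python
# Minimal ternary search tree: insert and exact-match lookup.
# A value is attached to the node where a key's last character lands.

class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "value", "has_value")
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.value = None
        self.has_value = False

def tst_insert(node, key, value, i=0):
    ch = key[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, key, value, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, key, value, i)
    elif i + 1 < len(key):
        node.eq = tst_insert(node.eq, key, value, i + 1)
    else:
        node.value, node.has_value = value, True
    return node

def tst_get(node, key, i=0):
    while node is not None:
        ch = key[i]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        elif i + 1 < len(key):
            node = node.eq
            i += 1
        else:
            return node.value if node.has_value else None
    return None
```

Lookups cost roughly one character comparison per node visited, so they behave like a cross between a binary search tree and a trie, which is why they work well for symbol tables.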
Steve
|
|
|
|
|
Can someone point me to an algorithm or example code for finding the point of intersection of two lines?
Thanks.
|
|
|
|
|
Can you tell me the equation of the line? Because a line can be represented in several ways:
1. y = mx + c
2. (y - y1)/(y2 - y1) = (x - x1)/(x2 - x1)
3. and so on.
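Assuming each line is given by two points on it, one common approach uses determinants (Cramer's rule). A sketch in Python (the formula ports directly to C; the 1e-9 parallel-test tolerance is an illustrative choice):

```python
# Intersection of two infinite lines, each given by two points.
# Returns None if the lines are parallel (or coincident).

def line_intersection(p1, p2, p3, p4):
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = p3, p4
    # Denominator of Cramer's rule; zero means parallel lines.
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    a = x1 * y2 - y1 * x2   # cross product of the first point pair
    b = x3 * y4 - y3 * x4   # cross product of the second point pair
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (x, y)
```

For line *segments* you would additionally check that the result lies within both segments' bounding ranges.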
Regards,
Arun Kumar.A
|
|
|
|
|
I am a performance analyst. When profiling performance I usually end up with a bunch of numbers that I average, and this gives me a rough idea of how an algorithm performs. I occasionally end up with variations which significantly throw out the average, sometimes unfairly.
1.8925
1.8344
4.0433
1.9026
1.7559
1.7898
3.917
1.8873
2.4083
So let's say I end up with the above list of numbers. How do I weight the average so it more fairly reflects that most of the numbers are ~1.8?
|
|
|
|
|
Ray Kinsella wrote: I occasionally end up with variations which significantly throw out the average, sometimes unfairly.
Yes, outliers do that when dealing with averages. Use the median instead, or remove the outliers (may not be a good idea). You could also use square roots or logarithms to "smooth" the data.
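A quick illustration with the numbers above (Python's standard statistics module; the one-from-each-end trim is just one illustrative choice):

```python
# Mean vs. median on the timing data above: a couple of slow runs pull
# the mean up, while the median stays at the typical value.
import statistics

samples = [1.8925, 1.8344, 4.0433, 1.9026, 1.7559,
           1.7898, 3.917, 1.8873, 2.4083]

mean = statistics.mean(samples)        # ~2.38, dragged up by the outliers
median = statistics.median(samples)    # 1.8925, the typical run

# One alternative: a trimmed mean - drop the extremes before averaging.
trimmed = sorted(samples)[1:-1]        # drop the lowest and highest sample
trimmed_mean = statistics.mean(trimmed)
```

The median here lands right on a typical ~1.8-ish run, while the plain mean overstates it by about 25%.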
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
I was asked to write a multiplication table in C; the output will be something like:
1 2 3 4 5 6 7
2 4 6 8 ..
....etc
for 12, so I did it like this:
for (a = 1; a <= 12; a++)
{
    for (b = 1; b <= 12; b++)
        printf("%d ", a * b);   /* space after %d so the numbers don't run together */
    printf("\n");
}
That works, but I was wondering if there exists another solution to the same problem using only ONE loop?
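One common trick, if a single counter qualifies as "one loop": run over all N*N cells and recover the row and column with integer division and modulo (shown in Python for brevity; the same arithmetic works with `/` and `%` in C):

```python
# Single-loop multiplication table: counter i runs over all N*N cells;
# the row is i // N + 1 and the column is i % N + 1.
N = 12
line = []
for i in range(N * N):
    row, col = i // N + 1, i % N + 1
    line.append(str(row * col))
    if col == N:                 # end of a row: print it and start fresh
        print(" ".join(line))
        line = []
```

Of course the work done is still O(N²); the nesting has just been folded into arithmetic.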
|
|
|
|
|
|
Quickly looking at this, the closed-form formula is
f(n) = 2^(n-1)
so the loop would be
for (a = 1; a <= 12; a++)
{
    printf("%d\n", 1 << (a - 1));   /* 2^(a-1); note ^ in C is XOR, so shift instead */
}
F
|
|
|
|
|
Hi,
I have a number of still images containing concentric circles and I would like to be able to detect the centre point of the circles. Are there any algorithms out there that would allow me to do this?
Cheers,
Tony
|
|
|
|
|
The "Generalised Hough transform" may help.
Haven't seen the code, though.
|
|
|
|
|
So if you're not familiar (or comfortable with making yourself familiar) with complex numbers, this may be a little useless, but here is my first guess at how to do it.
You should be able to find a conformal mapping (look it up on Wikipedia; it's basically a function that takes lines to lines and circles to circles, preserving angles, if I remember correctly), and I actually think you want a specific type of conformal mapping called a Möbius transformation (also look that up on Wikipedia - it's a function of the form f(z) = (az + b)/(cz + d), where a, b, c, and d are complex numbers and ad - bc != 0). Moreover, I think the function is just 1/z (where z is a complex number), but you'd want to double-check that.
Anyway, there should be a Möbius transformation that will transform your image so that your concentric circles are mapped to parallel lines, which I would imagine would be much easier to find, especially if the concentric circles are regularly spaced. From there, you could either find the center point on your mapped image and then take its inverse under your Möbius transformation to find the original point, or you could find the lines and map those back through the Möbius transformation. Then you'd basically have explicit equations for each of the circles that you could use to find your center points.
So check out these Möbius transformations; if you've got the math background, I suspect they would make your problem much easier. A Möbius transformation followed by the basic Hough transform referenced above (not necessarily the generalised form - if you use that, you shouldn't have to do any of this MT stuff) should solve your problem, for instance.
|
|
|
|
|
Some ideas given here seem good, but they will tend to be computationally heavy. My work involves real-time image processing in many domains, and in general most algorithms that are called "transforms" are unacceptably slow (even for modern computers or DSPs). This, of course, depends on whether or not you can distribute the algorithm across various machines. In general I can't.
Anyway, without more details on your case I simply imagine white sheets of paper with concentric circles drawn on them. If this is the case then finding the center is a very fast and efficient operation.
Simply compute the mass center of all "pen" pixels. For example, if they are black then just sum the positions where you find them (keeping X and Y separate) and at the end divide the result by the number of pixels found. For example:
mass_center_x = 0;
mass_center_y = 0;
total_found = 0;
for (y = 0; y < image_size_y; y++) {
    for (x = 0; x < image_size_x; x++) {
        if (Pixel(x, y) == BLACK) {
            mass_center_x += x;
            mass_center_y += y;
            total_found++;
        }
    }
}
if (total_found > 0) {
    mass_center_x /= total_found;
    mass_center_y /= total_found;
}
At this point the "mass_center_x" and "mass_center_y" contain the coordinates of the center of the concentric circles. This algorithm is very fast because each pixel is analyzed only once, and so runs in an amount of time directly proportional to the number of pixels. Also note that Y is the outer loop so as to exploit the CPU cache in the most efficient manner.
I hope this helps,
Rilhas
-- modified at 8:28 Sunday 20th May, 2007
|
|
|
|
|
I need help implementing a genetic algorithm to find the minimum spanning tree of a graph (in Matlab, if possible).
I have a program to generate different random spanning trees, but how do I turn them into an initial population?
program to generate random spanning tree:
% To make a random spanning tree from the adjacency matrix of a graph,
% where A is the adjacency matrix

n = length(A);
Ta = sparse(n,n);
comps = [1:n];

for its = 1:(n-1),
    [ii,jj,vv] = find(A);
    r = ceil(rand(1)*length(ii));
    Ta(ii(r),jj(r)) = 1;
    comp = comps(jj(r));
    comps(comps == comps(ii(r))) = comp;
    nodes = find(comps == comp);

    % use a fresh loop variable so n (the node count) isn't clobbered
    for nd = nodes;
        nds = find(A(nd,nodes));
        A(nd,nodes(nds)) = 0;
    end

    % A(nodes,nodes) = 0;
end
Ta = Ta + Ta';
Another problem is that the above program gives an edge list, not an adjacency list, so building the initial population is not really easy.
Can somebody help me?
clem
|
|
|
|
|
|
Unfortunately, I don't want to use Prim's algorithm or Kruskal's algorithm, but a genetic algorithm.
|
|
|
|
|
Why - does it have better performance?
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|