|
If I were thinking in bitmaps/pixels this might work. But I don't have pixels, I don't have a moveto/lineto/floodfill algorithm, I only have coordinates ... a math approach is what I'm hoping for ... if one exists ... however complicated that might be ... I'll think about performance after that.
Think of this problem as an "inverse"/"reverse" convex hull problem. There you have to find the smallest polygon that encloses a scene; I need the biggest free one inside the scene (sort of, anyway).
|
|
|
|
|
Hi,
third, partial and final attempt:
assume an unknown point px,py is inside the rectangle you want;
if you were to apply the conversion:
x' = 1/(x-px)
y' = 1/(y-py)
then (px,py) would map to infinity, and you would really be looking for a convex hull.
Maybe what you should do is try a couple of (px,py) points in the neighbourhood of your point of interest.
BTW: the problem now, of course, is that your shapes all look different, so it may be more difficult to locate their extreme points.
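For what it's worth, the inversion described above is easy to sketch. Here is a purely illustrative Python version (the helper name and the eps cutoff are my own choices, not part of the original suggestion):

```python
def invert(points, px, py, eps=1e-9):
    """Map each (x, y) to (1/(x-px), 1/(y-py)), pushing the candidate
    interior point (px, py) out to infinity. Points that coincide with
    (px, py) on either axis are skipped to avoid division by zero."""
    out = []
    for x, y in points:
        dx, dy = x - px, y - py
        if abs(dx) > eps and abs(dy) > eps:
            out.append((1.0 / dx, 1.0 / dy))
    return out
```

A convex hull of the transformed points would then correspond (roughly) to the boundary of the free region around (px, py), which is the intuition behind trying several candidate points.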
|
|
|
|
|
I've tested this just now using memory-mapped bitmaps. It works, in the sense that it finds the biggest rectangle (actually all the rectangles, then I choose the fittest) ... But this walking pixel by pixel along a segment (grid cell is 1x1) ... arghh. I still hope there is a math solution to this.
|
|
|
|
|
Hmm,
I'm having trouble visualizing the problem. I'm looking at some examples of problems of this type on Wolfram|Alpha:
Pappus graph[^]
The problem in the example seems to have the same characteristics as your problem, and it has a purely mathematical solution. My conclusion is that you need some sort of graph algorithm.
Can you give a simple example using geometric terms, so I can visualize the problem better? I would like to help...
|
|
|
|
|
Okay, I have a vague idea of what you are trying to do. Here is how I would attack it.
Daniel.Sturza wrote: a) have the interior completely free (no segment can cross its boundaries - might touch them thou);
Use the 'Bentley–Ottmann algorithm':
Bentley–Ottmann algorithm[^]
Find the intersection points of the segments, dump these into a point graph (a tree graph, using an algorithm which indexes the roots and children, etc.); for each graph that does not cross a segment, add it to an 'exclusion graph'.
Next, find the max area of space occupied by each exclusion graph (interior completely royalty free):
Distance transform[^]
-or-
Shoelace formula[^]
The input would be the exclusion graphs. For each graph in the stack find the largest area and add it to an 'area graph'. Here you might need to also jump to step #b first to find the correct rotation, or merge this into one step.
Daniel.Sturza wrote: b) have his sides parallel to Ox and Oy axes;
Rotating calipers algorithm[^]
This algorithm does some rotating, so I'm not sure if it is good for your (Ox, Oy) points. What are Ox,Oy? The algorithm is good for any poly or hull.
Daniel.Sturza wrote: c) have the biggest area;
This should have been found in step a.
Daniel.Sturza wrote: d) have the center closest to a specific point (Xref,Yref);
Jump-and-Walk algorithm[^]
This is a triangulation algorithm; my guess is that it can be adapted to your needs.
Daniel.Sturza wrote: e) have a certain aspect ratio H/W.
No idea what the aspect ratio would be used for. I would be using vector graphics of some sort, so the hull algorithm you are using should handle this for you?!?!
Then again, this is all brainstorming and it might be more like 'day dreaming'. Ha, ha, ha. Hope this gives you some ideas...
Another bizarre approach might be to create a Voronoi diagram of all points, then for each segment connect the points; if the segment passes through an area described by the Voronoi diagram, remove that area from the diagram. After all points have been connected you will be left with the areas excluded by the segments, and then you can find the max rectangle.
Voronoi Diagram[^]
-And-
Largest empty rectangle[^]
A Fast Algorithm for Finding Maximal Empty Rectangles for Dynamic FPGA Placement[^]
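The "largest empty rectangle" link refers to the classic formulation: given obstacle points inside a bounding box, find the biggest axis-aligned rectangle containing none of them. A brute-force Python sketch of that formulation (fine for small inputs only; the linked papers give much faster algorithms, and reducing segments to obstacle points is an extra step this skips):

```python
def largest_empty_rect(points, width, height):
    """Brute-force largest empty axis-aligned rectangle inside
    [0, width] x [0, height] avoiding every obstacle point.
    Candidate edges are obstacle coordinates plus the box borders."""
    xs = sorted({0, width} | {p[0] for p in points})
    ys = sorted({0, height} | {p[1] for p in points})
    best = (0, None)
    for i, x1 in enumerate(xs):
        for x2 in xs[i + 1:]:
            for j, y1 in enumerate(ys):
                for y2 in ys[j + 1:]:
                    # rectangle (x1,y1)-(x2,y2) is valid only if no
                    # obstacle lies strictly inside it
                    if not any(x1 < px < x2 and y1 < py < y2
                               for px, py in points):
                        area = (x2 - x1) * (y2 - y1)
                        if area > best[0]:
                            best = (area, (x1, y1, x2, y2))
    return best
```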
~TheArch
modified on Monday, December 28, 2009 3:31 PM
|
|
|
|
|
How to visualize my problem ... hmm ... if you have some CAD knowledge, you have definitely bumped into this problem: you have a limited work space crammed with objects, and you have to find a location for a new one (kind of). My problem is that I can't touch anything that is already there. It's not an optimization problem (how can I rearrange my objects, in placement or in insertion order, so the next one will fit) ... it's simply: where can I fit it?
The particularity is that the new object is not yet fully defined: it will grow to the maximum extent available to it (hence the aspect ratio).
Or think of something simpler: you suddenly have an idea and want to write it down, but you only have an already-written piece of paper available. You don't know beforehand how much you need to write (you'll figure that out along the way); you can write bigger or smaller, but you need to write down the whole idea. For any human this is as easy as "hello" ... we don't even think about how to do it, but we do it. I am trying to put this in code ...
For now I have managed to break all my objects into pieces defined only by convex polygons (some graphs involved here); I still have to decide what to do with those single floating segments ...
But you gave me a few ideas ... I'm optimistic ...
I'll be checking in when I have some results.
Edit : Ox and Oy -> X and Y axes
|
|
|
|
|
Why are you using line segments and these complex conditions? A space partition should do the trick - at least approximately.
|
|
|
|
|
My simple approach would be a greedy algorithm finding y-monotone polygons, based on a scanline/sweepline technique. This should meet conditions a) and b), while conditions c), d), and e) are approximately evaluated in a second stage of the algorithm with the help of a score/ranking based on Delaunay triangulations (q.v. the Voronoi hint).
Of course it's a greedy algorithm, but it should perform well, considering that finding the best solution is probably NP-hard.
|
|
|
|
|
Here is how I would do it:
1) Add line segments to the extents of your workspace
2) Find the intersections of all line segments, and put them into a directed graph as vertices
3) For each line segment, add the endpoint of that line segment to the graph (if not already there)
4) Add bidirectional edges to the graph to indicate line segments that only connect adjacent intersections or endpoints
5) From each line segment endpoint, draw a fictitious segment horizontally until you hit another segment. Add the waypoint at the intersection, as well as a directed edge from the segment endpoint to the intersection, followed by a directed edge from that intersection to the next adjacent point on the line going in both directions.
6) Repeat step #5 in the vertical direction
Whew! OK, now the setup is complete. All you have to do now is the actual work:
1) Find all polygons in the graph. Do this by traversing the nodes, repeatedly taking the adjacent edge in the same direction (i.e., at each vertex, find the next edge that is clockwise from the one you arrived on, and take it to the next vertex). If you arrive at a vertex with no outgoing edges, there is no polygon that meets your starting criteria, so move on. Otherwise, push the resulting polygon onto a queue/stack.
2) For each polygon in your queue/stack, compute the largest inscribed rectangle. If this rectangle has a larger area than your current largest, swap them out. If they are the same, test your d criteria, followed by your e criteria, to determine which one to use.
Now you have the largest rectangle that meets your criteria (although I don't even want to imagine the asymptotic performance of this algorithm )
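The setup's step 2 hinges on a segment-intersection test. A minimal Python sketch of the standard parametric form, assuming general position (no collinear overlapping segments), might look like this:

```python
def seg_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4,
    or None if they don't cross. Assumes general position
    (parallel and collinear pairs simply return None)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:  # parallel or collinear
        return None
    # parameters along each segment; both must lie in [0, 1]
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

Running this pairwise over all segments is the naive O(n^2) version of what Bentley-Ottmann (mentioned earlier in the thread) does in O((n + k) log n).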
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
I am engaged in research and development of a new article in a series of articles on algorithms. I plan to use the Big 'O' Algorithm Analyzer for .NET featured in my article here at The Code Project.
Big O Algorithm Analyzer for .NET
I recently read the MSDN article by Dr. James McCaffrey on the QICT pairwise testing utility he wrote for .NET. In his article he encourages the reader to try different algorithms for weighting the pairs found by the algorithms used in the tool.
I thought this would be a great application to put under test of the Big ‘O’ tool to demonstrate how to instrument code in an application and additionally show some methods of improving the algorithms for given data sets.
Dr. James’ article can be found here:
Pairwise Testing with QICT
I would like to ask any one who would like to write a different algorithm as described in the article to post it here or discuss different approaches which seem like they would be an asset to the existing code base.
I will also demonstrate the difference between a good Knuth shuffle and a bad one; consider the Perl code below:
Bad Knuth:
sub shuffle {
    my @array = @_;
    my $length = scalar(@array);
    for (my $i = 0; $i < $length; $i++) {
        my $r = int(rand() * $length);
        @array[$i, $r] = @array[$r, $i];
    }
    return @array;
}
Good Knuth:
sub shuffle {
    my @array = @_;
    my $length = scalar(@array);
    for (my $i = $length - 1; $i > 0; $i--) {
        my $r = int(rand() * ($i + 1));
        @array[$i, $r] = @array[$r, $i];
    }
    return @array;
}
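To show concretely what "bad" means here, the following Python sketch (my own illustration, not taken from the QICT code) counts how unevenly each variant distributes the 6 permutations of a 3-element array:

```python
import random
from collections import Counter

def bad_shuffle(a):
    """Biased: each step swaps with an index drawn from the WHOLE
    array, giving n**n equally likely execution paths mapped onto n!
    permutations; n**n is not divisible by n!, so some permutations
    must come up more often than others."""
    a = list(a)
    n = len(a)
    for i in range(n):
        r = random.randrange(n)
        a[i], a[r] = a[r], a[i]
    return tuple(a)

def good_shuffle(a):
    """Fisher-Yates / Knuth: swap position i with a uniformly random
    index in [0, i], giving exactly n! equally likely outcomes."""
    a = list(a)
    for i in range(len(a) - 1, 0, -1):
        r = random.randrange(i + 1)
        a[i], a[r] = a[r], a[i]
    return tuple(a)

def spread(shuffle, trials=60000):
    """Ratio of most-frequent to least-frequent permutation of
    [0, 1, 2] over many trials; 1.0 means perfectly uniform."""
    counts = Counter(shuffle([0, 1, 2]) for _ in range(trials))
    return max(counts.values()) / min(counts.values())
```

For n = 3 the bad version has 27 equally likely paths into 6 permutations; 27 is not divisible by 6, so some permutations occur 5/27 of the time and others 4/27, a max/min ratio of about 1.25, versus about 1.0 for the Fisher-Yates version.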
|
|
|
|
|
Okay, I'm stumped. Would anyone like to tell me why this is a bad question?
I'm not trolling my article.
I am trying to get ideas to better solve the sorting/complexity algorithm as posed in MSDN.
I got down-voted twice on this post. I'd like to know why, so I don't make the same mistake in the future.
|
|
|
|
|
As an independent observer, i.e. I know nothing about the QICT Knuth shuffle algorithm, I am as stumped as you are. Your previous post seems quite reasonable to me, so I can only assume the downvoting was by one or more of the many people who seem to stalk the forums downvoting for no reason. I have noticed certain people automatically downvote articles without giving a reason; it's just something we have to live with. I think a question was raised as to tracking these types but Chris commented that it would be too much effort for little gain.
I suggest you go ahead with your article and see what happens.
[edit]I gave you a 5 to try and even the score, let's hope someone else follows suit.[/edit]
|
|
|
|
|
Thanks...
Maybe down-voting should require a comment, with the option to make it anonymous to protect the innocent... I will write it up as a suggestion.
I will also download the code and place the exact algorithm in the thread to better illustrate the problem.
|
|
|
|
|
I don't see why it should be anonymous. If you think something is not up to standard then you should have the courage to put your name and comments behind it.
|
|
|
|
|
Lately I've been rewriting some graph algorithms to operate on adjacency-matrix graphs where the matrix is implemented as an array of ints/longs (for speed reasons).
Algorithms such as determining the size of a strongly connected subgraph (somewhat like DFS, except the visiting order is different) and perfect graph coloring in chordal graphs.
They demonstrate uses of many bit hacks, such as x &= (x - 1).
Would anyone like to see an article about this?
(and is this the right place to ask this question?)
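For context, the x &= (x - 1) trick clears the lowest set bit; on an adjacency row stored as a bitmask it enumerates neighbors without scanning every column. A minimal Python sketch of the idea (Python ints stand in for the int/long rows; this is my illustration, not the author's actual code):

```python
def neighbors(row):
    """Yield neighbor indices from an adjacency-row bitmask by
    repeatedly isolating and then clearing the lowest set bit."""
    while row:
        low = row & -row            # isolate lowest set bit
        yield low.bit_length() - 1  # its index (0-based)
        row &= row - 1              # clear it: the x &= (x-1) hack
```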
|
|
|
|
|
harold aptroot wrote: Would anyone like to see an article about this?
(and is this the right place to ask this question?)
yes, sure. and no (Article Writing, Lounge).
|
|
|
|
|
Thanks Luc, next time I'll ask it in the Lounge
|
|
|
|
|
But he's asking if anyone wants an article about this particular algorithm. Not everyone wastes time in The Lounge.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
And how about the bitwise version of Algorithm X? Is that interesting enough?
|
|
|
|
|
A float has only 24 bits that need to be divided, and one of them is "usually 1", so it requires fewer iterations than a full 32-bit number (or even 64, which is a disaster).
Doubles shouldn't really be faster than ints though (edit: OK, yes, it is possible in some cases, but it depends, see below); something else is going on here. Note that the time a division takes also depends on the magnitude of the result for ints, and on the roundness of the divisor for floats. This makes it harder to accurately compare the speed of integer division and floating-point division.
As if that isn't bad enough, there is also SSE vs. the good ol' FPU, and with Java you really have no clue which one you're getting, so I suggest you use C++ with assembly subroutines to take full control of what you're testing.
Hope this helps
edit2: forgot source, Agner Fog's optimization resources[^]
modified on Monday, December 21, 2009 11:36 AM
|
|
|
|
|
Everything - ok maybe not, but pretty much.
Dividing two doubles may take as many as 21 cycles, and as few as 6. Dividing two 32-bit ints may take 18 to 42 cycles, but it's usually at the smaller end of that range. Whether it is on average better to divide doubles or ints depends on what kinds of values you have.
And doing any kind of math on denormals/NaNs/infinities can incur a penalty.
Also, for 64-bit numbers idiv is slower than div (by up to 30%!)
|
|
|
|
|
On most processors (at least the high-end ones) the speed I have seen for a series of numeric operations would be float >= double > int > long; there are several reasons for this:
1.
floating-point units are more highly pipelined, which results in a greater latency but a better throughput, so for a series of FP operations, the FPU would be faster than the same operations handled by integers.
2.
integer divisions compute only a few (say 3 or 4) bits per cycle, so long could take twice as long as int.
similar for real divisions, but with more bits per cycle.
3.
float operations leave the integer functional units available for other things, such as array indexing, iteration counting, etc., which are almost always present in code too; as those units can operate in parallel, having more of them working for you should improve overall speed.
BTW: division (and square root) is inherently sequential, much more so than addition and multiplication.
|
|
|
|
|
Floating-point division is not particularly pipelined either, though; the difference between the latency and the reciprocal throughput of FDIV is only 1 cycle (on a Core 2).
|
|
|
|
|
Like the FlowLayoutPanel, where controls automatically get laid out according to their sizes and sequence, I have a requirement for a fluid layout.
In a rectangle or area (say 500w x 100h), there are some fixed small controls. Let's say a rectangle control of (100w x 30h) is fixed at point (top 0, left 150). Now I need to fill this bigger rectangle with small shaped controls of constant area but variable widths. Height can be the maximum wherever possible.
The controls not adjacent to a fixed control will be rectangles, so no problems there. The controls adjacent to fixed controls will need to take a polygon shape of variable height to fit in the remaining space.
Does any logic already exist? Any approaches?
And yes, this is for winforms.
|
|
|
|
|
Assume that there is a vertical cylinder with diameters as follows:
1 cm at the base
3 cm at a height of 2 cm
2 cm at a height of 4 cm
1 cm at a height of 7 cm
etc., etc.
Now, what would be the heights of the different colored fluid columns stacked one over the other (assuming there are no other forces)? The columns are on top of each other and movable by drag and drop; hence the calculation of each column depends on the column below it.
It's not necessary to consider volumetric analysis, because the solution could be done in 2D with areas instead.
The function should take as parameters the volume/area of each column and the level at which it starts.
This is required for a graphical application where the movement of a liquid is shown in 2D.
I am looking for a faster alternative to "for" or "while" loops. It needs to be very fast so that fluid motion can be achieved. The current algorithm is damn slow.
my current solution looks like:
while (volumeToAdjust > 0)
{
    volumeCovered = Min(volumeToAdjust, volumePossibleInCurrentWidth)
    if (volumeCovered == volumePossibleInCurrentWidth)
        height += heightForCurrentWidth
    else
        height += CalculateHeightForThisVolumeInCurrentWidth
    volumeToAdjust -= volumeCovered
}
This solution works fine for a single column of fluid. With multiple columns and multiple widths, the program slows down. Any better ideas?
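One faster alternative to the band-by-band loop: precompute the cumulative area at each width breakpoint once, then answer every height-for-volume query with a single binary search instead of a walk. A rough Python sketch under simplifying assumptions (constant width within each band rather than the tapered profile; all names are hypothetical):

```python
from bisect import bisect_right

def build_profile(bands):
    """bands: list of (band_height, width) from the bottom up, width
    assumed constant within each band (a simplification of the
    tapered-cylinder case). Returns cumulative (area, height)
    breakpoints so later lookups are O(log n)."""
    cum_area, cum_h = [0.0], [0.0]
    for h, w in bands:
        cum_area.append(cum_area[-1] + h * w)
        cum_h.append(cum_h[-1] + h)
    return cum_area, cum_h, bands

def height_for_volume(profile, v):
    """Fluid surface height for total area/volume v, via one bisect
    instead of looping over the bands."""
    cum_area, cum_h, bands = profile
    i = bisect_right(cum_area, v) - 1
    if i >= len(bands):                       # overflow: clamp to top
        return cum_h[-1]
    h, w = bands[i]
    return cum_h[i] + (v - cum_area[i]) / w   # partial fill of band i
```

For stacked columns, each column's starting level is the previous column's surface height, so one profile serves all columns and each drag-and-drop update costs one bisect per column instead of a full scan.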
|
|
|
|
|