I'm looking for an algorithm to track which numbers are duplicated in a two-dimensional array used to draw complex objects composed of rectangles, squares, triangles, hexagons, etc.
The constraints are:
- I can specify which (or at least how many) numbers must not be duplicated
- The vertices of the objects must contain duplicated numbers, except for those required to be unique
- Objects should be placed symmetrically
In the example you can see the duplicated values 0, 10, 15, 18, 19, 25*, 35, 45, 50 and the three individual numbers (* = shared between two objects).
The numbers 16, 27, 46 are the three numbers that I requested be unduplicated.
Of course, different combinations of objects that match the given conditions can be traced.
I am looking for a fast algorithm; my current processing times are biblical!
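Before any geometry is considered, a single pass over the grid already tells you which values are duplicated and whether the required-unique constraint holds. A minimal sketch in Python, assuming the grid is simply a list of rows of ints; `duplicate_report` and `required_unique` are hypothetical names, not from the original post:

```python
from collections import Counter

def duplicate_report(grid, required_unique):
    """Count every value in a 2-D grid in one O(rows*cols) pass.

    grid            -- list of rows (lists of ints)
    required_unique -- set of values that must appear exactly once
    Returns (sorted duplicated values, sorted constraint violations).
    """
    counts = Counter(v for row in grid for v in row)
    duplicated = sorted(v for v, n in counts.items() if n > 1)
    violations = sorted(v for v in required_unique if counts[v] > 1)
    return duplicated, violations

grid = [[0, 10, 15], [0, 16, 10], [15, 27, 46]]
dups, bad = duplicate_report(grid, {16, 27, 46})
print(dups)  # [0, 10, 15]
print(bad)   # []
```

This only validates a configuration; generating symmetric placements that satisfy it is a separate search problem, but a linear-time checker like this keeps the inner loop of that search cheap.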
I have been working with C# using the FlickrNET API in order to create a slideshow that shows images from Flickr based on a single word search term.
This has been easy enough to implement thus far, but the images shown in the slideshow are sometimes repetitive, i.e. they show ten shots of the same thing taken from different angles by a single user.
As I am a bit of a newbie to coding more generally, I was looking for general advice or pointers on the best way to diversify these images according to their other related tags. So one way this might work is if someone searched "London": it could show everything with the tag "London" but use the other tags to organize the images so they are more diverse.
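One simple approach is to group the returned photos by owner (or by a secondary tag) and interleave the groups round-robin, so consecutive slides come from different sources. A sketch of the idea in Python (your project is C#, but the technique ports directly; the dict shape and the `diversify` name are assumptions for illustration, and with FlickrNET you would group on the photo's owner/user id field instead):

```python
from collections import defaultdict
from itertools import zip_longest

def diversify(photos, key=lambda p: p["owner"]):
    """Reorder photos so consecutive items come from different owners."""
    groups = defaultdict(list)
    for p in photos:
        groups[key(p)].append(p)
    # Round-robin across the groups: one photo per owner per pass.
    out = []
    for batch in zip_longest(*groups.values()):
        out.extend(p for p in batch if p is not None)
    return out

photos = [{"owner": "a", "id": 1}, {"owner": "a", "id": 2},
          {"owner": "b", "id": 3}, {"owner": "c", "id": 4}]
print([p["id"] for p in diversify(photos)])  # [1, 3, 4, 2]
```

The same `key` function could group on a secondary tag rather than the owner, which matches your "London" example: all results carry the searched tag, but the slideshow cycles through the other tags.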
9 = 1001 in binary so the function GetNumOfBinary(9) = 2.
I know I can do it in o(n) time by converting the number to binary and examining it digit by digit.
I've been told I can use as much space as I need.
How can I do it? (It seems impossible, because I need to check every digit no matter how I do it, so it will still be o(n).)
It depends on your abstract model. With the usual model, you can't do any better than O(n) (so o(n) is not happening; or did you write a lowercase o by accident?). Obviously, on a plain old Turing machine, you're going to have to read every bit.
But this problem is in NC. You could sum n/2 pairs of bits, then n/4 pairs of 2-bit numbers, then n/8 pairs of nibbles, etc., and you're done in log n steps, with each step itself taking logarithmic time (adders, sometimes not counted), all with a polynomial number of processing elements.
Similarly, in "broadword computing" you would say that you can compute this in O(log n) broadword steps, using the same construction, but now every layer is a couple of steps (mask and add), with the addition counted as one step instead of as a circuit of depth O(log n).
Practically, on 32-bit words (but "pretending 32 is not a constant"), you can still use the same construction for an O(log n) algorithm, possibly with a multiplication trick to do several sums at once (already shown in other answers), or with lookup tables. With as much space as you need, you could cheat terribly and precompute any mapping from 32-bit integers to anything, answering in a single step. Or, if available, use the popcnt instruction.
For arrays you could use a pshufb-based trick (pshufb is awesome).
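The 32-bit mask-and-add construction described above can be written out concretely. A sketch in Python of the well-known SWAR popcount: each line halves the number of partial sums, and the final multiplication adds the four byte counts in one step (the multiplication trick mentioned earlier):

```python
def popcount32(x):
    """Count set bits in a 32-bit word via the mask-and-add tree:
    16 pairs of bits, then 8 pairs of 2-bit sums, then 4 byte sums -
    O(log n) steps for an n-bit word."""
    x &= 0xFFFFFFFF
    x = (x & 0x55555555) + ((x >> 1) & 0x55555555)   # 2-bit partial sums
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # 4-bit partial sums
    x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F)   # per-byte counts
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24     # sum the 4 bytes

print(popcount32(9))           # 2
print(popcount32(0xFFFFFFFF))  # 32
```

In C you would write the same thing on a `uint32_t` without the explicit masking to 32 bits, and a compiler targeting a CPU with popcnt will often emit the single instruction instead.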
This has nothing to do with Python, or any other language. It's a simple matter of sorting the initial values into order and then searching for the two points closest to the one entered by the user. It could be sped up by a binary chop (Google for that).
I think Richard was suggesting that you google "binary chop", not sorting.
Try searching for "python binary search closest value"; the first hit that came up for me in Google was an answer to the same homework question.
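For reference, Python's standard `bisect` module gives you the binary chop directly; the only extra work is comparing the two neighbours of the insertion point. A sketch (`closest` is a hypothetical helper name):

```python
import bisect

def closest(sorted_vals, target):
    """Return the value in sorted_vals nearest to target.
    Binary search: O(log n) per query after a one-time sort."""
    i = bisect.bisect_left(sorted_vals, target)
    if i == 0:
        return sorted_vals[0]
    if i == len(sorted_vals):
        return sorted_vals[-1]
    before, after = sorted_vals[i - 1], sorted_vals[i]
    return before if target - before <= after - target else after

vals = sorted([4.2, 1.0, 7.5, 3.3])  # sort once, query many times
print(closest(vals, 3.9))  # 4.2
```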
Excuse me, first posted in the lounge, but Bill suggested I post here:
I have a range of values (voltage) over time (thousands of minutes, one value per minute) that I am trying to chart. Determining the range of my Y axis is quite a problem for me. If I take the minimum and maximum and use those as the axis limits, one or two zero values result in all the others being scrunched up at the top of the chart. If I remove the zeroes it looks much better, and for a chart they aren't very important; I'll give all the real values in a tabular report.
What I would like to do is determine the average height of the band of data points, i.e. the space between the moving average of the low points and that of the high points. I figure that to do that I would need a median series, so I could determine a smoothed series of points above and below the median, and make my Y axis 's' higher and 's' lower than those.
How do people normally do this?