|
Great link. I seldom see a regional Wikipedia page offering more or better information than the English one does.
BTW: the work by Myers is what Nick Butler[^] started from when creating his entry for the lean-and-mean competition[^] two years ago. However, IMO it is overkill for what PIEBALD is aiming at.
|
|
|
|
|
and a great algorithmician
Well, from the little info provided by PIEBALD, it is hard to figure out what he needs.
|
|
|
|
|
YvesDaoust wrote: a great algorithmician
YvesDaoust wrote: it is hard to figure out what he needs
he wants the codez.
|
|
|
|
|
It shouldn't be a big deal to find the source code of a good diff utility.
|
|
|
|
|
I'm not looking for a diff utility; as stated in my post, I'm just looking for a more efficient distance calculation.
|
|
|
|
|
Hey! I'm standing right here, I can hear you!
|
|
|
|
|
I'll give it a look later. Thanks.
|
|
|
|
|
You can also have a look at
W. J. Masek and M. S. Paterson, "A faster algorithm computing string edit distances," Journal of Computer and System Sciences, vol. 20, no. 1, pp. 18-31, February 1980.
It runs in O(n^2/log n) time.
There are faster algorithms for approximate Levenshtein distance computation.
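For context, here is a minimal sketch (my own, not taken from either paper) of the classic O(n*m) dynamic-programming computation that these results improve on, keeping only two rows of the table in memory:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, keeping only two rows."""
    if len(a) < len(b):
        a, b = b, a                      # iterate over the longer string in the outer loop
    previous = list(range(len(b) + 1))   # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        current = [i]                    # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,              # deletion
                current[j - 1] + 1,           # insertion
                previous[j - 1] + (ca != cb), # substitution (free on a match)
            ))
        previous = current
    return previous[-1]

print(levenshtein("kitten", "sitting"))  # prints 3
```

It is exact, and it also keeps the memory footprint at O(min(n, m)).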
|
|
|
|
|
YvesDaoust wrote: for approximate Levenshtein distance computation
Approximate isn't good enough.
|
|
|
|
|
Hi, I'm a researcher in computer vision systems (an electronics engineer by profession) designing a system capable of outperforming the current state-of-the-art vision system. OpenCV 2.2 did not impress me; vision by machines seems to lag behind the simplest animal you can think of (like a cat). I think computers are powerful enough to handle vision nearly as well as humans. Why are the state-of-the-art vision systems so task-specific and not as robust as they should be? Any suggestions?
|
|
|
|
|
Let's see your system and then we can judge.
|
|
|
|
|
Well, first I have to deal with patent issues, plus I'm writing a journal paper on it. I have written a proprietary vision library and will be ready to show my system to the world when all the legal issues are done and I finalise the design. These legal issues make innovation very difficult.
|
|
|
|
|
BCDXBOX360 wrote: I'm designing a system capable of outperforming the current state-of-the-art vision system.
BCDXBOX360 wrote: These legal issues make innovation very difficult.
You seem to have two conflicting statements here.
|
|
|
|
|
Richard MacCutchan wrote: You seem to have two conflicting statements here.
Maybe I should have written that the legal process of getting patents and other rights to an invention discourages innovation but does not make it impossible.
|
|
|
|
|
I rather meant that, having claimed that you were going to create a state-of-the-art system that would beat anything currently available, you are now saying that you cannot do it because of the difficulty of getting a patent. That sounds like an excuse, not a reason.
|
|
|
|
|
Richard MacCutchan wrote: I rather meant that, having claimed that you were going to create a state-of-the-art system that would beat anything currently available, you are now saying that you cannot do it because of the difficulty of getting a patent. That sounds like an excuse, not a reason.
Okay, let's forget about the patent issues for now. I was initially worried about ideas being stolen, but right now, as I write this, I'm sitting in front of a laptop with a vision library (designed and coded by me) capable of outperforming the current state-of-the-art vision systems. (I'm just optimizing the library and doing some final touches.)
|
|
|
|
|
BCDXBOX360 wrote: I'm sitting in front of a laptop with a vision library (designed and coded by me) capable of outperforming the current state-of-the-art vision systems.
In that case I'll go back to my first comment: "Let us see it in action and then we can judge."
|
|
|
|
|
Richard MacCutchan wrote: In that case I'll go back to my first comment: "Let us see it in action and then we can judge."
Okay, I will make a video presentation as soon as I finish optimizing the library.
|
|
|
|
|
BCDXBOX360 wrote: Why are the state-of-the-art vision systems so task-specific and not as robust as they should be?
Looks like you haven't even started your prestigious project.
You would have seen that with "real" items there are no perfect matches to stored representations. Any animal is capable of recognizing a tree from the data its eyes send to its brain. Write software that will detect that tree in a bitmap, and then, in a picture of the same tree taken from a different place, recognize that it's the same tree...
|
|
|
|
|
I was tempted to say that but thought I would give the OP the benefit of the doubt.
|
|
|
|
|
Yup, image recognition is still half good DSP and half black magic.
|
|
|
|
|
Well, I wanted other views from the CodeProject community on the limitations of computer/machine vision algorithms. I have been researching current developments in vision systems for two years now and have been iteratively refining my design over time based on new and promising heuristics of vision.
|
|
|
|
|
Bernhard Hiller wrote: Write software that will detect that tree in a bitmap, and then, in a picture of the same tree taken from a different place, recognize that it's the same tree...
It was found that neurons called view-tuned units exist in animal/human brains that encode only one view of a given object (in this case, a tree), and these feed into a view-invariant unit. The principal design criterion for my vision system is based on that same principle, but the secret is to encode those views in a time- and space- (memory-) efficient algorithm, similar to the one in
S. Hinterstoisser, V. Lepetit, S. Ilic, P. Fua, and N. Navab, "Dominant orientation templates for real-time detection of texture-less objects," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010.
They used different views of the same object encoded in a very compact and efficient way; their method is limited to texture-less objects, but it stays efficient even for a very large database of objects.
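As a rough, hypothetical sketch of the general idea behind such orientation templates (not the authors' actual method): quantize the dominant gradient orientation of each cell of a patch into one of 8 bins, store that bin as a single set bit, and score a template against an image region by counting cells whose orientation bits overlap, which reduces matching to cheap bitwise operations.

```python
import numpy as np

def orientation_bitmask(patch, cell=8, bins=8):
    """Encode the dominant gradient orientation of each cell as one set bit of a byte."""
    gy, gx = np.gradient(patch.astype(float))
    angle = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi), sign ignored
    weight = np.hypot(gx, gy)                   # gradient magnitude
    h, w = patch.shape
    mask = np.zeros((h // cell, w // cell), dtype=np.uint8)
    for i in range(h // cell):
        for j in range(w // cell):
            a = angle[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            m = weight[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            mask[i, j] = np.uint8(1 << int(np.argmax(hist)))  # dominant bin as a set bit
    return mask

def template_score(template_mask, region_mask):
    """Count cells whose dominant orientations agree (bitwise AND is non-zero)."""
    return int(np.count_nonzero(template_mask & region_mask))
```

Each view of an object becomes one small bitmask, so storing many views per object stays cheap in memory.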
|
|
|
|
|
This is in response to all your posts.
I gather that you haven't started or are in the early stages of your project. I'm also working on some vision recognition stuff, but I'm pretty far along and I can tell you you'll find a lot more complications than you realize as you go. That's the reason why a lot of systems are domain-specific: it allows them to take advantage of certain known facts and "cheat", so to speak, since no one has created a general-purpose system yet. In addition to the difficulty that one poster already mentioned, here are just a few of the other things you need to consider:
1) Defining the edges of objects: Most objects in the real world will have areas where the edges are blurred rather than sharp color changes. Look up Canny edge detection and it will explain some of this stuff (see the sketch after this list).
2) Recognizing that 2 areas are part of the same object: Consider a cat with black and white patches. How is a vision system supposed to know that 2 areas with radically different colors are part of the same object?
3) Depth perception: If you use 2 cameras, similar to our 2 eyes, you can match 2 objects and then compare the parallax shift. However, this only works at certain distances. Our brains probably only use this at short distances; several other methods are used at long distances, where the parallax shift isn't large enough to judge.
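As a minimal illustration of point 1, assuming OpenCV's Python bindings and a hypothetical input file name, Canny finds edges from gradient magnitude with hysteresis thresholding, and the two thresholds matter a lot when edges are soft:

```python
import cv2

# Hypothetical input file; any image that can be loaded as grayscale works.
image = cv2.imread("cat.jpg", cv2.IMREAD_GRAYSCALE)

# Smooth first so soft, blurred edges are not drowned out by noise, then apply
# hysteresis thresholds: gradients above 150 are strong edges, gradients between
# 50 and 150 are kept only if they connect to a strong edge.
blurred = cv2.GaussianBlur(image, (5, 5), 1.4)
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("cat_edges.png", edges)
```

Even so, on a real photograph you will see broken contours and spurious edges, which is exactly the problem described above.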
Also, why are you worried about patents at this stage? I doubt you are going to get sued for simply experimenting with something. If your system does end up working and you want to commercialize it, then buy/license the rights from the existing patent holders that are in your way. In addition, you may find your idea changes a lot as you work on it and run into difficulties; it did with me.
|
|
|
|
|
mikemarquard wrote: 1) Defining the edges of objects: Most objects in the real world will have areas where the edges are blurred rather than sharp color changes. Look up Canny edge detection and it will explain some of this stuff.
2) Recognizing that 2 areas are part of the same object: Consider a cat with black and white patches. How is a vision system supposed to know that 2 areas with radically different colors are part of the same object?
3) Depth perception: If you use 2 cameras, similar to our 2 eyes, you can match 2 objects and then compare the parallax shift. However, this only works at certain distances. Our brains probably only use this at short distances; several other methods are used at long distances, where the parallax shift isn't large enough to judge.
1) I would agree that my ideas will change in time, because they already have, but for the better. At first I started off trying edge detection methods, but later realised that edge detection is not necessary; descriptors such as SIFT, SURF, DOT, HOG and many more use orientation and not contours. This is supported by biological vision in simple and complex cells, and my system follows this trend. Orientation is not affected by blurring and is thus more robust and descriptive (a small sketch of this appears at the end of this post).
2) My system uses local image patches and a part-based recognition infrastructure without segmentation. Since segmentation is a by-product of recognition, the vision system is not supposed to segment out scenes or potential objects before recognizing them.
3) My system is not currently designed to use stereo cameras; it uses a single camera and does not need depth or a 3D representation to aid recognition.
My project has evolved in a real sense, and I'm using my own vision library to implement the system; I have figured out how to encode image data in an efficient and robust manner for building a generic object recognition system. How do I know that it will work? Well, I have been progressively testing simple building blocks of the system, and now I'm certain that it will work when the whole system is put together. I am optimizing my vision library for the final implementation, with probably months remaining before completion.
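A tiny sketch of the claim in point 1, assuming OpenCV's Python bindings and a hypothetical test patch: a magnitude-weighted histogram of gradient orientations over a patch changes far less under blurring than a raw edge map does.

```python
import cv2
import numpy as np

def orientation_histogram(patch, bins=9):
    """Magnitude-weighted histogram of gradient orientations over a patch."""
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    angle = np.mod(np.arctan2(gy, gx), np.pi)            # orientations in [0, pi)
    hist, _ = np.histogram(angle, bins=bins, range=(0, np.pi), weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-9)                    # normalise so only the shape matters

patch = cv2.imread("patch.png", cv2.IMREAD_GRAYSCALE)    # hypothetical test patch
h_sharp = orientation_histogram(patch)
h_blurred = orientation_histogram(cv2.GaussianBlur(patch, (9, 9), 2.0))
print(np.abs(h_sharp - h_blurred).sum())                 # small value: orientations survive blurring
```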
|
|
|
|