
Comments by Jacob F. W. (Top 4 by date)

Jacob F. W. 21-Feb-12 11:12am
Deleted
Whoa, my apologies, guys. I had edited the tip, but it appears I didn't save it properly. I'll try to answer some of these questions.

1) This is just a tip. It was not meant to be an end-all, be-all article about performance measurement. I agree that, in that case, my title may have been misleading. For that, I'm sorry.

2) One thing I did mention in my edit that didn't make it was that I was aware of Stopwatch, but I used DateTime because I figured it's what amateur programmers would be more familiar with.

3) The 10,000,000 standard is something I arrived at from my own research into how many times I have to loop a function to get an accurate measurement, and I've essentially confirmed it in my own trials over the years.

4) My original idea was how to measure functions that run so fast that you have no easy way of measuring their speed. As an example, let's say I wrote a function to calculate the square root of an integer, and I wanted to compare it to the standard square root function in the Math library, which works on floating-point numbers. Both of these functions run very fast, so you would have to loop them thousands, if not millions, of times to get an accurate reading (there's a rough sketch of this kind of trial after this list).
ZC123456, I stated in the tip that if the function was slow enough, then you did not have to run it 10,000,000 times. The key was making sure that the duration of the trial was greater than 1 second, and that you got consistently accurate times after several trials.

5) ZC123456, you mention that sometimes you don't know which function is causing the problem. I agree, but those were not the cases I was concerned with. I was thinking more along the lines of comparing the speeds of two similar but slightly different functions, like in my square root example above: for instance, you make a modification to your function and want to see whether it speeds it up or slows it down.

6) No one directly mentioned this, but I'll go ahead and say it anyway: I was NOT talking about measuring performance in Big O Notation.
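
To make points 2) through 5) a little more concrete, here's roughly the shape of the trial I had in mind. The IntSqrt below is just a throwaway stand-in, not the code from the tip, and I'm showing Stopwatch because it's the more precise option, even though DateTime works fine once the trial runs longer than a second:

using System;
using System.Diagnostics;

class TimingDemo
{
    // Stand-in integer square root (Newton's method); substitute whatever function you're testing.
    static int IntSqrt(int n)
    {
        if (n < 2) return n;
        int x = n, y = (x + 1) / 2;
        while (y < x) { x = y; y = (x + n / x) / 2; }
        return x;
    }

    static void Main()
    {
        const int Iterations = 10000000;   // the 10,000,000 figure from point 3)
        long sink = 0;                     // consume the results so the loop isn't optimized away

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
            sink += IntSqrt(i & 0xFFFF);
        sw.Stop();
        Console.WriteLine("IntSqrt:   {0:F3} s", sw.Elapsed.TotalSeconds);

        double dsink = 0;
        sw.Restart();
        for (int i = 0; i < Iterations; i++)
            dsink += Math.Sqrt(i & 0xFFFF);
        sw.Stop();
        Console.WriteLine("Math.Sqrt: {0:F3} s", sw.Elapsed.TotalSeconds);

        // If either time comes in under about 1 second, raise Iterations and rerun;
        // repeat the whole trial a few times until the readings are consistent.
        Console.WriteLine("{0} {1}", sink, dsink);   // keep both sinks live
    }
}

The same harness covers point 5): time the function before your modification, time it after, and compare the two readings.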

I'm sorry, guys. I admit that I did not give enough information about my intentions, or provide enough examples, for this to be a worthwhile tip, and I apologize for the confusion. I've been really busy the last few weeks, and I wrote this off the top of my head before I went to bed one night. I'll work on it in my spare time and try to get the edited version up next weekend when I have the chance.

Once again, please realize that this was meant to be just a tip, and not a full-fledged article. Sorry for the confusion.

Take Care
Jacob
Jacob F. W. 19-Jan-12 0:38am
Deleted
Completely agree :)

Thanks for the 5!
Jacob F. W. 19-Jan-12 0:34am
Deleted
Excellent!!! I saw several posts about CORDIC functions, but they were all either pseudo-code or the equations for them. I love it when people post actual code!

Tell me, though: do you know of any CORDIC implementations that are iterative? I try to avoid recursion whenever possible.
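
To be clear about what I mean by iterative, a rotation-mode CORDIC loop for sine/cosine would look something like this. This is only a rough floating-point sketch, not working library code; a real implementation would use fixed-point shifts, a precomputed arctangent table, and a precomputed gain constant:

using System;

static class CordicSketch
{
    // Rotation-mode CORDIC for sine/cosine using a plain loop instead of recursion.
    // The input angle must already be reduced to [-pi/2, pi/2].
    public static void SinCos(double angle, out double sin, out double cos)
    {
        const int Steps = 40;
        double x = 1.0, y = 0.0, z = angle;
        double t = 1.0;     // 2^-i
        double gain = 1.0;  // accumulated CORDIC gain, divided out at the end

        for (int i = 0; i < Steps; i++)
        {
            double d = (z >= 0.0) ? 1.0 : -1.0;
            double xNew = x - d * y * t;
            y += d * x * t;
            x = xNew;
            z -= d * Math.Atan(t);            // real code: precomputed atan(2^-i) table
            gain *= Math.Sqrt(1.0 + t * t);   // real code: a precomputed constant
            t *= 0.5;
        }

        cos = x / gain;
        sin = y / gain;
    }
}

That's the shape I'm after: a fixed number of passes through one loop, with no call stack involved.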

And thanks for the link to Dr. Dobb's: one of the best sites I've seen in a while.
Jacob F. W. 11-Jan-12 14:58pm
Deleted
The problem with Taylor series is that they converge much more slowly than I would like. I only use them at all when I have nothing else to use.
Thank you for that link to Wolfram's site. I've used it before, but I didn't know about the 'functions' subdomain. It's amazing how much stuff they have on there. I found an algorithm there for computing exp() that converges twice as fast as the Taylor series.
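
For context, the straight Taylor series I'm trying to get away from looks roughly like this (just a sketch, not the code I actually use):

using System;

static class ExpTaylor
{
    // Plain Taylor series for e^x: 1 + x + x^2/2! + x^3/3! + ...
    // Each term is the previous term times x/n, and the number of terms needed
    // for full double precision grows quickly as |x| gets larger.
    static double Exp(double x)
    {
        double sum = 1.0, term = 1.0;
        for (int n = 1; n < 200; n++)
        {
            term *= x / n;
            sum += term;
            if (Math.Abs(term) < 1e-16 * Math.Abs(sum))
                break;   // converged to roughly double precision
        }
        return sum;
    }
}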

Thanks!