|
That's 'non-sentient' you're thinking of.
Will Rogers never met me.
|
It better remain that way. Otherwise ...
What is the answer to this question? (asked in Kannada, Tamil, and Hindi)
...
-- modified 31-Mar-15 21:46pm.
|
This is not a question..
"My heart cannot bear it - when I think of these debased men" (a Tamil verse by Subramania Bharati)
|
Duncan Edwards Jones wrote: but is there any reference for what the range is typical of / acceptable for real-world applications?
This is totally subjective. What one person thinks is acceptable could be very much unacceptable to someone else.
One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
If it's not broken, fix it until it is
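As a toy illustration of the ratio Kevin describes, here is a small Python sketch; the work-log data and the `fix_to_feature_ratio` helper are invented for the example, not any real tracker's API:

```python
# Hypothetical illustration: estimate the bug-fix vs. new-feature time split
# from a list of (category, hours) work-log entries. All data is invented.

def fix_to_feature_ratio(entries):
    """Return hours spent on 'bug' work divided by hours on 'feature' work."""
    bug_hours = sum(h for kind, h in entries if kind == "bug")
    feature_hours = sum(h for kind, h in entries if kind == "feature")
    if feature_hours == 0:
        return float("inf")  # all time went to firefighting
    return bug_hours / feature_hours

log = [("bug", 6), ("feature", 10), ("bug", 2), ("feature", 14)]
print(fix_to_feature_ratio(log))  # 8 / 24, i.e. one bug-fix hour per three feature hours
```

A team could feed this from its issue tracker's time entries; the point is only that the ratio is trivially computable once the hours are categorized.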
|
Kevin Marois wrote: One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
#SupportHeForShe If your actions inspire others to dream more, learn more, do more and become more, you are a leader.-John Q. Adams
You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering!-Wernher von Braun
Only 2 things are infinite, the universe and human stupidity, and I'm not sure about the former.-Albert Einstein
|
Don't you mean: How much time *should* you spend on bug fixes versus new features?
|
Kevin Marois wrote: One good measure of code quality is the level of defects.
I don't quite agree: the level of defects measures, first and foremost, the quality of your QA. The quality of your code is only a secondary factor. You won't spend much time on fixes if you're not aware of the bugs in the first place.
Kevin Marois wrote: How much time do you spend on bug fixes versus new features?
Comparing the time spent on fixes to the time spent on new features is like comparing plums to peaches: they may be similar at the core, but it's a different flavor! The skill needed to find and fix errors is quite different from the one required for designing a new program or function.
I'll grant you, though, that there is a correlation, and that it grows with the experience of the programmer.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
Stefan_Lang wrote: I don't quite agree: the level of defects measures, first and foremost, the quality of your QA. The quality of your code is only a secondary factor. You won't spend much time on fixes if you're not aware of the bugs in the first place.
I find developers (me included) are quick to blame QA for mistakes and code quality. We need to accept a reasonable share of the blame for code that has defects. I know I have a good idea of what the code I'm working on is supposed to do, and I should do my best to account for potential issues that come up. That is what we're paid for.
I find QA is best for double checking work and usability/flow of the software written.
Hogan
|
You are mistaking me - I in no way blame QA when there are many bugs. Every program has bugs, even high-quality code. So if there are no bugs (that you know of), it means QA needs to test more!
In other words: a high bug count is a sign QA is working well, and this applies to both good and bad code. Thus my argument is that a lack of bugs doesn't necessarily imply good code, just that you haven't found those bugs yet.
|
Kevin Marois wrote: One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
I disagree; neither of these is a good measure. A mudball codebase will usually cost an inordinate amount of time to add new features, even small, simple ones. Fixing bugs may actually be easier (but often causes more bugs elsewhere in the code). Since there are always more bugs than can be fixed, the time spent fixing bugs versus adding new features is often dictated by management.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
|
But when calculating WTF/minute, you must not forget to account for the possibility of extended recreational breaks after each WTF.
(points at himself)
|
If the code performs its intended function and is efficient, easy to understand, and maintainable, it's of high quality.
|
Duncan Edwards Jones wrote: but is there any reference for what the range is typical of / acceptable for real-world applications?
Most companies that I worked for would oppose sharing such information, assuming that someone had it.
I find it useless for judging entire applications; but if you take a look at sections of your code, you might find places where it goes up sharply where you might not expect it.
In terms of entire applications, the number of possible paths can go up hugely without much real impact; think of adding another add-in that saves the current document in "just another format".
You might indeed want to include the bug count, LOC, average lines per method, the number of types and namespaces, the number of FxCop violations and compiler warnings, and profiled figures such as speed and memory usage.
That also makes those numbers rather project- and team-related.
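A rough sketch of two of those numbers (LOC and average lines per method), in Python rather than C# for brevity; the `loc_and_avg_function_length` helper is hypothetical, built on the standard `ast` module:

```python
# Sketch: count non-blank LOC and average lines per function for one
# Python source string. Uses only the standard library.
import ast

def loc_and_avg_function_length(source):
    lines = [l for l in source.splitlines() if l.strip()]  # non-blank LOC
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    if not funcs:
        return len(lines), 0.0
    spans = [n.end_lineno - n.lineno + 1 for n in funcs]  # lines per function
    return len(lines), sum(spans) / len(spans)

sample = """
def add(a, b):
    return a + b

def mul(a, b):
    return a * b
"""
print(loc_and_avg_function_length(sample))  # (4, 2.0)
```

Run over a whole tree and per module, the same counts feed the kind of project- and team-relative comparison described above.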
Speaking of the subject, it would be nice to have some of those calculated for the articles here.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
If your big ball of mud is anywhere close to the size of my big ball of mud, I would recommend rebuilding the app from scratch.
|
Every developer everywhere always wants to build it again from scratch.
Actually, I just did for a customer's project... I plead guilty.
|
If you're saying it could be just my subjective perception that my application is a super-sized ball of mud that can't be rescued, only replaced by a whole new solution - then you're lucky: I won't show you the source, because I don't want to be held liable for your mental state.
|
I know there's nothing left to speak of, but I fear your family could smell a chance and put the blame on me.
|
Let me just leave this opinion[^] from Joel Spolsky here, shall I.
|
Thank you for the link, Jörgen - an interesting read! And it probably applies to a lot of "those cases". If you're not concerned about your peace of mind, I'll show you the source of my old program and you will acknowledge that there are exceptions - or at least one.
/Sascha
|
As others have said, quality is subjective.
The most objective way to measure is to use the existing tools that have been created:
Memory Leak Testing:
- Valgrind
Performance Tuning:
- Cachegrind
- Callgrind
- The profiling tools in Visual Studio
Static analysis:
- Klocwork: You configure the tool with a coding standard, such as JSF++, MISRA, or your own custom rules, and it analyzes for potential issues.
- Lattix: Evaluates the coupling of the different modules and reports how modularly your code is organized.
Lines of code is useful if you combine that information with other statistics that you maintain, such as the number of defects, the volatility of the code in particular modules, and the amount of time developers spend modifying code in those modules.
Tools can help you identify issues, and sometimes even point towards possible solutions.
However, I have mostly witnessed people expecting to run the tools and, as if by magic wand, have everything fixed.
Tools cannot fix a social problem.
Ultimately, the value you get from the tools is related to how much time you want to invest in learning them and effectively using them.
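To make the "combine the numbers" idea from this post concrete, here is a small Python sketch that ranks modules by defect density; all module names and figures are invented for illustration:

```python
# Invented per-module statistics: size, known defects, and recent churn.
modules = {
    "parser":  {"loc": 4200, "defects": 30, "commits_90d": 55},
    "ui":      {"loc": 9800, "defects": 12, "commits_90d": 10},
    "network": {"loc": 1500, "defects": 25, "commits_90d": 40},
}

def defect_density(stats):
    """Defects per thousand lines of code (KLOC)."""
    return stats["defects"] / (stats["loc"] / 1000)

# Worst offenders first; high density plus high churn flags a hotspot.
for name, stats in sorted(modules.items(),
                          key=lambda kv: defect_density(kv[1]),
                          reverse=True):
    print(f"{name:8s} {defect_density(stats):6.1f} defects/KLOC, "
          f"{stats['commits_90d']} commits in 90 days")
```

Here "network" tops the list despite being the smallest module, which is exactly the kind of signal raw LOC alone would miss.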
|
I would actually be interested in an experiment (maybe a hackathon) where everyone was asked to write the same code; people would then look at the code and judge its quality, and then these metric tools would be applied to see if the results correlate.
Or, I could take some C# code that some other idiot wrote and benchmark it against my cleaned-up version to see what the difference is. It would also be interesting to see how the numbers vary. I might even apply it to some code I have where I'm the idiot.
Marc
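A minimal version of such a metric tool, in Python rather than the C#/.NET tools discussed (an assumption for the sketch): a cyclomatic-complexity counter built on the standard `ast` module, counting 1 plus each decision point in a function:

```python
# Toy cyclomatic-complexity counter: 1 + number of decision points.
import ast

# Node types treated as decision points (a simplification of the metric).
DECISIONS = (ast.If, ast.For, ast.While, ast.And, ast.Or,
             ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(func_source):
    tree = ast.parse(func_source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = "def g(x):\n    if x > 0 and x < 10:\n        return x\n    return 0\n"
print(cyclomatic_complexity(simple))   # 1
print(cyclomatic_complexity(branchy))  # 3  (the if, plus the 'and')
```

Running something like this over the hackathon submissions, then comparing the scores with the human quality rankings, would be one cheap way to test the correlation Marc is after.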
|
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site... could be an interesting exercise.
|