|
|
Sometimes I don't know what it is, but it seems interesting. The Code Coverage Tool is a tool... OK, never mind.
|
|
|
|
|
I used to use Bullseye, a superb piece of software, years ago, back when it wasn't absurdly expensive.
|
|
|
|
|
Unit testing and SonarQube are free.
|
|
|
|
|
We have to write tests and pass SonarQube analysis at work.
|
|
|
|
|
We used to have SonarQube at a previous employer.
People mostly ignored it, but sometimes they didn't.
So this one time, SonarQube said, "guys, you should really update this JavaScript code to use the newest ES6 syntax."
Most of us know not to update JavaScript too soon (or to do browser sniffing, add shims, etc.), but one of my coworkers didn't (and to be fair, probably none of them did).
Guess whose code broke in production due to unsupported syntax in various browsers?
All in all I liked SonarQube though; it gives some good tips, and sometimes it even told me stuff I didn't know yet.
|
|
|
|
|
He probably missed some polyfills.
Yeah, SonarQube can have some good suggestions, but also some really silly, nonsensical ones IMO. For the project I used to work on, we basically had to pass all the bug and security measures. For the new one we need 80% code coverage, but like you said, the coverage percentage alone doesn't mean the tests are any good. I wonder if there is a measure for test quality?
|
|
|
|
|
Setting a coverage target is quite useless IMHO, outside some corner cases like safety-critical software (DO-178, anyone?).
The interesting part is the uncovered code:
- it can show insufficient testing (expand your test cases)
- in legacy systems, it often points to useless/redundant/dead code, and shows that some cleanup is required (sometimes all the way up to the specs!)
|
|
|
|
|
Just making sure a test touches some code increases coverage, but it says nothing about the quality of the test(s).
I once worked for a company that had high coverage, but their tests consisted of calling code and expecting an exception, because some object was always null during tests.
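A minimal sketch of that point (hypothetical `Discount` class, names invented for illustration): both tests below execute every line, so a coverage tool reports 100% either way, but only the second one can actually catch the bug.

```csharp
using System;

public static class Discount
{
    // Intentionally buggy: a member discount should be 10%, not 50%.
    public static decimal Apply(decimal price, bool isMember) =>
        isMember ? price * 0.5m : price;
}

public static class Program
{
    public static void Main()
    {
        // "Coverage" test: touches both branches, asserts nothing.
        // The tool reports 100% coverage; the bug survives.
        Discount.Apply(100m, true);
        Discount.Apply(100m, false);
        Console.WriteLine("coverage-style test: passed");

        // A real test: identical coverage, but it checks the result.
        decimal memberPrice = Discount.Apply(100m, true);
        Console.WriteLine(memberPrice == 90m
            ? "real test: passed"
            : "real test: FAILED (expected 90)");
    }
}
```

Same coverage number, very different value as a test.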
|
|
|
|
|
I tend to agree...
I fell in love with the "Code Complete" approach of adding a breakpoint at the start of newly written logic.
And "watching" it execute "in context"... I can't fathom NOT doing that.
The less certain I am about the coding approach... the more breakpoints I will drop. Sometimes just to ensure I've tested multiple paths.
But coverage for coverage's sake... Not interested.
In fact, I prefer code reviews over the "sh!t storm" of testing, where the complexity of the tests/stubs exceeds that of the application by a wide margin. Sometimes I wonder if developers are just trying to be cute. And I've seen some horrendous production code that was designed to be testable... They were running the tests... But when you need 12 lines of code just to call a method, so everything can be constructed and passed in properly, so it can be tested in a test environment, or without a real DB, etc... it just feels "icky"...
|
|
|
|
|
Kirk 10389821 wrote: I fell in love with the "Code Complete" approach of adding a breakpoint at the start of newly written logic.
And "watching" it execute "in context"... I can't fathom NOT doing that. I haven't read that book, but I do that too
Kirk 10389821 wrote: Where the complexity of testing/stubs is beyond that of the application complexity by a margin. Been there, done that
Had to create complete database objects with 30+ fields just to see some code not outright crash.
Then create that same object again (make a function that returns said object), but instead of property X having value Y use value Z because that should give a different result.
Oh, and don't forget to stub IServices A, B and C, which should all return specific values.
We know A, B and C work because we've tested that elsewhere.
Kirk 10389821 wrote: And I've seen some horrendous production code that was designed to be testable Yeah, me too.
Make everything public so you can test it.
I don't want to make everything public, only the interface members.
But the interface does too much to test.
Well, guess I'm not testing it then...
But seriously, when I have an IFileService and an AzureFileService implementation, the only real test I can do is run it and see if my file ends up in Azure.
No amount of stubbing or mocking is ever going to compete with that!
"Yeah, but you can test that the method is calling the Microsoft.Azure.BlobClient.WriteBlobAsync(byte[], string, string) at least once."
Like, seriously, WHAT!? If that's the kind of testing you need, you have bigger problems than testing.
I've found "common sense"™ to be the best testing, debugging and coverage tool of all
Too bad it's in such short supply these days
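For reference, the kind of interaction test being derided above might look like this with a hand-rolled stub (the `IFileService` name is from the post; the method signature and spy class are invented for illustration):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical interface, loosely following the discussion above.
public interface IFileService
{
    Task SaveAsync(byte[] contents, string container, string name);
}

// A spy implementation: it saves nothing, it only counts calls.
public class SpyFileService : IFileService
{
    public int SaveCalls { get; private set; }

    public Task SaveAsync(byte[] contents, string container, string name)
    {
        SaveCalls++;
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        IFileService service = new SpyFileService();

        // "Code under test" would normally sit between us and the service.
        await service.SaveAsync(new byte[] { 1, 2, 3 }, "backups", "a.bin");

        // The interaction assertion: it proves a call happened,
        // not that any file actually landed anywhere.
        Console.WriteLine(((SpyFileService)service).SaveCalls == 1
            ? "called once"
            : "FAILED");
    }
}
```

Which is exactly the complaint: the test passes without any file ever being written, so it tells you nothing an integration run against real storage wouldn't tell you better.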
|
|
|
|
|
Isn't one of the primary uses of C# partial classes to put test code into a partial class, with access to non-public members?
Partial classes certainly have other uses as well, but when I first learned about them, I immediately put them to use for test code, long before any other use (with the obvious exception of the skeletons VS/WPF generates for you to fill in; that is not my use of partial classes).
|
|
|
|
|
I have bad news for you, but no, absolutely not.
Partial classes can't be defined across assemblies.
You're basically creating a single class, but split up over multiple files, so those files need to be in the same assembly.
If you're using one of those files for testing, that means your test code is running in your production code as well.
So let's say you've got a partial Person class with FirstName and LastName properties and a GetFullName() function.
Now you create a partial class in a Person.Test.cs file in the same assembly and add a test called TestGetFullName_ShouldReturnFullName().
Your public API for your Person class is now FirstName, LastName, GetFullName() and TestGetFullName_ShouldReturnFullName().
I'm pretty sure you don't want that
Unless I'm missing something?
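The `Person` scenario above can be sketched in a few lines (both halves of the partial class are shown in one file here for brevity; in a real project they would be `Person.cs` and `Person.Test.cs` in the same assembly, since the compiler requires it):

```csharp
using System;
using System.Linq;

// Person.cs -- the production half of the partial class.
public partial class Person
{
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
    public string GetFullName() => $"{FirstName} {LastName}";
}

// Person.Test.cs -- the "test" half; it must live in the SAME
// assembly, so it ships with the production code.
public partial class Person
{
    public bool TestGetFullName_ShouldReturnFullName() =>
        new Person { FirstName = "Jane", LastName = "Doe" }
            .GetFullName() == "Jane Doe";
}

public static class Program
{
    public static void Main()
    {
        // Reflect over Person's own public methods: the test method
        // shows up right next to GetFullName (plus property accessors).
        var methods = typeof(Person)
            .GetMethods()
            .Where(m => m.DeclaringType == typeof(Person))
            .Select(m => m.Name);
        Console.WriteLine(string.Join(", ", methods));
    }
}
```

The reflection output includes `TestGetFullName_ShouldReturnFullName`, which is the objection in a nutshell: the test method has become part of the class's public surface.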
|
|
|
|
|
Sander Rossel wrote: Partial classes can't be defined across assemblies. You're basically creating a single class, but split up over multiple files, so those files need to be in the same assembly. So? In the days of executable files, rather than assemblies, the test code resided in the same executable as the application code. No difference.
In one of my earlier jobs, we had extensive discussions about whether to remove test code from the system to be released, or to leave it in there. The tradition was to leave it in: when a customer reported a problem, we could ask him to turn on the switch generating test logs for analyzing the error. This had been very useful on several earlier occasions, and we decided to keep it in, even though it significantly increased total code size. After that, we made use of the test code several times while I was still with the company.
Now, there is test code and then there is test code. You obviously do not ship to the customer procedures for testing hundreds of borderline cases, or the complete Cartesian product of the values of five different parameters. What you ship to the customer is test procedures for environmental conditions (including file system and network), verifying data structure consistency, etc.
If you decide to leave parts of (or all) the test code out of the distribution, then you leave out that source file from the assembly and rebuild. True enough: You are then delivering a different binary from that where the complete set of tests were made (which was one of the concerns that made us leave the test code in). This is more relevant in a language where you manipulate pointers and memory allocation directly; pointer problems are far less frequent in C# than in e.g. plain C. The great majority of code bugs occur at the logical level, independent of physical location and addresses, and will be detected even if the binaries are not perfectly identical. (No module testing is done in a distribution binary!)
So, rather than making everything public so you can test it, as was suggested, you add a partial class (possibly with partial methods, if the test strategy requires the application code to call into the test procedures, but you would usually try to avoid that). That gives the test procedures, and no one else, controlled access to the private elements they need, and the test procedures do not gain access to 'everything else'.
For returning to the subject line contents: Especially with interpreted languages that are not even syntax checked at build time, coverage is essential. I have, both in code I wrote myself and in libraries developed by co-workers, had fatal crashes due to syntax errors ... in error handling procedures. Making sure that your tests provoke all possible errors (and the impossible ones as well) in all flavors so that the entire error handling has been at least syntactically checked may be quite difficult and you may have to resort to mockup errors, but do make sure that you test all error handling procedures thoroughly!
Coverage tools sometimes give you surprises: if you are not familiar with them, you might not believe the figures they report the first time you use them. Most of us have a lot of code that has never been tested. For compiled languages, you can at least assume that the syntax is correct, but you forget to test all the 'else' clauses, several of the switch alternatives, etc.
So I strongly disagree with the subject line.
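The kind of surprise described above is easy to reproduce (hypothetical `Classify` function, invented for illustration): a happy-path-only test suite runs green while two of the three branches never execute, which is exactly what a coverage report would flag.

```csharp
using System;

public static class Classifier
{
    public static string Classify(int n)
    {
        if (n > 0)
            return "positive";
        else if (n < 0)
            return "negative";   // never executed by the tests below
        else
            return "zero";       // never executed either
    }
}

public static class Program
{
    public static void Main()
    {
        // A typical "happy path" test suite: only the first branch runs.
        // A coverage tool would flag both 'else' branches as untested,
        // even though every test passes.
        Console.WriteLine(Classifier.Classify(5));
        Console.WriteLine(Classifier.Classify(42));
    }
}
```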
|
|
|
|
|
trønderen wrote: test procedures for environmental conditions (including file system and network), verifying data structure consistency etc. Never wrote those.
They don't sound like technical tests, though; more like a utility that checks whether a user's system is running well.
I don't really see why you need partial classes for such tests though.
trønderen wrote: procedures for testing hundreds of borderline cases, or the complete cartesian product of the values of five different parameters These are (unit) tests in my book, and exactly the test code I was talking about that you don't put in partial classes.
You don't ship unit tests to customers.
It would clutter up the public API with lots of weird methods.
trønderen wrote: For returning to the subject line contents: Especially with interpreted languages that are not even syntax checked at build time, coverage is essential. I'd say linters do a better job at checking syntax.
You shouldn't have 100% coverage just to check syntax (and even then, it's not checking syntax, it's just checking that syntax doesn't cause errors).
trønderen wrote: Coverage tools sometimes give you surprises: If you are not familiar with them, you might not believe the figures they report, the first time you use them. Most of us has a lot of code that has never been tested. For compiled languages, you at least can assume that the syntax is correct, but you forget to test all the 'else' clauses, several of the switch alternatives, etc. I agree with you there.
If your tests make sense it can add some insights.
I don't think coverage is a very good metric on its own, though.
Very often, you don't need 100% coverage.
My coverage is probably about 0.01%, as I don't test by default and only add tests when I think some code could easily break or is difficult to test otherwise.
The methods I test do have a 100% coverage though.
Basically, I've gone from a "test unless..." to a "test if..." approach and in that scenario, too, coverage is a useless metric.
|
|
|
|
|
Sander,
Thanks for the details and confirmation bias on my part. LOL
I enjoy the details in your answers... Especially since I think we are cut from the same cloth in many ways.
(that just means you're probably old... LOL)
|
|
|
|
|
Kirk 10389821 wrote: I enjoy the details in your answers I like to get my story straight
Kirk 10389821 wrote: that just means you're probably old... LOL Only 34, but it does feel old sometimes
|
|
|
|
|
LOL,
I was programming professionally while I was in high school. I was SHOCKED people got paid to do this! LOL
That said, at 55 I am slowly migrating into Project/Team Management. This is a young man's sport at some point.
I didn't start feeling it at your age... but around my 40s, I could tell I'd turned a corner... I just couldn't GROK
the new stuff as quickly.
|
|
|
|
|
Kirk 10389821 wrote: I didn't start feeling it at your age... [...] I just couldn't GROK the new stuff as quickly. Don't get me wrong, I grok it all, I just don't agree with it
That elderly feeling has nothing to do with my programming prowess, which is as strong as ever, and everything to do with my neck, my back, feeling tired, wanting those darn kids to get off my lawn...
|
|
|
|
|
No, but I would like to get it incorporated into the CI/CD flow.
|
|
|
|
|
|
When I work for a company that mandates them, I do run them. Didn't notice any impact on the quality of the released software, except that in a couple of rare cases CC tools discovered some dead code that we were able to remove.
|
|
|
|
|
|
Didn't know what was meant by Code Coverage Tools.
Now that I do I realise that they are not required where I work, as it would appear that our "IT Development Partners" (i.e. outsourced) do not actually bother to test
Me bitter? Surely not!
|
|
|
|
|
No code coverage, no SPICE level, no automotive products.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|