|
Yep. Been there, done that, got the scars on my back from self-flagellation for trying it.
Software Zen: delete this;
|
|
|
|
|
That is also one of the mantras I preached when teaching programming. But even though we had been teaching the kids about limited precision, it was very difficult for them to understand that "if ((1/3)*3 == 1)" could fail. (Except that if you really used constants, or compile-time-evaluated expressions, an optimizing compiler might remove the entire "if".)
Students often have a vague understanding of terms like "integer" and "float" (or "real"). So I preferred to refer to them as "counts" and "measurements". That made it a lot easier for them to understand how both integers and floats behave in the computer.
One of the great details of the APL language is the environment variable quadFUZZ (if my memory of the name is correct): when comparing floats, if the difference is less than quadFUZZ, the values are treated as equal. (I believe that the fuzz was actually scaled by the magnitude of the float values, so it was a relative, not absolute, tolerance, but I am not sure - APL is too long ago!)
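For what it's worth, that fuzz idea is easy to sketch in a few lines of C# (the helper name and default tolerance are mine, nothing standard):

using System;

// A minimal sketch of fuzzy float equality with a relative tolerance,
// in the spirit of APL's fuzz. Name and default tolerance are my own.
static bool NearlyEqual(double a, double b, double relTol = 1e-9)
{
    if (a == b) return true;                      // exact hit, including both zero
    double scale = Math.Max(Math.Abs(a), Math.Abs(b));
    return Math.Abs(a - b) <= relTol * scale;     // tolerance scales with magnitude
}

// NearlyEqual((1.0 / 3.0) * 3.0, 1.0) -> true, even where == happens to fail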
|
|
|
|
|
Oddly enough, I can't remember running into any of the problems you're talking about in my own experience. And I am not even a programmer by trade; I studied physics, and programming was a side gig at first.
To me, integer numbers are exact, and floats - since it is impossible to represent arbitrary numbers with discrete values - are approximations. They may be good enough for daily use, but they may fail, and when they do, they fail. Maybe that's why I didn't have any problems: the concept of approximations is deeply nested in a physicist's mind.
Well, that and I recently built a system which used integers for its measurement values (mostly because the sensor returns integers in units of 0.01 °C). So your vocabulary would have failed me spectacularly
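If anyone is curious, the integer-measurement idea sketches out roughly like this (the type and names are my own invention, assuming a sensor that reports hundredths of a degree):

// Sketch: store temperature as an integer count of 0.01 °C steps, so
// arithmetic and equality stay exact; convert to float only for display.
struct Temperature
{
    public readonly int CentiDegrees;              // e.g. 2137 means 21.37 °C
    public Temperature(int centiDegrees) => CentiDegrees = centiDegrees;
    public double Celsius => CentiDegrees / 100.0; // for display only
    public override string ToString() => $"{Celsius:F2} °C";
}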
|
|
|
|
|
Students insist that when you measure out 3 kg of flour for your bread, that is a count of the number of kilograms. Their body height is a count of centimeters.
It goes the other way, too: they may use a float for the number of students in the class, arguing that when they increase the number by 1.0 for each newcomer, the float still represents the count of students. And, the more advanced ones argue, with a float you can count any number of units. A plain int can't even count the number of living humans!
Sure, most of these problems come with students who have been playing with computers in their bedrooms since they were ten, all self-taught, having picked up one little bit here and one there, with no trace of discipline whatsoever. But frequently, these become class heroes: other students learn "smart tricks" from them, and "how real programmers do it, not the way that silly lecturer tells us to". So they can have a large influence on otherwise "innocent" students.
This is mostly a problem with students. With professional programmers, the problem is with those who do not fully realize that e.g. a comparison does NOT return an integer (-1, 0, 1) but "less", "equal", "greater", and you should NOT compare it to numerical values. If you declare non-numeric, yet ordered, values as an enum, and create an array of, say, weather[january..december], you canNOT index this array with an integer: "because may is the fifth month, I can use 5 as an index... no, wait, I have to use 4, because it is zero-based!"
One specific example: in my own C code, I used to define "ever" as ";;" so that an infinite loop is made explicit as "for (ever) {...}" (inspired by the CHILL language, where "for ever" is recognized by the compiler). I used this in one of the code modules I was responsible for at work. It was discovered by one of the young and extremely self-confident programmers, who was immensely provoked by it: he promptly replaced it with the "proper" way of writing an infinite loop, "while(1){...}". He then searched through our entire codebase for other places where I had committed similar sins, adding a very nasty remark in the SVN log for each and every occurrence, requesting that everybody in the future refrain from such inappropriate funniness - we should do our programming in a serious manner.
Oh, well - I didn't care to argue. Why should I? Readable, easily comprehensible code is most essential when it will be read by people who are not into embedded systems code. Or rather: to a developer of embedded C code, it is far easier to recognize "while(1)" as an infinite loop than "for (ever)" for the same thing.
|
|
|
|
|
Don't compare datetimes for equality, either, particularly if they don't all come from the same 'source'.
|
|
|
|
|
That, on the other hand, may work just fine. It depends on the language/runtime library and the source of the dates, but when I want to know if some date is today, equality works.
Again, assuming the runtime library helps and you know what you're doing. There's a reason why TheDailyWTF has a couple of stories on mishandled time stamps.
I suppose we could add "Don't build home-grown date/time handling" to the list of useful mantras. There are heaps of ways to get it wrong, and even if you test all you can, it may still fail when a leap year comes around.
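In .NET terms, the safe version of "is this date today" looks roughly like this (GetStamp is a hypothetical stand-in for wherever the value actually comes from):

using System;

// Sketch, assuming .NET's DateTime: compare calendar dates, not full timestamps.
DateTime stamp = GetStamp();                   // hypothetical source of the value
bool isToday = stamp.Date == DateTime.Today;   // both truncated to local midnight

// On .NET 6+, DateOnly drops the time component entirely:
// bool isToday = DateOnly.FromDateTime(stamp) == DateOnly.FromDateTime(DateTime.Now);

static DateTime GetStamp() => DateTime.Now;    // placeholder for the real source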
|
|
|
|
|
Pure dates are generally not a problem, as long as you (or the coder from whose output you are getting the date) don't/didn't do something very stupid. Even here, however, timezone issues can cause problems. I once had an issue which resulted from the author of the firmware of a device I was getting data from recording what should have been a pure (midnight) datestamp as the corresponding local datetime. Since I am in the GMT-5 timezone, midnight on April 4 became 7 pm on April 3!
Trying to compare datetimes for simultaneity, however, is almost always a severe PITA when the source clocks are not both perfectly synchronized and using the same basic internal representation for clock time.
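The failure mode is easy to reproduce, assuming .NET's DateTime (the year is made up; the offset is from the story above):

using System;

// Sketch: a "pure" midnight datestamp rendered as a local datetime in GMT-5.
var utcMidnight = new DateTime(2020, 4, 4, 0, 0, 0, DateTimeKind.Utc);
var local = utcMidnight.ToLocalTime();
Console.WriteLine(local); // in a GMT-5 zone: 2020-04-03 19:00 - April 4 became April 3!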
|
|
|
|
|
Storing dates internally in UTC solves almost all issues. Not all of them, but almost all. Comparing time stamps in milliseconds for equality may work or may fail, depending on the context. In a scientific context, all clocks involved are precise down to milliseconds at worst and far more precise at best - and time differences of a few milliseconds make huge differences.
But it all boils down to context. And yeah, I've seen some very stupid date handling myself. My point is: while comparing floats for equality is a horrible idea by default and always, comparing dates for equality may work very well depending on the circumstances. Well, that and dates are like encryption: there are heaps of ways to get them wrong, many of them very subtle but still destructive, and only a few (if not just one) ways to get them right.
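Which circles back to the float discussion: simultaneity checks end up wanting a fuzz factor too. A sketch (the name, signature and example tolerance are mine):

using System;

// Sketch: treat two timestamps as "simultaneous" if they differ by less
// than a context-dependent tolerance.
static bool RoughlySimultaneous(DateTime a, DateTime b, TimeSpan tolerance)
    => (a.ToUniversalTime() - b.ToUniversalTime()).Duration() <= tolerance;

// e.g. RoughlySimultaneous(t1, t2, TimeSpan.FromMilliseconds(5))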
|
|
|
|
|
- How would I explain this code to the cleaning personnel?
- If you can explain how it's done, you can script it.
|
|
|
|
|
I used to let them test my stuff. They were really good at finding bugs and breaking stuff, because they did unexpected things - which was exactly what I wanted.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
Another reason to avoid "Yoda conditionals":
if (10 > 5)
if (10.CompareTo(5) > 0)
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
One of those looks like a bug.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
My mantra is "Dammit! Dammit! Dammit!".
Way back when I was learning C, to help me remember to add the extra equals sign when writing a conditional statement as opposed to an assignment, I would say "equals to" and press the equals sign twice (==), one press per word, and say "equals" and hit the = sign only once when assigning a value.
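The trap still exists in C# whenever the variable happens to be a bool, so the mnemonic isn't wasted. A contrived sketch:

bool done = false;
if (done = true)   // assignment! Compiles in C# because the result is a bool,
{                  // so the branch always runs (the compiler does at least warn).
    // ...
}
if (done == true)  // comparison - or, better yet, simply: if (done)
{
    // ...
}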
It was broke, so I fixed it.
|
|
|
|
|
One of the reasons that I miss Pascal: no double ==, and no parentheses requirement.
|
|
|
|
|
"Developers may come and go, but bugs will stay forever"
|
|
|
|
|
Mine applies to date comparisons.
"Later is greater"
Steady Eddie (for those that never saw "The Hustler")
|
|
|
|
|
"Check the plugs."
Plugs being any of the following:
- connection strings
- method parameters
- app settings
"F5 and pray"
Let 'er rip and see what happens. Don't be afraid. 😁 I usually say a short prayer as my coffee is compiling / starting up.
|
|
|
|
|
Don't solve problems which don't actually exist.
Like anything, this must be used in moderation. If there's a good, easy-to-understand abstraction you can implement which isn't required today, but you can see an obvious business case for it coming, go ahead.
At the same time, be careful of overengineering just because you thought of some possible, but unlikely, abstraction.
Design your code in such a way that the solution can be implemented when the problem actually becomes something to solve.
|
|
|
|
|
I have worked in the automation systems business for a long, long time, and I have found it is almost impossible to over-engineer things there. I wrote "almost" only because I haven't seen it happen yet.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
Well, I more or less always knew that my greatest weakness was my tendency to assume certain sections of my code were bug-free. That kind of code-blindness causes extra work when debugging. Through the years I have advanced to the point where I can now admit that I write loads of lines of bugs encapsulating some small bits of perfectly functional code. I'm actually a decent debugger now; I'm still working on the programming part.
|
|
|
|
|
AnotherKen wrote: I'm actually a decent debugger now; I'm still working on the programming part.
Ha! Relatable content.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
The REALLY big mistake is that the C# designers carried forward the old C way of reporting something non-numeric as if it were numeric. IT IS NOT!
The value of comparing A with B is either "A is less", "they are equal", or "B is less" - NOT -1, 0 or 1. C# did abandon pointers-as-integers: "if (pointer)" is not valid; you must test "if (pointer != null)". They should have completed the job!
Every now and then I get so frustrated over this that I write a thin skin over the comparisons, casting those inappropriate integers into an enum. But C# doesn't really treat enums as a proper type, more as synonyms for integers, so it doesn't really solve the problem; it just reduces my frustration to a manageable level.
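For the curious, my "thin skin" looks something like this (all the names here are mine, nothing standard):

using System;

// Sketch: fold CompareTo's int into an explicit three-valued result.
enum Ordering { Less, Equal, Greater }

static class Compare
{
    public static Ordering Of<T>(T a, T b) where T : IComparable<T>
    {
        int c = a.CompareTo(b);
        return c < 0 ? Ordering.Less
             : c > 0 ? Ordering.Greater
             : Ordering.Equal;
    }
}

// if (Compare.Of(x, y) == Ordering.Less) { ... } instead of if (x.CompareTo(y) < 0)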
|
|
|
|
|
The problem with that is they may not be 1, 0 or -1.
Any positive value and 1 are going to have to be treated the same, and the same goes for the negative values - they're all -1, basically.
But other than that, yeah.
Although I hate enums, because .NET made them slow. I still use them, but they make me frustrated.
So usually in my classes where I don't want to burn extra clocks, like my pull parsers, I use an int to keep state, and cast it to an enum before the user of my code touches it.
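The pattern sketches out like this (types and names are illustrative, not my real parser code):

// Sketch: keep raw int state in the hot path, surface an enum only at the boundary.
enum ParseState { Initial = 0, InMarkup = 1, InText = 2 }

class PullParser
{
    int _state;                                    // raw int in the inner loop
    public ParseState State => (ParseState)_state; // cast only when callers ask
    internal void Advance() => _state = 1;         // hot path stays on ints
}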
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
honey the codewitch wrote: Although I hate enums, because .NET made them slow. I still use them, but they make me frustrated.
The very first compiler I dug into was the Pascal P4 compiler - those who think "open source" is something that came with Linux are completely wrong. Pascal provides enums as first-class types, not something derived from integer.
The compiler source showed very clearly that the compiler treats enums just like integers; it just doesn't mix the two types up, it doesn't allow you to use them interchangeably. It is like having intTypeA and intTypeB which are 100% incompatible. If you cast to (or from) int, it is a pure compile-time thing: it simply bypasses the error reporting that the types are incompatible. There is nothing that causes more instructions to be executed when you use enums rather than int - not even when you cast. Why would there be? Why should .NET make them slower?
If you have a full enum implementation (like that of Pascal) and make more use of it, then a few more instructions may of course be generated. E.g. if you have a 12-value enum from january to december, and define an array with indexes from april to august, then the runtime code must skew the actual index values so that an "april" index is mapped to the base address of the array, not three elements higher. Index values must be checked against the array declaration: january to march and september to december must generate an exception. But that is extended functionality - if you wanted that with integer indexes, and the same checking, you would generate a lot more code writing the index skewing and testing as explicit C statements.
Maybe the current C#/.NET compiler is not doing things "properly", the way that Pascal compiler written in the early 1970s did. I guess it could; I see no reason why it shouldn't be able to - nothing in the semantics of C#'s "semi-enums" makes it more difficult than Pascal's full enum implementation.
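Spelled out in C#, the skewing and checking Pascal would generate looks roughly like this (the array and names are my own example):

using System;

// Sketch: an array logically indexed april..august, with the index skewing
// and range check Pascal would emit automatically.
enum Month { January, February, March, April, May, June,
             July, August, September, October, November, December }

static class Rainfall
{
    static readonly double[] data = new double[Month.August - Month.April + 1];

    public static double Get(Month m)
    {
        if (m < Month.April || m > Month.August)   // january..march and
            throw new IndexOutOfRangeException();  // september..december must fail
        return data[m - Month.April];              // "april" maps to element 0
    }
}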
|
|
|
|
|
It depends on what you do with them, but casting them back and forth to int requires a CLI check - I think maybe for invalid values.
Ints don't require that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|