|
Randor wrote: Come on man, I know you're not really that pedantic about verbiage. Call it whatever you want; "Access Violation" is simply an error title. Now you're just trying to be argumentative. I'll call it segmentation fault[^] from now on just to get under your skin.
It is more than just verbiage. By following your rules (which, btw, really make sense in C90, where you have to declare all variables at the beginning of the function) you will indeed never hit a "null pointer segfault", but segfaults happen on any invalid pointer, not just NULL.
Randor wrote: There are many techniques to making software more robust. Assigning pointers to NULL and checking for NULL is only one of them
At least now we are not claiming that by following these two simple rules we will never segfault.
Randor wrote: Initializing a pointer to NULL to denote an invalid memory address is the same magic number technique. It assists the programmer with validating pointers and most certainly assists with making software more robust.
And guess what? I agree. If you initialize a pointer to NULL, and then go down different code paths in which it may or may not be set to point to a valid object, checking it for NULL makes perfect sense.
However, this thread is about a guy who inserts NULL checks all around the code in the hope that it will make it more robust. Well, it won't. If there is a bug in the code, chances are it will not be caught by a NULL check.
|
|
|
|
|
Performance aside, checking for NULL will give you a false sense of security. A bad pointer usually has a non-NULL value anyway. E.g.:
void my_function(MyType* object)
{
    delete object;
    object = NULL; // nulls only the local copy of the pointer
}
int main()
{
    MyType* object = new MyType;
    my_function(object);
    if (object)                 // object is dangling but non-NULL, so the check passes
        object->do_something(); // undefined behavior: use after delete
}
modified on Saturday, December 20, 2008 6:55 PM
|
|
|
|
|
Protect against what you can anyway.
|
|
|
|
|
PIEBALDconsult wrote: Protect against what you can anyway.
Sorry, not good enough
The only sane way to avoid this kind of problem is to keep the object alive within the scope it is used and have no pointers pointing to it from outside that scope. Checking for NULL is helpful only in cases where NULL is a valid parameter to a function (meaning: ignore this parameter). As a safety measure, it is completely worthless - there are billions (on 32-bit systems) of possible invalid values for a pointer - why check for 0 only?
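A minimal sketch of the scope-based ownership idea above, assuming C++11 or later (the struct and function names are hypothetical): the object's lifetime is tied to one scope, and no owning pointer ever escapes it, so no dangling pointer can be created in the first place.

```cpp
#include <memory>

// Hypothetical type standing in for MyType from the thread's example.
struct MyType {
    int do_something() const { return 42; }
};

int use_object_safely() {
    // The object lives exactly as long as this scope owns it.
    std::unique_ptr<MyType> object(new MyType);
    // Pass a reference rather than an owning pointer: callees cannot
    // delete it, so the object outlives every use.
    const MyType& ref = *object;
    return ref.do_something();
}   // object is destroyed here, exactly once
```

The point is that no NULL check appears anywhere, yet no invalid dereference is possible, because ownership never leaks out of the scope.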
|
|
|
|
|
Because I can, and a null pointer is generally more likely than an invalid pointer.
Again I'll invoke the similarity to a condom; it may not protect against everything, and it may reduce performance somewhat, but it does protect against some specific things.
Do you wear a seat belt? There are some people [weasel words] who argue against them, saying, "what if I drive off a bridge into a lake and drown because I can't undo the seat belt?" I wear a seat belt; a crash is far more likely than falling into a lake.
Locking your car or house is more of a hindrance to you than to a serious thief; do you do it anyway? I do.
If I write a method that takes several pointers, I can check each one and tell the caller exactly which parameter(s) were null, rather than simply blowing up and making the post-mortem team guess what happened.
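A hypothetical sketch of the guard described above (the function and parameter names are made up): report exactly which pointer parameters were null, instead of simply blowing up and leaving the post-mortem team to guess.

```cpp
#include <string>

// Returns the names of any null parameters; an empty string means
// every pointer was non-null. Purely illustrative error reporting.
std::string null_parameters(const void* src, const void* dst, const void* ctx) {
    std::string bad;
    if (src == nullptr) bad += "src ";
    if (dst == nullptr) bad += "dst ";
    if (ctx == nullptr) bad += "ctx ";
    return bad;
}
```

A caller (or a logging layer) can then include the returned names in the diagnostic instead of an opaque access violation.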
|
|
|
|
|
PIEBALDconsult wrote: Because I can,
You can also check for 0x00000001, which is also an invalid value on most systems. Then you can also check for 0x00000002, 0x00000003, and all the other values you know are invalid. Again, why is 0 special?
PIEBALDconsult wrote: and a null pointer is generally more likely than an invalid pointer
I completely disagree here. A pointer will have the value NULL only if you explicitly set it to NULL - an uninitialized pointer is not going to be NULL, and neither is a "dangling" pointer.
PIEBALDconsult wrote: Again I'll invoke the similarity to a condom
Sorry, there is no similarity at all. A condom protects from some but not all dangers. Checking for NULL protects against nothing.
PIEBALDconsult wrote: If I write a method that takes several pointers, I can check each one and tell the caller exactly which parameter(s) were null, rather than simply blowing up and making the post-mortem team guess what happened.
Your check makes sense only if your function takes input pointers that can legally be zero, and then ignores them. As error detection, it is worthless. If a caller passes a NULL pointer to a function, it means he set it to NULL; detecting a NULL here makes sense only if the function is documented to allow NULL as an option.
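A sketch of the one legitimate case conceded above, with a hypothetical function: NULL is documented to mean "ignore this parameter" (the same convention as, e.g., the `endptr` argument of `strtol`). Here the NULL check implements the interface; it is not an error guard.

```cpp
// 'remainder' is optional by contract: pass nullptr if you don't care.
int divide(int a, int b, int* remainder) {
    if (remainder != nullptr)   // NULL is a legal argument here
        *remainder = a % b;
    return a / b;
}
```

Callers who want the remainder pass a real pointer; callers who don't simply pass nullptr, and the check quietly skips the store.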
|
|
|
|
|
Nemanja Trifunovic wrote: NULL only if you explicitly set it to NULL - an unitialized pointer is not going to be NULL
C99 and C# initialize pointers (references) to NULL.
Retraction: OK, I misread the C99 spec; I saw, "-- if it has pointer type, it is initialized to a null pointer;" without reading the lead-in, which indicates that that's only true for static, not automatic, storage.
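The distinction in the retraction can be sketched as follows (a minimal illustration, not from the thread): the zero-initialization clause applies to static storage duration only, while automatic (local) pointers start out indeterminate.

```cpp
#include <cstddef>

// Static storage duration: zero-initialized, so this starts as NULL.
int* g_static_ptr;

bool static_pointer_is_null() {
    return g_static_ptr == NULL;
}

// A local (automatic) pointer, by contrast, is not initialized at all;
// reading it before assignment is undefined behavior, which is why no
// demonstration of its garbage "value" appears here.
```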
If you don't assign NULL to pointer variables when freed then you're on your own.
|
|
|
|
|
PIEBALDconsult wrote: If you don't assign NULL to pointer variables when freed then you're on your own.
So what if I do? That would not zero any other pointers that point to the deleted object. In my snippet here[^] I did assign the pointer to NULL after deleting it, just to show that it doesn't help at all.
|
|
|
|
|
PIEBALDconsult wrote: C99 and C# initialize pointers (references) to NULL
Could you give a link to a reference for C99? I admit I have never heard of this.
|
|
|
|
|
To chime in with the biggest noob problem, at least in the VB.NET forum: what's the default for a newly created but uninitialized pointer? I haven't written any C++ code in quite a while, but I believe it was 0.
|
|
|
|
|
Dave Kreskowiak wrote: what's the default for a newly created, but uninitialized pointer??
Whatever happens to be in that memory location at the time the pointer is defined.
|
|
|
|
|
I didn't see it in the spec and I don't have an up-to-date C++ compiler handy.
But I expect that if it isn't NULL already it soon will be.
|
|
|
|
|
PIEBALDconsult wrote: But I expect that if it isn't NULL already it soon will be.
No it won't. The new standard is ready to be adopted and there is nothing about it that would mandate such behavior. None of the compilers I used recently (MS and GNU) automatically initialize local variables.
|
|
|
|
|
|
I am sure you have some point you want to prove here, but it escapes me.
If you are saying that checking pointers for NULL is going to make your programs more robust, I think I already demonstrated that you are wrong. You can check for NULL all you want and still have an access violation.
|
|
|
|
|
Nemanja Trifunovic wrote: I think I already demonstrated
No, while your point of view is valid, it carries little weight with us, as ours seem to with you. A program that checks for NULL pointers is (likely) more robust; we're not saying it will never crash, we're just saying it won't crash on something as simple to test as a NULL pointer, or if it does, it should at least give a clearer indication of what went wrong.
Corrie ten Boom[^] didn't save all the Jews in Holland, but she did what she could. Doing nothing because you can't do everything is not a way to go through life.
|
|
|
|
|
But in that example you didn't set the pointer to NULL and you know it.
A called method should not free something that was passed in, or if it's expected to, you'll need double indirection.
Find a better example, that one's a coding horror on its own.
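The double indirection suggested above can be sketched like this (a hypothetical `destroy` function, not code from the thread): a function documented to free its argument takes a pointer to the pointer, so it can null the caller's variable rather than a local copy.

```cpp
#include <cstddef>

struct MyType { int x; };

// Documented to free its argument; takes MyType** so the caller's
// pointer really does end up NULL afterward.
void destroy(MyType** object) {
    if (object != NULL && *object != NULL) {
        delete *object;
        *object = NULL;   // nulls the caller's variable, not a copy
    }
}
```

This fixes the specific flaw in the earlier snippet for that one pointer, though, as noted elsewhere in the thread, it still cannot null any other pointers to the same object.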
|
|
|
|
|
PIEBALDconsult wrote: But in that example you didn't set the pointer to NULL and you know it.
Of course I did. Just after deleting it. I didn't set the other pointers that point to the same object to NULL, because that is impossible to do, and that was the point of my sample.
PIEBALDconsult wrote: A called method should not free something that was passed in, or if it's expected to, you'll need double indirection.
Find a better example, that one's a coding horror on its own.
Of course it is a horror - and you can't protect from such horrors by checking whether a pointer is NULL. That's all I am trying to point out here.
|
|
|
|
|
That really made my day, and I hope the project is not in C.
|
|
|
|
|
Finding null pointer risks is not easy at all, even with careful rereading. I can help find (nearly!) all of them in C#, VB6, and Java. For this risk and lots of others, have a look at http://d.cr.free.fr/indexen.html
|
|
|
|
|
Pointers should be explicitly checked for null if there is a realistic scenario in which they could be null. For example, ptr = malloc(1024); will set ptr to null if the system can't allocate 1024 bytes for it. If the program isn't allocating much memory, such a scenario may be unlikely, but it is not unrealistic.
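This realistic case can be sketched as follows (a hypothetical wrapper, written here in C++ but using the same malloc idiom): malloc reports failure by returning NULL, so this is a check that can actually fire.

```cpp
#include <cstdlib>

// Returns a buffer of n bytes, or NULL if the allocation failed.
char* allocate_buffer(std::size_t n) {
    char* ptr = static_cast<char*>(std::malloc(n));
    if (ptr == NULL) {
        // Allocation failed: recover, fall back, or report the error.
        return NULL;
    }
    return ptr;
}
```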
On the other hand, in something like:
{
    int arr[5];
    int *p;
    int i;

    p = arr;
    for (i = 0; i < 5; i++)
        *p++ = i;
}
there is no realistic way that p is ever going to be null. It simply can't happen.
|
|
|
|
|
I think it's more a question of, if you write a function (perhaps a library function) that takes one or more pointers, do you check them for null or let them blow up? And why?
|
|
|
|
|
PIEBALDconsult wrote: I think it's more a question of, if you write a function (perhaps a library function) that takes one or more pointers, do you check them for null or let them blow up? And why?
IMHO, the biggest questions would be:
- Is the operation in the null-pointer situation defined by the interface standard?
- Would the null-pointer situation have a logical meaning (e.g. it may be useful for a function that reads data from a stream to have an option to simply throw away some data; allowing the function to take a null pointer for such usage may be more elegant than requiring the use of a separate function)?
- Are there any circumstances that could cause a null pointer to be passed in accidentally?
- How would the probable consequence of passing in a null pointer compare with the best result one could achieve?
Incidentally, I found myself annoyed at the design of some TCP libraries that returned the same failure code when a non-blocking write was attempted on a port whose buffer was full as when it was attempted on a port that was closed. The full-buffer case needs to be easily distinguishable from the closed-port case, since one will want to wait in the former case but not the latter. In my own libraries, I allow a write to a closed port to immediately return 'success', but then check whether the port is actually open. If the port closed unexpectedly, the data I'm sending will vanish into the aether, but the program won't crash. I may not know how much data vanished, but often (1) it won't matter, and (2) it may be impossible to know for certain if some packets get sent but never acked. A closed port isn't quite the same thing as a null pointer, but I think some of the philosophical arguments are similar.
|
|
|
|
|
Use assertions, as they are useful for checking for NULL pointers: Debug.Assert in C#.
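A rough C/C++ analogue of the Debug.Assert suggestion above (the function is hypothetical), using assert from <cassert>: the check fires in debug builds and is compiled out entirely when NDEBUG is defined.

```cpp
#include <cassert>
#include <cstddef>

// Debug-only NULL guard: in release builds (NDEBUG) the assert vanishes.
int length_of(const char* s) {
    assert(s != NULL && "length_of: s must not be NULL");
    int n = 0;
    while (s[n] != '\0')
        ++n;
    return n;
}
```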
Thanks and Rgds,
VamsiDhar.MBC
Software Engineer.
|
|
|
|
|
You are using pointers???
|
|
|
|