I have a C# and Silverlight project with two combo boxes (CB1, CB2)
-CB1 is initialized and populated with a few items
(e.g. car, plane, motorcycle)
-User selects an item (e.g. car) and CB2 gets automatically populated
(e.g. model, year, color) via cb1_SelectionChanged()
-User selects an item from CB2 (e.g. model) and a datagrid is populated accordingly.
Once I have selected an item from CB2 and the datagrid is populated correctly, attempting to select a different item from CB1 generates an "Object reference not set to an instance of an object" exception.
To work around it, I just select a different item and then go back to the original one; that works, but it's really annoying. :)
Any ideas will be greatly appreciated.
ComboBoxes send two SelectionChanged events when the user changes the selection: in the first event, the previously selected item is un-selected. At that moment no item is selected, comboBox1.SelectedItem is null, and consequently comboBox1.SelectedItem.ToString() throws a NullReferenceException.
Then the new item gets selected and SelectedItem has a value again.
Solution: check SelectedItem for null.
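A minimal sketch of that guard, assuming the ComboBoxes are named cb1/cb2 in XAML and the handler is wired to cb1's SelectionChanged event (the names are illustrative, taken from the question above):

```csharp
private void cb1_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    // First event of the pair: the old item was just un-selected,
    // so SelectedItem is null -- bail out instead of dereferencing it.
    if (cb1.SelectedItem == null)
        return;

    string selected = cb1.SelectedItem.ToString();
    // ... repopulate cb2 based on 'selected' ...
}
```

The second event of the pair then arrives with SelectedItem set, and the handler runs normally.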
I understand you aren't going to be able to give me a specific answer, but I am kind of stunned at this point...
I work on a library that has competitors. Our APIs are pretty similar. I call MyLibrary.MethodA() 1,000,000 times. It takes 1000ms. I call MyCompetitor.MethodA() 1,000,000 times and it takes 150ms. Both in Debug / Any CPU.
I'm trying to figure out where my overhead is. So in MethodA() I tried returning null right at the beginning. That alone was already 16ms. MethodA calls an internal method, which calls another one, etc. Basically, the only thing I'm doing in the top-level methods is checking the params for null. Then I lock a dictionary and do a TryGetValue. At the point where I start getting to the method that does the real work, I'm already at 195ms, and I'm returning null.
How is that even possible? The other guy does the actual work in 150ms, while I'm at 195ms returning null from an essentially empty method???
For other CPians to help and give you useful suggestions, you have to give more context: what is the method doing, what algorithm does it use, does it access the network or a DBMS? Does it use LINQ? Reflection? C++/CLI interop? Etc...
Yes, as I said, I haven't gotten to optimizing the algorithm itself yet .
Just calling EMPTY methods in my DLL is slower than calling the other guy's FULL method. That's where I'm confused.
His FULL method is 150ms for 1,000,000x.
I'm just calling 4 pretty empty methods. Method1 does nothing but call Method2 (a generic method calling the non generic version). Method2 calls Method3 which just checks that the param is not null and then calls Method4. Method4 locks on a dictionary and calls TryGetValue() (Key = Type, Value = Info class). If it can't find the Info class in the dictionary, it news one up. Now it calls Method5. At this point, I haven't even done the work yet, just getting set up and I'm already at 111ms. The other guy is already done completely at 150ms. That's what I'm puzzled about. Haven't even gotten to any code that I can optimize yet .
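One way to see which hop in that chain costs the time is to benchmark the suspect pattern in isolation. A hypothetical probe (names and the lock-plus-TryGetValue shape mirror the description above; this is a sketch, not the actual library code) timing 1,000,000 locked dictionary lookups with Stopwatch:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class Probe
{
    static readonly Dictionary<Type, string> Cache = new Dictionary<Type, string>();
    static readonly object Gate = new object();

    // Mimics Method4: take a lock, then TryGetValue on a Type key.
    static string Lookup(Type key)
    {
        lock (Gate)
        {
            string info;
            Cache.TryGetValue(key, out info);
            return info;
        }
    }

    static void Main()
    {
        const int N = 1000000;
        Lookup(typeof(int)); // warm up the JIT before timing

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            Lookup(typeof(int));
        sw.Stop();

        Console.WriteLine("lock + TryGetValue x" + N + ": "
            + sw.ElapsedMilliseconds + " ms");
    }
}
```

Timing each layer (Method1 through Method4) the same way shows whether the cost is in the call chain itself or concentrated in the locked lookup.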
He might be caching some data in memory - it could be slower when you call the method only once, but surely speeds up when the method is called a million times.
Are you sure that the competitor's method is thread-safe?
If I understand this correctly, you are calling your method 1 million times and it takes one second. So 1 microsecond per call. That doesn't sound like a lot to me, but without knowing what it does, who knows.
At such speeds, dictionary lookups start to become quite heavy duty operations. I would think locking the dictionary would start to have a bad impact as well (I've got the time 80ns in my head, but that's probably wrong).
Remember that for each look up, the dictionary might call GetHashCode() and Equals() on multiple objects so you need to make sure that your implementations of these things are very efficient.
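For illustration only: in the scenario above the key is System.Type, whose GetHashCode is already cheap, but for a composite key of your own, caching the hash at construction time keeps the per-lookup cost down. This class and its fields are hypothetical:

```csharp
using System;

// Immutable composite key that computes its hash exactly once,
// so repeated dictionary lookups pay only a field read.
sealed class CompositeKey : IEquatable<CompositeKey>
{
    private readonly string _name;
    private readonly int _id;
    private readonly int _hash; // cached in the constructor

    public CompositeKey(string name, int id)
    {
        _name = name;
        _id = id;
        _hash = (name == null ? 0 : name.GetHashCode()) * 397 ^ id;
    }

    public bool Equals(CompositeKey other)
    {
        // Cheapest comparison first.
        return other != null && _id == other._id && _name == other._name;
    }

    public override bool Equals(object obj) { return Equals(obj as CompositeKey); }
    public override int GetHashCode() { return _hash; }
}
```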
Bin the dictionary and switch to non-locking synchronization if possible. Also, get Reflector or something on the competitor's assembly and see what's happening.
The entire delegate would be executed within the scope of a lock, as with your non-generic dictionary code. However, the lock would be more granular, based on the number of buckets and the concurrency level of the collection.
If you wanted to execute the "other stuff" outside of the lock, your code would look like:
if (!concurrentDict.TryGetValue(theKey, out theValue))
{
    var theNewObject = NewUpAnObject(theKey);
    theValue = concurrentDict.GetOrAdd(theKey, theNewObject);

    // Or:
    // theValue = concurrentDict.GetOrAdd(theKey, key => NewUpAnObject(key));
}
In this case, DoSomeOtherStuff, DoAFewMoreThings and NewUpAnObject would run outside of any lock. However, the insert to the dictionary would not suffer from a race condition, as the class is specifically written to cope with this type of code.
There are many, many areas that could contribute. Your competitor might delegate all calls to background threads, for instance, giving the appearance that they are processing quickly. They might be using highly optimised native code behind the scenes (after all, just because you've compiled to debug, it doesn't mean your competitor has shipped you a debug version of their code, so you're comparing apples with the Mona Lisa).
You mean compiling it to x86 or x64 rather than Any CPU? It just seems like I've got overhead calling empty functions. I'm not sure they can kick off background threads, as they need to return an object from the method.
No, I mean that they might be using C++ (for instance). Also, if you compile your code as a Release build and then compare against theirs, you should get more realistic timings. I guarantee you that their version is a release build.
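A quick sanity check before trusting any timings is to confirm that the assembly under test was actually JIT-optimized (i.e. a Release build) and that no debugger is attached. A minimal sketch:

```csharp
using System;
using System.Diagnostics;

class BuildCheck
{
    static void Main()
    {
        // Inspect this assembly; point at the library's assembly instead
        // (e.g. typeof(SomeLibraryType).Assembly) to check a referenced DLL.
        var asm = typeof(BuildCheck).Assembly;
        var dbg = (DebuggableAttribute)Attribute.GetCustomAttribute(
            asm, typeof(DebuggableAttribute));

        // Debug builds carry a DebuggableAttribute that disables JIT optimization.
        bool optimized = dbg == null || !dbg.IsJITOptimizerDisabled;
        Console.WriteLine("JIT optimizer enabled: " + optimized);
        Console.WriteLine("Debugger attached:     " + Debugger.IsAttached);
    }
}
```

If either check comes back the wrong way, the numbers you measure will say more about the build configuration than about the library.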