|
Member 4487083 wrote: the compiler will complain
Except when using booleans.
|
qualitychecker wrote: More safe and maintainable code:
------------------------------------
bool res = false;
if (null == dt) { }
else if (null == dt.Rows) { }
else if (dt.Rows.Count < 0) { }
else res = (1 == (int)dt.Rows[0]["Number"]);
return res;
------------------------------------
Oops - hate to say this, but if dt.Rows.Count == 0 - i.e. the table is empty - then that code will throw an index-out-of-range exception. You meant else if (dt.Rows.Count <= 0), right?
-- T
|
Would the same work for a switch statement?
switch (null)
{
    case dt:
        { } break;
    case dt.rows:
        { } break;
    default:
        { }
}
Or how about this?
switch (true)
{
    case (dt == null):
        { } break;
    case (dt.rows == null):
        { } break;
    case (dt.rows[0]["number"] == 0):
        { } break;
    default:
        { } break;
}
Maybe nest either one in a do loop till the computer gets its sh*t straight.
Please tell me why this is wrong, as I am just learning.
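Neither switch will compile: in the C# compilers of this era, case labels must be compile-time constants, and dt, dt.rows, and (dt == null) are not. The idiomatic shape for this kind of check is a chain of early returns. A sketch, assuming dt is a System.Data.DataTable with an int column named "Number" as in the earlier posts:

```csharp
using System;
using System.Data;

static class GuardClauses
{
    // The check from earlier in the thread, written as early returns.
    public static bool IsNumberOne(DataTable dt)
    {
        if (dt == null) return false;           // no table at all
        if (dt.Rows.Count == 0) return false;   // table exists but is empty
        return (int)dt.Rows[0]["Number"] == 1;  // the actual test
    }
}
```

(C# 7 later added pattern matching with case ... when guards, which can express the switch (true) idea, but that isn't available on the compilers discussed here.)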
|
That's fine code: logical development (right order), and it's easy to understand and maintain.
Greetings from Germany
|
This code reminds me of one developer who worked under me; according to her, every if has to have an else, and you can't use an if alone.
|
Those are the people you don't have working for you for long...one would hope.
|
Yeah. It's been two years. I asked the HR department to improve the filtering & recruitment process, but as they are looking for cheap labor, I don't think this will ever happen.
|
That's not cheap, though; it's way more expensive in the long run if they aren't competent: more fixing bugs, and longer development time.
And what the heck kind of code would they put in the else if you don't technically need an else?
|
GibbleCH wrote: And what the heck kind of code would they put in the else if you don't technically need an else?
I have no idea, man... Thank God I'm in a different company now.
|
With thinking like that you'll never make it in HR
|
There goes my 7 year plan
|
Did you hear about the Zune bug (a.k.a. the Y2K9 bug)? It was missing an else.
Greetings from Germany
|
There's nothing "horrible" in this code. Industry is all about making something that works and that's easily understandable by your co-workers.
Sure, this is going over the top, and I'd do something that was suggested above, like:
if (dt == null)
    return false;
if (dt.Rows.Count == 0)
    return false;
return (dt.Rows[0]["Number"].ToString() == "1");
But, seriously, who cares? It's readable and it will run as fast as
return dt != null && dt.Rows.Count > 0 && (int)dt.Rows[0]["Number"] == 1;
|
Other than the "code trimming" for fewer cycles that has already been pointed out, I also suggest not using string literals like that.
They break the clients when modified.
static readonly string colName = "Number";
static readonly string colVal = "1";
--------------------------
if (dt != null && dt.Rows.Count > 0)
{
    return dt.Rows[0][colName].ToString().Equals(colVal);
}
else
    return false;
|
vnike wrote: They break the clients when modified.
Only if the client is a different assembly. I would imagine that column names and indexes would be private to a class, so I don't see any issue with using const there.
|
Our marketing department hired a new web developer, and they were given a task to write some code for our department. Here are a couple excerpts:
namespace OMG
{
    static class TranslationClass
    {
        static ArrayList nonTranslatable = new ArrayList();
        public static ArrayList NonTranslatable
        {
            get
            {
                nonTranslatable.Add("1st String");
                nonTranslatable.Add("2nd String");
                nonTranslatable.Add("30th String");
                return nonTranslatable;
            }
        }
        private SortedList translationDictionary = new SortedList();
        public void TranslateNode(XmlNode node)
        {
            foreach (string s in NonTranslatable)
            {
                if (node.Attributes["Publish"].InnerText.Trim().Contains(s))
                {
                    return;
                }
            }
            foreach (DictionaryEntry de in translationDictionary)
            {
                if (de.Key.ToString().Equals(node.Attributes["Publish"].InnerText.Trim()))
                    node.Attributes["Publish"].InnerText
                        = translationDictionary[node.Attributes["Publish"].InnerText.Trim()].ToString();
            }
        }
    }
}
When I asked them to run their code on the sample dataset, I was told they'd need to run it over-night. Looking at their code, I could see why. So I wrote my own version of the utility, and timed them versus each other. Their program took an hour and a quarter to process a 2MB text file; mine took 2 seconds. The biggest problem was that every time that NonTranslatable was used, new, redundant elements were being added to it, and the loop took longer and longer to run. Also they were looping through the dictionary instead of using something like ContainsKey().
Argh!
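For comparison, a sketch of the faster shape described in that post: build the skip list once, trim once, and use a dictionary lookup instead of scanning every entry. The Translator/Translate names and the constructor are illustrative, not from the original code:

```csharp
using System;
using System.Collections.Generic;

class Translator
{
    // Built once; nothing is re-added on every access (unlike the property above).
    static readonly HashSet<string> NonTranslatable =
        new HashSet<string> { "1st String", "2nd String", "30th String" };

    readonly Dictionary<string, string> translations;

    public Translator(Dictionary<string, string> translations)
    {
        this.translations = translations;
    }

    // Returns the translation for text, or text unchanged.
    public string Translate(string text)
    {
        string key = text.Trim();              // trim once, up front
        foreach (string s in NonTranslatable)
            if (key.Contains(s)) return text;  // protected string: leave it alone

        // O(1) hash lookup instead of looping over every dictionary entry.
        return translations.TryGetValue(key, out string t) ? t : text;
    }
}
```

Because NonTranslatable is initialized once and TryGetValue hashes straight to the entry, the per-node cost stays flat instead of growing with every call.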
|
Ouch! If you fix those two problems (repetitive additions and looping), how much faster is it?
And how about removing the .ToString and .Equals?
If you're using .NET 3.5, how about a HashSet?
|
Or better, if you're going to use 3.5, a List<string> and some LINQ.
|
I think HashSet is better than List in this case.
0) No duplicates
1) Hash rather than iterate
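A minimal sketch of those two points (the Demo name and strings are illustrative):

```csharp
using System;
using System.Collections.Generic;

static class Demo
{
    public static (int listCount, int setCount) Compare()
    {
        // A List<string> happily keeps duplicates...
        var list = new List<string> { "1st String", "1st String" };
        // ...while a HashSet<string> silently drops them.
        var set = new HashSet<string> { "1st String", "1st String" };
        // set.Contains(...) is also a hash lookup rather than a linear scan.
        return (list.Count, set.Count);
    }
}
```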
|
Simply putting an if (nonTranslatable.Count == 0) block around the code in the NonTranslatable property took the run time down from >75 minutes to ~30 seconds. (Fun fact: During the 75 minutes of processing, those repeated Add() calls caused the program to take up about 25MB of extra memory. That's a lot of strings to loop through and compare repeatedly.)
PIEBALDconsult wrote: And how about removing the .ToString and .Equals ?
Not to mention all the calls to Trim() inside the loop. Or does the runtime optimize those out as invariant?
I ended up just dumping all their code and using my own. To add another wrinkle, they wrote it in VS 2008, and we are still on 2005 until we get this release out, so I couldn't even build their solution.
Besides, there were some other issues. Not only was it still 10x slower than my rewrite, but the output wasn't quite correct. The problem is that there was also a non-XML text file that needed to be parsed; imagine taking a Windows .rc file, extracting all the strings that can be translated from it, and putting them into an Excel file. They weren't being careful enough with their regular expressions, so some things were getting missed and some extra things were being found.
Using .net 2.0 and a Dictionary<string,> worked fine for me. By making the regular expressions more correct, I didn't need the NonTranslatable array at all. (The two loops I showed were actually in two separate routines, but I wanted to keep my post relatively compact.)
I've vented - I feel better now.
-- T
|
Well, I think marketing are getting exactly what they deserve. The slower they work, the better.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
|
Chris Meech wrote: The slower they work, the better.
Because there are soooo maaaany ways to improve! Imagine if the customer got used to the "2.5h" routine, and after a few weeks (or, even better, months) you showed him your "2sec" one? WOW!
--
"My software never has bugs. It just develops random features."
|
You're doing it wrong. First you give them the 2.5h routine. Then you give them a free update that trims it to an even 2 hours. After that you offer to sell them the latest version that drops that time to about 15 minutes. The rest of the speed improvement you save for a later release, when you can add it in to make up for the lack of genuine improvements.
I guess I've spent way too much time around marketing executives for my own good.
|
I've written some code that turned out to be dog slow when given sufficiently-large data sets. Usually I've tried to fix it before the customer could try it and complain. It's not always easy, though, to know what things are going to pose a problem until one actually codes them. Obviously O(n^2) algorithms are likely to cause problems unless 'n' is bounded to a small value, but it may not be possible to recognize an algorithm which takes time c+kn as an obvious bottleneck without testing it on large data sets. For example, if c==1000k the program may seem to take the same amount of time with n=10 and n=100, but be unacceptably slow with n=1,000,000. It may well be that the n=100 case is much more common, but if the n=1,000,000 case could occur the system needs to be able to deal with it without causing timeouts.
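Putting numbers on the c == 1000k case with a toy model (the constants are the post's hypothetical, not measurements):

```csharp
using System;

static class Scaling
{
    // Model: total time = c + k*n, where the fixed setup cost c
    // dwarfs the per-item cost k (here c == 1000k).
    public static double Time(double c, double k, int n) => c + k * n;
}
```

Time(1000, 1, 10) is 1010 and Time(1000, 1, 100) is 1100, close enough to look identical in a quick test, while Time(1000, 1, 1_000_000) is 1,001,000, roughly a thousand times slower.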
|
Yeah, the code is terrible, but maybe it was an afterthought. Everybody writes stupid code once in a while. Maybe the coder just didn't care at the moment.
I didn't get any requirements for the signature
|