|
It can be super frustrating if they're finding it difficult to grasp.
|
|
|
|
|
than the tech, code, or requirements - IMHO.
|
|
|
|
|
Pay attention to separating social and technical issues; people often mix the two.
Press F1 for help or google it.
Greetings from Germany
|
|
|
|
|
... as I am in the lucky situation that we do reviews and have good, manageable coding guidelines in place, which are _respected by each and every team member_.
So at least in the ground structure of the solution/project folders, the naming of classes and members, and adherence to patterns (events, singletons, factories, etc.), we all find code that looks "close to our own code".
We even have code formatting rules in place through the Android/Visual Studio settings, and all files get auto-formatted ("Format File") by those rules, so there is no "member A uses two spaces for indentation while another uses one tab and the next uses a three-character tab".
Only the iOS/Apple crew falls outside this scheme, as Objective-C and Xcode are a separate universe. But iOS coding is done through pair programming over long time frames, so each of those devs knows the entire code anyway - they found their way too.
Users... USERS are a problem
|
|
|
|
|
Quote: have good and manageable coding guidelines in place, which are _respected by each and every team member_ Is that really so?
'Good and manageable' means, to some people, inventing countless rules about how to write things and zealously adding new rules until every corner case is covered. That may be very manageable (== easy to check), but it does little to avoid the real risks in the code. As if pretty writing and formatting would magically make all other problems go away.
And then there are the others who politely play along, because they know what kind of holy war would follow if they openly questioned the use of these things.
|
|
|
|
|
No, it's a matter of whether a dev wants to "push his own personal style and personality" into the code, or whether the team agrees on a uniform way of doing things and sets the personal "but I do it this way!" aside.
There are plenty of Microsoft guidelines for .NET coding, including several patterns; we have code analysis active, warnings are treated as errors, and at least the main pitfalls covered by the Microsoft coding rules are avoided in our programs. Yes, we have zero-warning production builds with code analysis active.
And NO, this is of course not the way to avoid bugs or problems. But that is not the topic here. The topic is "other people's code", not "how coding guidelines can help you avoid bugs".
And besides that, we just agreed on some schemas and how-tos...
it works.
|
|
|
|
|
That's good. I have seen it a few times that everything revolved around more and more rules and enforcing them. Code quality, in every possible sense, was secondary as long as the rules were observed. What a waste of time and work.
|
|
|
|
|
Agreed. I was once in such a company, one that put more energy into enforcing its own rules (which often collided with publicly known how-tos) than into working code.
Coding guidelines can be a huge waste of time if done wrong. We agreed on day one to take the Microsoft rules and put our personal preferences in the second row.
And you know what? By now many of us fully agree that we have "increased our skills" since the code analysis has been in place, as some of the warnings really point to hard-to-track bugs when the subject of the warning comes true.
Our most beloved warning is the "call of a virtual method in the constructor call chain". We thought about it, and at first it was hard to understand what Microsoft meant by that warning. We googled a bit and found some cases where this can indeed lead to a horrific bug. This is one of the key warnings where devs felt that they advanced.
|
|
|
|
|
Wait till you inherit a source project from a college intern. Rewriting the entire project from scratch is probably faster than trying to keep it afloat.
I have no problem maintaining team members' code.
|
|
|
|
|
Depends on who that someone else is, but overall it's absolute horror.
I'm quite good with users though; I must be one of the few.
My employers even said it in a performance review at my last job, where I had a lot of customer contact: "You are really very good with customers, they pretty much love you."
Unfortunately, it went on: "...We can't say the same about your colleagues."
|
|
|
|
|
Sander Rossel wrote: Depends on who that someone else is, but overall it's absolute horror So you are one of those lone cowboy programmers who can't work together with others?
|
|
|
|
|
I can, but I don't generally enjoy it.
Perhaps if I worked with people who didn't write functions with 1000+ lines and 10 nested if-elses...
|
|
|
|
|
Sander Rossel wrote: Perhaps if I worked with people who didn't write functions with 1000+ lines and 10 nested if-elses... Why not? I actually have a (yet unfinished) program where that would have been a good choice, believe it or not.
|
|
|
|
|
I'm going for "not".
Us mere mortals are not made for understanding 1000 lines of code at a time.
Unless, maybe, you are working in some procedural language rather than OO.
But even then...
|
|
|
|
|
Then you will end up with a very maintainable program that has no chance of running on the target system. Clean and useless at the same time.
It's a C program that will run on a small 8-bit processor with only 4K of memory. If you follow the rules, this tiny memory is quickly eaten up just by pushing parameters onto the stack and calling functions. If you can reduce that to a bare minimum, you can save about 30%-40% of the memory, which then becomes free for additional functionality. That also means writing good old spaghetti code.
Granted, such problems exist only on small microcontrollers, but sometimes bad code has its reasons.
|
|
|
|
|
So that's a scenario where you'll want to use a structured architecture rather than an OO one, or better still a functional model. That's not a big deal, is completely within the "rules", and doesn't need to be implemented as a steaming pile of garbage. But whatever, we all see things best through our individual lenses.
On the other side of the spectrum, i.e. web applications, which I know Sander works with, a rigid 1000-line block with 10 nested loops is an exceptionally bad idea, because while that might be dandy for a single instance running on a microcontroller, it will not scale for hundreds or thousands of users at once, and it is much more likely to bring the whole system down than almost any OO implementation.
More to the point, that sort of issue is common in the realms where PHP is considered a first-class programming language.
"There are three kinds of lies: lies, damned lies and statistics."
- Benjamin Disraeli
|
|
|
|
|
Nathan Minier wrote: structured
C is a structured language, but 8-bit processors with limited memory are not well suited for C.
Nathan Minier wrote: functional model Good question whether or not such languages are available, but I would have another alternative: FORTH. Now that's a weird one. It can't make up its mind whether it's a programming language or an operating system, or whether it is a compiler or an interpreter. If it were not for the RPN notation, I would say this is a hype that has been waiting to happen for more than 40 years.
By the way, would a functional language not have to depend on an ever-growing call stack? With only 4K of memory this can quickly result in the stack growing down into the code. We don't have any fancy memory paging or protection mechanisms here.
Nathan Minier wrote: doesn't need to be implemented as a steaming pile Our forefathers carefully crafted their programs to squeeze as much as possible into their tiny RAM. With this small scope that was acceptable and still understandable enough. 'Steaming pile' is a little disrespectful, wouldn't you say?
Today we usually can afford to use a little more memory, and the scope has also widened a little, so the traditional methods serve here as a reminder that these things are not carved in stone, either way.
|
|
|
|
|
You're mixing technique with technology. I'm just discussing technique, since that's what this particular thread is about. You can implement structured or functional patterns in ASM if you really want to; it's all in how you decompose the design.
CodeWraith wrote: 'Steaming pile' is a little disrespectful, wouldn't you say?
No, I wouldn't say that. You're the one who is trying to make the case for unmaintainable spaghetti code and claiming that's how to keep it old school. I think that demonstrates far less respect.
|
|
|
|
|
Nathan Minier wrote: You're mixing technique with technology. I'm just discussing technique, since that's what this particular thread is about. You can implement structured or functional patterns in ASM if you really want to; it's all in how you decompose the design. That's actually what I was trying to do. Old programs used to be programmed in machine language and tweaked until they performed well enough and at the same time still fit into the available memory.
I wanted to introduce libraries to stop reinventing everything every time and finally have a base for reusing as much code as possible. My first try was still with machine code and had the following results:
1) A function calling convention requires passing parameters and costs quite a few instructions with every call. On a small processor you quickly reach a situation where a good part of your code (30%-40%) consists of pushing around parameters and calling functions. The result: you have that much less space to actually implement any functionality. Also, the stack is used much more than before; even if parameters are passed in registers, this quickly eats up your memory. Recursion is almost out of the question. This gets even worse if a higher-level language uses the stack to pass the parameters.
2) One word: math. Doing more complex calculations on an 8-bit processor is a pain. You spend more time moving bytes into and out of the accumulator than actually calculating anything, and the result is very hard to read and maintain. A higher-level language can help a lot here: just type the calculation you want and leave it to the compiler to make the processor do exactly that.
3) Dynamic memory allocation was practically unknown in the old days. Practically everything was done at global memory addresses - global variables, so to say. I'm sure you love them. But where to put the memory block available for allocation? And how much memory? The only place would have been between the end of the code and the end of the stack (which grows down from the highest memory location). How much? Probably nothing, since our code gets longer and longer while the stack grows down further than ever.
So what do you do when you have excellent methods that you can't afford because they cost too much memory?
|
|
|
|
|
I hoped you'd say that, because then I can agree!
But we're working in C# ASP.NET for a web application running on a server.
I think we can agree that 1000-line functions are not the way to go in that scenario?
Also, it's not like the 1000 lines are very memory efficient (or efficient in any other way)!
|
|
|
|
|
Sander Rossel wrote: Also, it's not like the 1000 lines are very memory efficient (or efficient in any other way)! Sure they are: you save a lot of memory by not having to call functions, not having to pass parameters, and also quite a bit of space on the stack. On a modern system with plenty of memory that does not help very much and certainly does not justify all the disadvantages, but it does have this one advantage (as long as there is no code redundancy).
Sander Rossel wrote: I think we can agree that 1000 line functions are not the way to go in that scenario?
Almost. I have written such functions and with good reasons, but generally I would advise against it.
|
|
|
|
|
CodeWraith wrote: Sander Rossel wrote: Also, it's not like the 1000 lines are very memory efficient (or efficient in any other way)! Sure they are, you save a lot of memory by not having to call functions, not having to pass parameters and also quite a bit of space on the stack. On a modern system with plenty of memory that does not help very much and does certainly not justify all the disadvantages, but it does have this one advantage (as long as there is no code redundancy). I was talking about the 1000 lines of code I have to work with.
|
|
|
|
|
Some other poor fool wrote it. Since only we two may have good reasons for doing something like this, this of course is an outrage. Have him tarred, feathered, first thrown out of the guild and subsequently out of the town as well.
|
|
|
|
|
CodeWraith wrote: Sure they are, you save a lot of memory by not having to call functions, not having to pass parameters and also quite a bit of space on the stack. That depends... Frequently, 1000 lines of code will contain similar or repeated code constructs. Factoring such parts out might save significant code space. Also, data values are frequently used only during certain stages; factoring those out might save data space as well.
I work for a company that shifted from an 8-bit processor (the 8051) to a 32-bit ARM a few years ago. Not until the porting job was done did it become clear how much code was really there just to overcome the 8-bit limitations. Once the 8-bit problems were gone, we could write far more straightforward and simple code (all in plain C). For identical functionality, the ARM code was frequently smaller. (Sure, the ARM 'Thumb' instruction subset plays an important part in that.) Data size did grow after porting, but not that much.
We have never looked back. 8-bit processors may be cheap as chips, but they are extremely costly in code development and maintenance; you spend a large part of your energy on overcoming the limitations of the processor. Migrating to even the smallest ARMs (like the Cortex-M0) will give you freedom you never felt before. Those tiny ARMs are also quite cheap, so the cost argument in favor of 8-bit CPUs more or less vanishes.
Of course, I am talking about systems that can be organized around a general MPU with on-chip peripherals. There may be cases where the major part of the chip area is some very specialized logic design that would cost a fortune to move to another chip. We did move a good deal of additional logic, at significant cost, but again: that gave us a great opportunity to clean up the logic design. If you cannot possibly do a similar move... I feel pity for you...
|
|
|
|
|
That's how I read the OP's comment.
|
|
|
|
|