|
More of a guideline really.
|
|
|
|
|
LOL
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
What's so hard about COL1, COL2, COL3?
/s
|
|
|
|
|
As punishment for that you get sent to Q&A for a week!
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
|
|
|
|
If they ask for codez now I'll use those variable names!
I'm pretty certain I saw something like those in a BPCS system once... I'm still trying to erase that memory.
|
|
|
|
|
I (and I suspect many other CPers) have inherited databases with cols called Unused253, Unused254, Unused255, all with cryptic codes being used in them.
|
|
|
|
|
If I find a col named Unused*, I feel free to delete it.
If I find a file in a temp folder, a file named *.tmp or *.temp, I feel free to delete it. (Most certainly after a reboot when the application creating it certainly is not running.)
Many years ago, I worked with an OS that provided temp files in an elegant way: any nameless file was deleted by the file system when closed. You could rename an open file, e.g. from nameless to a random name, to hand it over to some other process (the file name transferred through a pipe); that process might open it and rename it back to nameless, so it would be deleted when it was closed. I think that was a much more elegant solution than all these temp directories everywhere.
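The same idea survives in modern APIs. As a rough sketch (not the OS described above), Python's standard tempfile.TemporaryFile creates a file with no directory entry where the platform allows it, so the OS reclaims it the moment it is closed:

```python
import tempfile

# A file with (where possible) no directory entry: it exists only while
# open and is reclaimed by the OS as soon as it is closed -- no
# temp-directory litter to clean up after a crash or reboot.
with tempfile.TemporaryFile(mode="w+") as scratch:
    scratch.write("intermediate results")
    scratch.seek(0)
    contents = scratch.read()

print(contents)  # the data was usable for as long as the file was open
```

On Linux this uses O_TMPFILE when available, which is essentially the "nameless file" trick: the kernel deletes the inode when the last handle goes away.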
|
|
|
|
|
I once worked on a site where they had configured MS-Word options to do autosaves to the TEMP folder. They also had the login startup process (AUTOEXEC.BAT) clear the temp folder. This meant that if your session crashed, your active work would have been saved fairly recently for reloading; but when you rebooted, the saved data was removed before you could use it.
|
|
|
|
|
There is an old story from the University of Copenhagen, from the days when large data files were stored on magnetic tape, and no computers had a real time clock with battery backup. So if the system crashed (and the Univac 1108 could be crashed by a single instruction!), after reboot you had to manually set the current time. Furthermore, computers were so expensive that university computing centers served sort of like a cloud service to lots of industry and commercial customers.
This huge Univac 1108 mainframe experienced a crash, and was rebooted. The operator handling it typed in the current date and time, not noticing that he had mistyped the year, to ten years into the future. This was not discovered before the cleanup procedure was run that deleted all files that hadn't been accessed for three months ... (This was quite common practice in those days).
That is not the whole story, though: You might think "But the files were stored on magnetic tape, weren't they?" They were, but Univac had defined a format for saving space on the tape while also providing much faster, direct access: All the directory information was directly available online, on disk, without having to search the tape. The tapes held the data blocks. Only. No directory information. The cleanup operation wiped out the directory information on the disk.
So they were left with a million blocks of non-deleted data, but with no information about which block belonged to which file. It was said that for some really important customers, the operators inspected the tapes "by hand" to lay out the puzzle and match blocks together (tape reels had an ID, and some customers could find that ID from old job listings).
Our University had two similar Univacs, so our computer department had close contact with the Copenhagen guys, and the story was spread to our students. I don't know if it ever became known to media. The actual incident took place before I became a student, some time in the early/mid 1970s.
|
|
|
|
|
We had similar fun with dates:
* When an ICL 1900 booted up, its Manual EXEC asked for the date. The year format was two digits and was based at 1900 (unlike the operating system that took two digits where 65 to 99 were 1965 to 1999, 00 to 64 were 2000 to 2064). One operator entered the full year (e.g. 1982) and the Manual EXEC took the first two digits (19) as meaning 1919. That worked fine - dates for files and exofiles (data on disks not included in the operating system's filestore) were updated with dates representing 1919. When the machine was rebooted again, the next operator correctly entered the year as 82. This caused all of the files to be deleted as they were 63 years old - way outside of their expiration period.
* On an ICL SYSTEM 25, the boot sequence expected (IIRC) a date in the format nn/nn/nnnn. It then prompted the operator with a message like 'Today is Sunday. Is that correct?'. The operator had entered 01/11/1981 which was a Sunday in dd/mm/yyyy format [1st Nov] and was also a Sunday in mm/dd/yyyy format [Jan 11th]. The prompt gave the operator confidence that he had entered the date correctly, but he hadn't used the correct format.
|
|
|
|
|
string MyName => SomeOtherName;
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Don't worry, it's just a temporary thing. Next migration will change the database column names and (dis)order will be restored in universe.
Mircea
|
|
|
|
|
So, I smiled when I read this, thinking of the "because it's easy and makes sense" answer.
But really, this is a deeper question that depends on what language you are using and what the application interaction with the user is. I'm assuming you're comparing local variable names to the DB columns; and not accessing the actual DB object.
There are cases where this may not be best practice, and maybe a case could even be made that it is never best practice at all.
|
|
|
|
|
I assume you're not using an ORM? All the ORMs I've used (EF, Linq2SQL, Dapper, etc.) have the ability to attribute the model with table and column aliases.
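Even without an ORM, plain SQL aliases let the code use readable names while the database keeps its cryptic ones. A minimal sketch with Python's built-in sqlite3, using made-up legacy column names for illustration:

```python
import sqlite3

# Hypothetical legacy table with cryptic column names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cust (cicmpy TEXT, TextField1 TEXT)")
con.execute("INSERT INTO cust VALUES ('Acme Ltd', 'net-30')")

# SQL "AS" aliases give the application readable names without
# renaming anything in the database itself.
con.row_factory = sqlite3.Row
row = con.execute(
    "SELECT cicmpy AS company_name, TextField1 AS payment_terms FROM cust"
).fetchone()

print(row["company_name"], row["payment_terms"])  # Acme Ltd net-30
```

ORM attributes (like EF's [Column("cicmpy")]) do the same mapping declaratively, once, instead of in every query.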
|
|
|
|
|
But why should one use different names? Doesn't it only make things more confusing?
|
|
|
|
|
Job insurance.
|
|
|
|
|
Because the people that once designed the database gave their columns names like cicmpy (customer info company, or something like that, taken from a very popular financial system).
Or because you're dealing with TextField1, TextField2,... TextField20, DateField1, DateField2,... DateField20, etc. "because the software should be flexible." (I have to admit, the software is flexible and works miraculously well, but their API is horrible).
|
|
|
|
|
0x01AA wrote: But why one should use different names?
Certainly one reason is that databases have absolute length limits. So if you exceed that you are out of luck.
As an example the Oracle column name length limit is 30.
Additionally when database statements are constructed there are length limits on the total length of that.
Of course, one should not normally run into that. Which is perhaps worse, because what happens is that someone uses magical coding APIs without understanding what is actually happening, and then, when it fails for that one odd-ball case, no one can figure out what is going on.
Then there are things that happen over time. Such as a table with a column named 'total' which, even though the column is still named that, actually now holds the 'DailyTotal'. The column name might be used in only one place, whereas in the code the attribute 'DailyTotal' is used in many places. So explaining what it actually is in every place becomes a problem.
|
|
|
|
|
jschell wrote: As an example the Oracle column name length limit is 30. Our company was working on a new coding standard, and the editors proposed a max line length of 80. The problem was that we had rules for how to construct (a certain class of) #define constants that frequently led to constant names exceeding 80 characters in length.
I tend to associate any system that limits identifier lengths with Fortran II and the 1950s. We should have come further today! (I know that we haven't, at least in quite a few areas.)
I tend to associate any system that won't accept identifiers of distinct, independent objects (such as files, network nodes, etc.) containing spaces, æøå or other extended alphabetic characters, with Unix / Linux. (I know that the core accepts more than a-z0-9, but that's of no use when *nix applications do not! Linux application naming restrictions are a major reason why I dislike working with Linux.)
|
|
|
|
|
trønderen wrote: I tend to associate any system that won't accept identifiers of distinct, independent objects (such as files, network nodes etc.) containing spaces, æøå or other extended alphabetic character, with Unix / Linux.
So?
Programming is, of course, not the universe but rather a subset of it. So limitations must exist.
I suspect the reason for spaces at least goes back to command line usage.
Look at the problem with Windows, which does allow spaces but which is not case-sensitive.
trønderen wrote: æøå or other extended alphabetic character
Not exactly sure about that one. It was my understanding that if your OS (all the major ones) is properly set up for the culture setting, then it supports the characters appropriate to that culture. But if I am working in US English then I do not want to see those characters in the file names.
Using a full Unicode character set has performance and data size implications. So although I might create a database field that supports Unicode for a user entered value, the text columns that are specific to the application will not be using Unicode.
|
|
|
|
|
jschell wrote: So limitations must exist. A great argument for any arbitrary limitation (when you don't have any other good justification).
No, forbidding spaces need not be a restriction. We have had computer systems allowing them since the 1970s. (I saw it first in a Xerox document processing system around 1979. Not only did it allow spaces, but file names that were a line long, like "1975-04-12 Letter to Jim Jones" - at that time, *nix file names were limited to 14 characters, DOS to 8+3.) Practically speaking, all desktop PCs have had OSes and file systems allowing it for 30+ years. There is nothing in the *nix file systems forbidding it either. The problem is that application writers in *nix environments work under self-imposed restrictions, to make the programming easier for themselves.
jschell wrote: I suspect the reason for spaces at least goes back to command line usage. Nobody justifies limitations on programming practices by referring to assembly programming environments, or use of magnetic tape. Why should a programmer refer to a CLI environment as a justification for limitations in a modern screen-oriented environment?
jschell wrote: Look at the problem with windows which does allow spaces but which does not recognize case sensitivity. I am really happy that the DOS/Windows file system designers made that deliberate design decision. In all my traditional, non-PC writing, upper and lower case have no semantic difference. A file name is semi-external to the PC itself; it is not internal like a source program, but displayed to non-computer-people.
Case sensitivity seems to be another "let us make it simpler for ourselves". Of course it takes a little programming effort to handle case independence (just like with spaces in names), but it gives the end user what is most natural to him/her. Also, comparing for absolute equality with no conversion makes it easier to win the performance race.
jschell wrote: But if I am working in US English then I do not want to see those characters in the file names. Will you accept that the Asians have the same attitude towards Latin script in file names? You won't even accept my Norwegian file names and URLs, is that so?
jschell wrote: Using a full Unicode character set has performance and data size implications. There you have it again: We cannot provide the user with what (s)he wants/needs, because it would reduce our chances of winning the performance race.
There are no inherent data size limitations (call them 'implications' if you like). Either you handle arbitrary-length strings, or you have a limit even for 7-bit ASCII. Besides: Storing English text as UTF-8 has very low space overhead.
Are you aware that Windows does not use 32-bit Unicode internally? It halves the space requirement by using UTF-16. Not too many years ago, every single language for which a written form was defined could fit all its characters into a single UTF-16 code unit. Having to resort to surrogate pairs is a very rare (and quite recent) situation.
jschell wrote: the text columns that are specific to the application will not be using Unicode. So you will not prepare for porting your application to other cultures. As you refer to text, even if the other fields are not 'user entered', I guess that they are terms that may appear e.g. as headings, or as month/weekday names, product categories or whatever. For internationalization, it is not sufficient to handle national characters in user-specified strings only.
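The encoding overheads being argued about are easy to measure. A quick Python check (the strings are just illustrative examples):

```python
# Pure ASCII text costs exactly one byte per character in UTF-8 ...
english = "Plain English text"
print(len(english), len(english.encode("utf-8")))  # 18 18

# ... while Norwegian letters cost two bytes each in UTF-8,
norsk = "æøå"
print(len(norsk), len(norsk.encode("utf-8")))  # 3 6

# and UTF-16 uses two bytes per code unit for both alphabets
# ("utf-16-le" avoids the 2-byte BOM Python's "utf-16" codec prepends).
print(len(english.encode("utf-16-le")), len(norsk.encode("utf-16-le")))  # 36 6
```

So for mostly-English data, UTF-8 storage is effectively free; it is UTF-16/UTF-32 that doubles or quadruples ASCII text.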
|
|
|
|
|
trønderen wrote: A great argument for any arbitrary limitation (when you don't have any other good justification).
I am old, but not old enough to have worked on some of the very first OSes which then led to the PC OSes.
I do know that PC/MS-DOS did not allow spaces in names.
I also know that command lines (Unix, Linux, MS/PC-DOS, early Windows) either did not support spaces in arguments at all or required special handling. From that it follows that file/directory names with spaces would not have been ideal.
trønderen wrote: No, forbidding spaces "must" not be a restriction
I didn't say that.
trønderen wrote: to make the programming easier for themselves.
No idea what your life is like but in my life I work for businesses. And the goal is to produce applications that meet many of the needs and desires of customers. But only when it is cost effective to do so. So even if one customer wants something specific unless they are willing to pay for exactly that (and that almost never happens) and unless it does not unduly impact the maintenance (ongoing cost) and other customers they do not get it.
I have seen no indication that this has not been true for far longer than 30 years.
trønderen wrote: I am really happy that the DOS/Windows file system designers made that deliberate design decision.
Just so you know, IBM initially approached Microsoft for their BASIC language, and Microsoft bought (licensed) an operating system to meet IBM's requested needs. So it is certainly not Microsoft's fault.
trønderen wrote: Will you accept that the Asians have the same attitude towards Latin script in file names? You won't even accept my Norwegian file names and URLs, is that so?
That has nothing to do with what I said. Seems like you are even trying to denigrate me.
I have created and supported applications that are used all over the world. In Japan. In Korea. In Europe. Not sure about Norway, but certainly in France, Germany, Italy and Finland. Even in Russia, although I don't think we actually did an install there.
trønderen wrote: We cannot provide the user with what (s)he wants/needs, because it would reduce our chances of winning the performance race.
Another statement that makes me wonder if you work in business or academics.
Businesses do not exist to meet the needs/desires of customers. They exist to make money. Companies that don't make money do not survive. Every addition, even if it is a need/desire of customer(s) costs money. Both to implement it and to maintain it.
Yes performance is important. It is something that customers always complain about (when complex applications are involved). And it is something that marketing and sales promotes all the time.
trønderen wrote: Not too many years ago, every single language for which a written form was defined could fit all their characters into a single UTF-16 code.
See prior comment. I have written and supported applications that run all over the world. I have led initiatives to make large applications multi-lingual.
So yes I am quite familiar with both what Unicode allows and what it costs to implement, support and maintain.
trønderen wrote: So you will not prepare for porting your application to other cultures
You apparently did not understand what I was referring to.
In large applications there will be character data that is internal to the application. It will never be exposed to users.
It has nothing to do with multi-language support.
There are two other types of character data.
1. Application data which is exposed to customers. Such as a column header on a report. Or a user error message. This is the category where localization of language support is needed and in its easiest form.
2. Customer-entered data. Such as the name of a street or a user comment. This is the category where the concept of localization of language becomes tenuous, because something that is entered in France can only be displayed as-is in Japan.
|
|
|
|
|
Prefix tables with "tbl" and your code will be happy.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Q. What do you call a bear with no teeth?
A. A gummy bear.
|
|
|
|
|
Q. What does a 400lb bear eat?
A. Anything it wants.
I'll just grab my coat and see myself out.
PartsBin an Electronics Part Organizer - An updated version available!
JaxCoder.com
|
|
|
|
|