|
Automobiles are already considered "computers on wheels" by security experts. Vehicles are filled with dozens of tiny computers known as electronic control units, or ECUs, that require tens of millions of lines of computer code to manage interconnected systems including engines, brakes and navigation as well as lighting, ventilation and entertainment. Cars also use the same wireless technologies that power cell phones and Bluetooth headsets, which makes them vulnerable to remote attacks that are widely known to criminal hackers. Un5@fe at any speed.
|
|
|
|
|
Cold storage is unusual because the focus needs to be singular. How can we deliver the best price per capacity now and continue to reduce it over time? The focus on price over performance, price over latency, price over bandwidth actually made the problem more interesting. With most products and services, it’s usually possible to be the best on at least some dimensions even if not on all. On cold storage, to be successful, the price per capacity target needs to be hit. On Glacier, the entire project was focused on delivering $0.01/GB/Month with high redundancy and security. I'm ready. And you're ready. It's my job. To freeze you.
|
|
|
|
|
It seems every time I come across a story about the Mars Curiosity rover there will be many people commenting on the technology used, starting with "Why don't they just..?" and usually pointing out things like: the processor in their smartphone is way faster than the one on Mars, or they have way more memory on their iPad, or their digital camera is way better than the one sending back pictures. These "Why don't they just..?" questions are both annoying and to be expected. Annoying because the underlying thought is "Those NASA/JPL guys are so dumb LOL," and to be expected and encouraged because we wouldn't make any progress without asking questions and, in particular, asking why. Try building one yourself and tell us how easy it is.
|
|
|
|
|
Stuff used in space has always been pretty crude because it has to survive the radiation, and you are not normally going to send a repairman to fix it. Of course, we did screw up with the Space Telescope. They try to be extremely conservative, in part because of the problems that kept the Space Telescope operating sub-par for so many years before they finally sent up a fix.
|
|
|
|
|
Which space telescope do you mean?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
|
Do I miss the days when Nvidia drivers and chipset launches could boost performance by 20-30% across a huge range of applications? Yes. But would I trade them for the data destroying sound card conflicts, substandard driver support, and days when Windows would BSOD if you crossed your eyes at it? Not really. And I like the fact that the computer I built for my parents in 2008 is still “blazing fast” with the addition of an SSD and a bit more RAM, as opposed to needing an all-new system with a new OS installation. If it ain't broke, don't replace it.
|
|
|
|
|
Just wish my laptop lasted long enough for it to become obsolescent. I also like to put in a clean OS every so often to get rid of all the cobwebs that develop in a computer over time. Unfortunately, I usually end up doing this because of some problem rather than on my own terms, and maybe if I did it earlier, I would have been spared some loss of data.
|
|
|
|
|
My laptop has been obsolete for some time now. Did a fresh install of Windows 7 not too long ago, and last night I had some bad sectors on my hard drive screw me over. Might have to upgrade to an SSD. Though, I'm thinking of just skipping that and going straight to a MacBook Pro with Retina. One reason is that I recently upgraded my laptop to its maximum supported RAM... a mere 4GB. And because the CPU is old, I can't run VMs. And because the CPU is slow and the video card sucks, I can't play Blu-ray. Oh yes, it may be time for a new machine.
|
|
|
|
|
There are a number of great version control systems out there; the most important thing is to pick one and learn to use it effectively. No matter which source control system you decide to use, there are a number of universal principles that will help you to get the most out of source control. Commit early, commit often.
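"Commit early, commit often" is one of those universal principles. A minimal sketch of what it looks like in practice, using git in a throwaway repository (the file names, messages, and identity below are invented for the example):

```shell
# Demo of "commit early, commit often" in a disposable repo.
# Assumes git is installed; everything here is hypothetical scaffolding.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # local identity just for this demo
git config user.name "Demo Dev"

echo "first draft"  > notes.txt
git add notes.txt
git commit -q -m "Start notes (work in progress)"   # commit the first small step

echo "second idea" >> notes.txt
git commit -q -am "Add second thought"              # another small, frequent commit

git log --oneline   # two small, revertable commits instead of one giant one
```

Each commit is a cheap savepoint: if the second idea turns out to be wrong, you can revert just that one commit rather than untangling a day's worth of changes.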
|
|
|
|
|
Use Branches and Tags
I'll add that creating branches can be great when there are multiple shared environments (e.g., production machines and a shared development machine). It can be pretty annoying when you make a change, toss it on dev, then a coworker does the same but does not include your change. Having a dev branch solves that. And tagging can be great if you really care about your versions (e.g., so you can apply bug fixes to an old version of your software, which would be important to support people with old licenses).
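The two scenarios above (a shared dev branch, and tagging a release so you can patch it later for customers on old licenses) can be sketched with git; branch and tag names here are made up for illustration:

```shell
# Hypothetical release/branch/tag workflow; names are invented for the demo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Demo Dev"

echo "v1" > app.txt
git add app.txt
git commit -q -m "Release 1.0"
git tag v1.0                        # mark the shipped version

git checkout -q -b dev              # shared dev branch for in-progress work
echo "feature" >> app.txt
git commit -q -am "New feature on dev"

# Later: a customer on the old license needs a fix for 1.0.
git checkout -q -b fix-1.0 v1.0     # branch directly from the old tag
echo "hotfix" >> app.txt
git commit -q -am "Patch for 1.0 customers"
git tag v1.0.1                      # tag the patched release too

git tag                             # lists v1.0 and v1.0.1
```

Because `v1.0` pins the exact shipped state, the hotfix branch starts from what customers actually have, untouched by anything on `dev`.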
|
|
|
|
|
I thought the same thing when I read the article... I was surprised it didn't mention branching when it talked about not breaking the build.
|
|
|
|
|
Keep files small and focused. Avoid the need for merging. Constant merging is a sign of a poor process.
Choose a system that was designed specifically for software (i.e. not Subversion).
|
|
|
|
|
I've been using SVN successfully for quite a while now... I don't think it's bad at all. I've used Mercurial as well, and I can see why people like it, but having some source control is better than not having any at all.
|
|
|
|
|
Albert Holguin wrote: I've been using SVN successfully for quite a while now
Many people do. I've been subjected to it for the last two years, at two companies. The team I'm on now is trying to switch to TFS.
So far I've only used two version control systems. The other was a Code Management System -- it therefore has features specific to managing code, whereas Subversion does not.
|
|
|
|
|
I keep seeing developers complaining about different things with the JSON protocol and, don't get me wrong, I've been the first one to try implementing all sorts of alternatives, starting with JSOMON and many others... Well, after so many years of client/server development, it's not that I've given up on thinking "something could be better or different"; it's just that I have learned many reasons JSON is damn good as it is, and here are just a few of them. You know someone's going to start saying how great XML is... just wait for it.
|
|
|
|
|
Terrence Dorsey wrote: You know someone's going to start saying how great XML is... just wait for it.
Alright then (just so you're not disappointed):
XML is better.
|
|
|
|
|
<message>
<response>
<attributes>
<tone>snarky</tone>
<intention>humor</intention>
</attributes>
<whatIWantToSay>
<OKImGonnaSayIt>Oh, really?</OKImGonnaSayIt>
</whatIWantToSay>
</saidIt>
</response>
</message>
Director of Content Development, The Code Project
|
|
|
|
|
Warning: Undeclared namespace
Error: Malformed XML on line 10 - error near '</saidIt>'
Make it work. Then do it better - Andrei Straut
|
|
|
|
|
Humor was lost when I found out you didn't put in a begin tag for &lt;saidIt&gt;
Just kidding...
|
|
|
|
|
Bad code doesn’t have to be a problem, as long as it’s not misbehaving, and nobody pokes their bloody nose in it. Unfortunately, that state of ignorant bliss rarely lasts. A bug will be discovered. A feature requested. A new platform released. Now you have to dig into that horrible mess and try to clean it up. This article offers some humble advice for that unfortunate situation. It was like that when I got here.
|
|
|
|
|
I have worked on (maintained) a number of projects that have bad code. Usually it is something like methods that are thousands of lines long. Unfortunately, cleaning up multi-thousand-line methods is impractical. All you can do is leave the code better than you found it. One group did not like objects because they thought it took too much time to create and dispose of them. Also, most code is done in Windows Forms (or worse, ASP.NET), and the environment does not really help, so you get the View muddled with the ViewModel and Model unless you do a lot of design work to avoid this. It basically comes down to: do no harm.
|
|
|
|
|
Code organization is a huge thing, especially for developers (because they deal with code), and oftentimes it's a philosophical debate as to how code should be documented, whether spaces should be used instead of tabs, what kind of documentation should be used, and so on. Yet what no one brings up is the dire issue of COMMENTING. We can all agree that comments are essential (and sometimes used to build half-ass documentation on big systems), but what no one really mentions is that people are crappy commenters, from syntax to descriptions. We just suck, really badly, and that's not the way a Zen Developer should be. // To do: write subhead
|
|
|
|
|
If you've ever read Martin Fowler's Refactoring book, he lists comments as one of his bad smells. I have to agree. I think good naming is more important than comments. If the names are good, then there is minimal need for comments. If there were a way to generate good documents from comments I might agree, but the only thing we have is the junk that Microsoft gave us with the first release of Visual Studio, which basically sucks. I would be more interested in something that tells us where naming is inconsistent and, even better, where we need to improve it. One of the problems with comments is that they are no good if the developer cannot explain what he is doing. Give me good names any day.
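The "good names over comments" point can be shown in a few lines. A hypothetical contrast (both functions and names are invented for the example), where the first comment is doing work the name should do:

```python
# Smelly: the comment compensates for a vague name and terse parameters.
def calc(p, r):  # compute the price after applying the discount rate
    return p * (1 - r)

# Better: the name and parameters carry the intent; no comment needed.
def price_after_discount(price, discount_rate):
    return price * (1 - discount_rate)

print(price_after_discount(100.0, 0.2))
```

If `calc` is ever renamed or repurposed, that trailing comment can silently go stale; `price_after_discount` cannot drift from its own name without someone noticing.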
|
|
|
|
|
There have been many arguments on whether code should be commented. Here's my experience.
Comments fall into two buckets: Object and method decorations - those that explain what a file, object or class does - and in-code explanatory comments that appear inside methods or blocks of code to add explanations, notes, or to explain the non-intuitive.
Anyone who says that there is no place for comments inside methods is, to me, misguided at best. Code is not a literary work of fiction open to various interpretations. It's a precise series of instructions, and sparing, sensible, well-placed notes on what's going on inside a method can prevent disasters.
There are many, many, many developers and prescribers of dogma who insist that decorative comments are also unnecessary. The standard argument is that names should be clear, descriptive, unambiguous, and as long as necessary.
If we all spoke the same language, had the same cultural background, same experiences, same literary ability, and all wrote code at exactly the same time, using the same, precise naming conventions, then yes, good naming will solve most ills and decorative comments are not that essential.
However, we don't work in this environment, and it's extremely short-sighted, and costly in the long run, to think we do.
A term used in one context may mean something different in another. A trivial example is "Create" which could mean create a new object in memory, or store an existing object in a row in a database.
A term used in one culture may mean something different or, in fact, the opposite in another. To "table" something in North America means to postpone consideration of it. In the UK, Australia, and the rest of the English-speaking world, "to table" means to begin consideration of the topic.
While it's straightforward to use names that are more descriptive it's important to understand that ambiguity is often difficult for a single developer to spot. They know what they mean, but it's only after other developers look at their code that it becomes apparent that other developers may not. Do not fall into the trap of assuming everyone understands what you mean.
One solution is to mandate that names be fully descriptive: CacheObject , UploadToCloudStorage , DiscussIssue . This helps a little, but very soon you hit the point where providing an unambiguous descriptive name stretches the limits of acceptable name lengths. Steve McConnell writes that method names should be between 9 and 15 characters. Good luck.
Still, this doesn't help. No matter how well you name something, how consistent you try to be, how dire your threats to other devs, you'll always have situations where you just don't know, with absolute certainty, what a method does. With no comments, the developer needs to go and read the method to understand what's happening. This is a monumental waste of time, and worse: it's fraught with peril when code is read but the intent is not understood.
Another issue is parameters. While the same arguments for tight and descriptive method names should be applied to parameters, it's almost impossible to encode in a parameter name things such as restrictions on acceptable input values or notes on special value handling. Comments on parameters allow you to understand the results of supplying null, 0, or empty values, and to understand the limits of what you can supply.
My approach is you should be very, very careful with object and method names, and strive to be descriptive and unambiguous and have as your goal a 95% clarity on naming. That is, 95% of the time a developer reads a method name, that name is clear and unambiguous. However, the list of ambiguous names - that 5% - will vary per developer. That list of ambiguous names may even vary over time for yourself. A simple, clear, well-written, and up-to-date comment will solve this ambiguity.
The "up-to-date" specifier raises the issue of drift. The purpose of a given method may drift slightly from its original intent. The comment attached to that method may then be slightly (or seriously) out of sync with the intent. So too may the method name. To use the argument that comments are useless, and at worst, dangerous because they may not represent what the method does can, and should be applied to method naming as well. When a developer updates a method is it easier for them to make a note of any provisos in the method comment, or is it easier for them to rename the method, and hence the object's API? The method name and the comment should both be kept up to date. Developers get tired and cut corners though.
The way I approach software development is to assume the worst. I assume the inputs to my methods will be bogus. I assume methods will return null. I assume the database will explode in a searing ball of plasma when I run a query. I also assume that my wetware will have issues and that, at one time or another, there will be confusion.
This means that all methods and parameters are commented. This adds approximately a minute of development time to each method. It also adds a small amount of time each time a method is changed, to scan the comment and ensure it's consistent. It also means we have a ton of comments that, 95% of the time, add no value. However, since the set of methods that raise ambiguity or clarification issues is not fixed, it's not practical to simply comment 5% of the code.
While it's tempting to say "just comment the methods that need it", this leads to a slippery slope that we've seen in practice again and again. The test of "what needs it" is carried out by the coder, who almost by definition finds their code clear and unambiguous. One by one, "obvious" methods are created without comments, and soon we have devs interrupting their work, and that of the author, to discuss what's happening.
The application of under a minute of effort saves 5 minutes of conversation and the inherent costs involved in task switching productive developers.
Comments aren't things that hang around code like bad groupies. They are code, and when the code is updated, so too must the comment.
cheers,
Chris Maunder
The Code Project | Co-founder
Microsoft C++ MVP
|
|
|
|
|