Paul Watson wrote:
We should not be creating applications. We should be building components
I think I first read that line back in about 1985. Component development seems to be something that each new generation of programmers thinks it has discovered first. Not that I disagree with you. I firmly believe in breaking any programming problem down into fine granularity and making each piece as reusable as possible. The problem, though, is that it takes a hell of a lot of time to do. And even after you do it, each component has to be something everyone agrees on, or else everyone sits down and writes their own version of the component, with none of them being actually reusable by anyone except the guy who wrote it. Face it, all of us programmers want to be the guy famous for writing a specific widely used component, or a better version of a specific widely used component. The entire paradigm quickly breaks down without a great deal of managerial oversight. And who is going to provide that?
Paul Watson wrote:
People say the desktop is dead or dying or not designed to handle our modern needs.
Yeah, I keep hearing that too. What I don't understand is why my desktop apps seem to run about a billion times more efficiently than even the most well-designed web-based app (and also have a more streamlined and user-friendly UI).
"There's a slew of slip 'twixt cup and lip"
Stan Shannon wrote:
I think I first read that line back in about 1985.
As I said, my idea is anything but new.
The problem is while the idea is there, the tools are there and the need is there, nobody is actually doing much about it.
Stan Shannon wrote:
or a better version of a specific widely used component.
That could be the way forward. However, that sounds like something open source is tailored for; let's see everyone adopt that.
The only stab in the dark that I can think of is to start the standard with the central data component, i.e. the bit that I think all other components should cluster around. Make that open source, make it a standard run by an organisation, like XML etc., and initially start it with standard interfaces and features. Then keep control of that component, while the components which modify the data can be closed source and proprietary as hell, as long as they work with the data component.
I definitely don't think all the "other" components which work on the data component should either be standard or have one version only. Hell, there can be ten different companies all creating a text formatting component, all different, but all having to work with the data component to format the data. Without that kind of freedom you would have a one-component-vendor world and that would just kill everything.
Oh, and people should follow and use COM properly. From what I hear, a big reason why COM was not quite the revolution everyone thought it would become is that people did not follow the "guidelines" and "recommendations" of the loosely defined COM standard. e.g. they made sure it had IUnknown and that was about it; the rest they did however they felt, and then the different components looked at each other blankly, trying to shake hands but finding one has twelve fingers and a big toe while the other has no fingers and a huge tentacle.
All in all, my current line of thought is: Central standard data "component" which other components "cluster" around to modify.
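To make that concrete, here is a rough C++ sketch of what I mean by a central data component with satellite components clustering around it. All the class names here are made up for illustration; the real thing would obviously need a properly standardised interface:

```cpp
// Hypothetical sketch: a central, standards-controlled data component
// that proprietary editing components cluster around. Invented names.
#include <cctype>
#include <map>
#include <string>

// The open, standard core: owns the data, exposes a fixed interface.
class IDataStore {
public:
    virtual ~IDataStore() {}
    virtual std::string Get(const std::string& key) const = 0;
    virtual void Set(const std::string& key, const std::string& value) = 0;
};

// One concrete implementation of the standard component.
class DataStore : public IDataStore {
    std::map<std::string, std::string> items_;
public:
    std::string Get(const std::string& key) const override {
        auto it = items_.find(key);
        return it == items_.end() ? std::string() : it->second;
    }
    void Set(const std::string& key, const std::string& value) override {
        items_[key] = value;
    }
};

// A "satellite" component: it can be as closed and proprietary as it
// likes, as long as it talks to the data through IDataStore only.
class TextFormatter {
public:
    void UppercaseField(IDataStore& store, const std::string& key) {
        std::string v = store.Get(key);
        for (char& c : v)
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        store.Set(key, v);
    }
};
```

The point is that TextFormatter never touches the data directly; ten competing formatters from ten companies could exist, as long as each talks through IDataStore.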
Stan Shannon wrote:
Yeah, I keep hearing that also. What I don't understand is why my desktop apps seem to run about a billion times more efficiently than even the most well designed web based app. (and also have a more streamlined and user friendly UI).
MHO is that the browser will fade away and instead the internet connectivity of windows/nix/etc. apps will grow.
It is not really the browser which was such a killer app; it was the ability to communicate over the internet via URLs, to have that one big standard network which could be navigated with one protocol, HTTP.
And as you say, the average windows app is far more usable and responsive than any web app.
Stan Shannon wrote:
And who is going to provide that?
The Illuminati...
But seriously, something like the W3C would have to grow and people would need to follow it.
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
Paul Watson wrote:
MHO is that the browser will fade away and instead the internet connectivity of windows/nix/etc. apps will grow.
That is certainly my hope. The browser has its place, but provides a lowest-common-denominator approach to application development. If used judiciously, which is unlikely, web services technology could become an important tool for the seamless integration of the desktop with the web. (Of course there are any number of technologies which need to improve for that to become practical - say 'cheap broadband'????)
"There's a slew of slip 'twixt cup and lip"
A) Not every app needs "components". (see item D)
B) Not every app is/should be net-enabled. I've NEVER been involved in a C++ project that would directly benefit (in terms of reliability or functionality) from being net-enabled.
C) Not every app has/needs a database. I've only written/participated in ONE project that uses a database.
D) (For Mike Butler) COM *is* difficult to use, and given that close to 80% of all components written have no/little/piss-poor documentation, I'm not surprised it hasn't "taken off". COM in and of itself is only beneficial in a limited number of instances, almost NONE of which involve writing applications.
Online updates are fine for some folks, but depending on the manufacturer, I'd rather do without. I put absolutely NO trust in Microsoft, and despise the fact that I have to install their crap directly off the web, or that net access is required to install something. That's just plain bullshit.
Apps that are designed FROM THE START to be net/web-enabled may be fine, but trying to shoehorn the same capability into an app that has no valid reason for supporting it is a severe mis-use of programmers' time. They (management) wanted to find a reason to web-enable an app I worked on for 12 years. I was none too shy about letting management know that it would be a major waste of time to do so, because A) I didn't see the need, B) our users would just say "Huh?", C) we would have to provide servers and guarantee access beyond our current ability to do so, and D) they couldn't come up with a cohesive list of features that would benefit from the added code.
It's all a crock of do-do doodie.
"...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001
COM in and of itself is only beneficial in a limited number of instances, almost NONE of which involve writing applications.
?????
COM is just a fancy method of defining virtual functions and contracts between software. It is used heavily in industrial automation.
To say that COM has limited applications is like saying virtual functions have limited applications.
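A quick hypothetical sketch of what I mean (this is not real COM and pulls in no ATL headers; the names just echo the COM conventions for illustration):

```cpp
// Minimal sketch of the point above: a COM-style "contract" is really
// just a table of pure virtual functions plus reference counting.
#include <string>

class IStringSource {            // the contract: pure virtuals only
public:
    virtual unsigned long AddRef() = 0;
    virtual unsigned long Release() = 0;
    virtual std::string GetText() = 0;
protected:
    virtual ~IStringSource() {}  // clients destroy only via Release()
};

class GreetingSource : public IStringSource {
    unsigned long refs_ = 1;     // born with one reference
public:
    unsigned long AddRef() override { return ++refs_; }
    unsigned long Release() override {
        unsigned long r = --refs_;
        if (r == 0) delete this;
        return r;
    }
    std::string GetText() override { return "hello from a component"; }
};

// A client that knows only the contract, never the concrete class.
std::string UseComponent(IStringSource* src) {
    src->AddRef();
    std::string text = src->GetText();
    src->Release();
    return text;
}
```

The client only ever sees the abstract contract; under the hood, that vtable is all a COM interface really is.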
Tim Smith
I know what you're thinking punk, you're thinking did he spell check this document? Well, to tell you the truth I kinda forgot myself in all this excitement. But being this here's CodeProject, the most powerful forums in the world and would blow your head clean off, you've got to ask yourself one question, Do I feel lucky? Well do ya punk?
COM is only useful when a component needs to be used by several applications written in a variety of languages, or for distributed applications (such as industrial automation - a VERY vertical niche for software development).
Other than those two examples, I honestly can't think of any reason to implement a series of COM components (or even DLL's if the code isn't being shared between apps).
I've seen COM used way too many times just for the sake of using COM, followed by the mantra "we can't use MFC because it requires too much overhead". Bollocks. COM killed a development project because it made the app overly complex - that's right - only one program was going to use the code, but we had over 40 COM components. Can you say "maintenance nightmare"?
"...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001
Tim Smith wrote:
COM is just a fancy method of defining virtual functions and contracts between software. It is used heavily in industrial automation.
To say that COM has limited applications is like saying virtual functions have limited applications.
Maybe I've lost sight of the original purpose, but isn't the rationale for COM essentially to make code binarily (if that's a word) compatible across programming languages? With COM I can write a chunk of code and use it from C++, VB, Java or whatever. I agree with John that most applications have a very limited need for that. Basically I think that COM, though important in some areas, is grossly overused. I predict the same thing will happen with web connectivity. It will be used in most cases because (a) programmers want to learn it, and (b) marketing people think it sounds cool.
Like everyone else, I learned COM way back when. But I have never been able to rationalize a real need for including it in any of my apps.
Most apps don't need "fancy" virtual functions; they can use the old-fashioned kind just fine.
"There's a slew of slip 'twixt cup and lip"
John Simmons / outlaw programmer wrote:
that I have to install their crap directly off the web, or that net access is required to install something.
So you prefer waiting for a CD so that you can install patches to make your system more secure? Ummm, ok, whatever floats your boat John.
All I know is that internet updates, even on my dismally slow line, are a very useful and beneficial feature. I can be alerted minutes after the update is available (as opposed to a month later when MSDN arrives) and normally download it in 30 minutes.
Also, how is it different in terms of security or trust to install a patch off a CD versus installing a patch from the internet? What is the difference? One is just more seamless.
John Simmons / outlaw programmer wrote:
They (management) wanted to find a reason to web-enable an app I worked on for 12 years
What was the app John?
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
Paul Watson wrote:
We should not be creating applications. We should be building components.
that makes the huge assumption that the components we make will handle all of the special needs that the consumers of the components (other programmers) can come up with. sadly, i don't feel that will ever happen, with a component of any complexity within an application of any complexity.
as an example, take a look at the plethora of "string" classes available to MFC programmers: CString, std::string, std::wstring, CComBSTR, _bstr_t, the dozen or so on this site, etc. none of them alone can satisfy every programmer's string requirements, so we have thirty to choose from and more coming all the time. people always run into some requirement that forces them to write their own string class.
for a larger example, take a look at image processing. there's no way someone could come up with a component that can handle everyone's image processing needs - and if it did, it would be too fat for someone, and then they'd have to create a smaller one.
i guess my point is this: components can only work if your design process is flexible enough to accommodate the idiosyncrasies of the available components; you must be willing to change the design if you discover that component X prefers things this way, but component Z wants them that way. because if you can't be flexible, you're going to have to write your own component Z - and that's where we are today.
-c
Smaller Animals Software, Inc.
You're the icing - on the cake - on the table - at my wake. Modest Mouse
Looking at this post, you seem to be seriously caught up in programming, to the point of having lost track of the real world.
Paul Watson wrote:
We should not be creating applications. We should be building components.
What good will a bunch of components do for the end-user? They have to connect them up to do something. That's programming. We should be creating components as a side-effect of our apps development, to simplify future app development - but the user doesn't care about that.
Paul Watson wrote:
What do we really need to apply data, so it becomes knowledge? We need generic components which focus on doing one or a few things very well.
Here is my list so far:
Nice fantasy, but only applies to that segment of the industry that deals with report generation. Consider my situation: I'm writing an editor and data management system for a sound sampler. None of those components you mention would be of any use to me, other than the file management. The point: when you get into the real world, applications require a lot of domain-specific code. A hell of a lot more than the component code, in my experience.
I'm not anonymous, I'm Jim Johnson; for some reason the system has chosen to log me out, and I don't feel like figuring out why.
See Signature wrote:
Looking at this post, you seem to be seriously caught up in programming, to the point of having lost track of the real world.
Looking at your post I see that you dream too small.
*shrug* I will pray for you
Thanks for your comments and opinions on the matter. I honestly did not have enough time or space to really say everything there is to say about component-centric development, so naturally you had more than one way to rip into what I said.
Some day I will put it all down into a good article and then we can really argue
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
I'm with you.
Ever since I started working with Java and hearing about how JavaBeans were going to revolutionize component-based development, I've been itching to see this happen.
So let's you and me try to make another push for it, huh?
If I understand you, you're suggesting that we would be working with a "Unified Data Format" of some sort (I've already submitted the paperwork for the trademarks) - some way of saying that formatted text, for instance, is ALWAYS stored like this... Is this right? So any application (implemented with these components) could open any data source (file/web stream/email/whatever) and access the formatted text segment(s)?
I can see people shouting about the extra space taken up on disk. I say screw 'em.
This has really got me thinking... Are you going to write more about this?
J
Jamie Hale wrote:
that we would be working with a "Unified Data Format" of some sort
While in an ideal world I would like that, I know that it would not be accepted in our world. You will always get the JPEG guys saying that optimisation, not unification or interoperability, is more important. You will get the MS guys who want not just to use the data format, but to control and own it. They say they are all pro-XML, but frankly, if that were true, then why are Word, Excel, Outlook etc. still using proprietary data formats?
Anyway. The way forward is not to battle on with a unified data format, but to battle on with a translation-wrapper for all the proprietary formats that presents what looks like a unified data format to the components which modify the data.
I know us developers hate having a middle-man sapping performance when ideally we could go direct to the source, but if you want interoperability then you need that middle-man, much like .NET with the CLR. (I think MS finally figured out that instead of trying to control the underlying OS on all platforms, they have to produce a middle-man on top of which any application will *think* it is on a unified OS, kinda thing.)
XML is a perfect wrapper - well, perfect in that I have not seen anything better. Plus it is already controlled by a "benign" worldwide standards organisation which is respected, or at least getting more respect.
Jamie Hale wrote:
some way of saying that formatted text, for instance, is ALWAYS stored like this... Is this right? So any application (implemented with these components) could open any data source (file/web stream/email/whatever) and access the formatted text segment(s)?
Well, the important bit is "fooling" the components which modify data into seeing only a unified format, as you say. So you can have a JPEG file but it is then translated by XSL into the unified XML format. That unified XML format is then presented to the components, not the original JPEG. Then the component modifies the unified format and on save XSL translates that unified format and its changes back into the proprietary JPEG format.
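To sketch the translator idea in C++ (a made-up example: I am using a trivial comma-separated format as a stand-in for a proprietary format, and UnifiedDoc is an invented name for the unified representation):

```cpp
// Hedged sketch of the translation-wrapper idea: each proprietary
// format gets a translator that presents one unified representation
// to the editing components. All names here are invented.
#include <sstream>
#include <string>
#include <vector>

struct UnifiedDoc {                        // the one format components see
    std::vector<std::string> fields;
};

class ITranslator {                        // read/write is the base contract
public:
    virtual ~ITranslator() {}
    virtual UnifiedDoc Load(const std::string& raw) = 0;
    virtual std::string Save(const UnifiedDoc& doc) = 0;
};

// Stand-in for a proprietary format: comma-separated text.
class CsvTranslator : public ITranslator {
public:
    UnifiedDoc Load(const std::string& raw) override {
        UnifiedDoc doc;
        std::stringstream ss(raw);
        std::string field;
        while (std::getline(ss, field, ',')) doc.fields.push_back(field);
        return doc;
    }
    std::string Save(const UnifiedDoc& doc) override {
        std::string out;
        for (size_t i = 0; i < doc.fields.size(); ++i) {
            if (i) out += ',';
            out += doc.fields[i];
        }
        return out;
    }
};
```

An editing component would only ever see the UnifiedDoc; swap in a JPEG or DOC translator and the component itself need not change.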
So in the background, same old proprietary file formats. There are simply too many DOC, GIF, JPG, HTML, CS etc. files out there to expect them all to stop being proprietary and change to the unified format.
Also, by having this middle-man layer we need not try to change the underlying OS and how it works. Our middle-man would just be another application you install, like the .NET Framework. (Truth be told, I am an MS man and I had bad experiences with Java apps, which soured me against using the great Java language.)
I doubt very much any company, even MS, can right now just roll out a unified, data-centric system and be successful. We saw Be fail at what it tried; what it built was good, but they tried to take over instead of trying to insinuate themselves between the user and the OS.
Also, and the similarities are freaky: like .NET, our middle-man could be ported to sit on top of other OSs, like Linux etc.
Jamie Hale wrote:
This has really got me thinking... Are you going to write more about this?
There is so much more we have to think about and put down in words. I have a lot of it up here *taps head*, as I am sure you do too, and it needs to be written down so that others can get the idea. Right now I have written down just a small bit of it, so people will be able to poke a lot of holes in it, but most of the pokes they make I have already answered up in my head. Frustrating!
Jamie Hale wrote:
I can see people shouting about the extra space taken up on disk. I say screw 'em.
I say, hug them. Then, as you hug them, rifle their pockets, translate what they have for the user, and then translate the user's modifications back to the guy you are hugging. It would be a monumental task to tell every proprietary format to change to our way. Even for the medium term it is not really a possible goal; in the long term, once the unified format which wraps up the proprietary formats is seen to work, the proprietary guys will come around.
So what are your thoughts? How do you see it all working?
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
Paul Watson wrote:
They say they are all pro XML but frankly if that were true then why is Word, Excel, Outlook etc. still using proprietary data formats?
Allegedly, because the code for those applications is so old and convoluted that making a seriously big change, like saving files in an XML format, would take a major overhaul.
<open source view>
Of course the real reason is they'd lose the key reason people keep using Microsoft Office apps, coz no other app can read the data back in.
Michael
Michael P Butler wrote:
Of course the real reason is they'd lose the key reason people keep using Microsoft office apps, coz no other app can read the data back in
LOL, exactly.
Michael P Butler wrote:
Allegedly, because the code for those applications is so old and convoluted that making a seriously big change like saving files in an XML format would take a major overhaul.
Said by the same company that takes the development world and upends it with .NET. I think your real reason is far more valid
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
I'm getting it. Ok... The translation layer makes a hell of a lot more sense.
So another issue that crops up is the whole COM-like binary compatibility. I don't see why this environment can't run cross-platform (Wintel, Linux, Mac, etc.). In order to be completely useful to many different developers on many different platforms, we would need some form of binary compatibility. I see no point in rolling our own when there are at least 3 fine options available to us already.
First we have .NET. Microsoft will have covered all of its platforms. There is a project in the works to port it to Linux. And I would guess that Mac will be covered soon enough.
Second we have Java *cross himself*. This has been ported to many many platforms already, but it limits developers to one language.
Third we have CORBA. *ick* 'nuff said.
We would also need some method of registering data types and their appropriate translators - I think? Of course the registry would be fine for Windoze machines, but some other method will be required for Linux and Mac. Probably just one layer of abstraction to hide those details - one unified interface for accessing that data.
Anyways... just a few more thoughts. I gotta get to work now.
J
Jamie Hale wrote:
First we have .Net. Microsoft will have covered all of its platforms.
Well, MS themselves are still not that interested in having .NET on all platforms. Windows itself is still too important to them and they won't sacrifice its face by starting .NET platform ports. So it is up to the boys of Mono etc. to do it, though they are not porting the whole of .NET, just the C# compiler and the CLR, I believe.
But still, that is a start, and as you say .NET can be (through smart thinking by MS, who won't admit it) relatively easily ported to other platforms.
And I do agree that cross-platform is important.
If anyone is serious about creating this kind of data-centric idea, and they need a translator for each proprietary format, then open source and its minions of coders would actually be of great help. If you restrict it from being open source then you end up with us covering a lot of Windows file formats and losing out on all the Linux, Unix and Mac formats.
Also, as far as I understand open source, you can have bits of your code open source and other bits closed, which is great.
Jamie Hale wrote:
Probably just one layer of abstraction to hide those details - one unified interface for accessing that data.
That sounds exactly right.
Jamie Hale wrote:
We would also need some method of registering data types and their appropriate translators
Naturally there would be a need for this registering on a machine level and also on a world level, i.e. on the machine the person has various translators (all the standard ones plus whatever additional ones they downloaded), and then on the internet there is a UDDI-like registry of the available translators etc.
Really, UDDI solves the "global" issue and WSDL solves the machine-level issue. While we need not necessarily use those standards, they are there, they have been thought through and they are standards, which is great. Plus they are cross-platform as they are totally made from XML. That also ties nicely into .NET (or J2EE if we go Java.)
I am very big on standards and on devolving control of standards away from profit-driven organisations. It helps acceptance a lot, as well as generating a good community who can all pitch in their ideas.
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
Paul Watson wrote:
So, it is up to the boys of Mono etc. to do it. Though they are not porting the whole of .NET, just the C# compiler and the CLR I believe.
Right, but as far as I can tell, the CLR is all we need, right? At least for run-time? Linux developers wouldn't be able to write their "code" *tee hee* in VB. Oh well.
Paul Watson wrote:
If anyone is serious about creating this kind of data-centric idea, and they need a translator for each proprietary format, then open source and its minions of coders would actually be of great help. If you restrict it from being open source then you end up with us covering a lot of Windows file formats and losing out on all the Linux, Unix and Mac formats.
Agreed.
Paul Watson wrote:
Really, UDDI solves the "global" issue and WSDL solves the machine level issue.
Remind me... what's UDDI? Universal Data Definition Iguana?
Paul Watson wrote:
I am very big on standards and on devolving control of standards away from profit-driven organisations. It helps acceptance a lot, as well as generating a good community who can all pitch in their ideas.
Amen brother.
I'm wondering too if the core components could even be "registered" someplace global. Then, when a user goes to run a new application, the framework could check the version/existence of the required components on the user's machine, and download updates automatically if necessary.
Required components/systems (large scale) as I see them are as follows:
- Core Data Model - This is what everything gets translated into.
- Translators - These are the bits (probably a stream-type thingy) that either take data from a stream and build a "core data object thingy" - or take a "core data object thingy" and write it to a stream.
- Streams - These would hook up to files, web-connections, email-connections?, etc. and would stream the data into or out of a translator, or into or out of a "core data object thingy".
- Translator Registry Thingy - This would be probed for the correct translator for a given data stream. If a download is necessary to get the latest version, it's done here.
So far, we can read data from an input source (file, web-page, ftp-stream, email stream, etc). If it's already in the "core data object thingy" format, no translation would be necessary. If it's in some recognized format, it's loaded through the appropriate translator. If the user doesn't have the translator locally, the registry thingy does the lookup and update.
When writing, the user/application developer has the choice of writing back through the same translator to the same file format - or through a different translator to a different file format - or through no translator to write it to "native core data object thingy" format.
Of course you had mentioned versioning and whatnot. I'm guessing that would sit in between the CDOT and the translator to provide versioning data?
One big thing to look at is whether or not to allow several different translators for the same format. I think we would want to ensure that all efforts went to maintaining an existing Word file translator for instance, rather than creating a new one when someone needed a new feature. This could be sorted out through the web registry thingy.
On the other hand, perhaps we want to consider categories like COM. Perhaps a single translator could handle several categories of data. Perhaps one Word translator could give read-only access and someone could create a commercial read/write translator component - each would be registered with the same category...
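Here's a rough C++ sketch of how I picture the Translator Registry Thingy working locally (everything in it is hypothetical, and the web lookup/update part is left out):

```cpp
// Sketch of the "Translator Registry Thingy": map a format name to a
// factory, return nothing when no translator is installed (the caller
// could then probe the global registry). All names are invented.
#include <functional>
#include <map>
#include <memory>
#include <string>

class ITranslator {
public:
    virtual ~ITranslator() {}
    virtual std::string Name() const = 0;
};

// e.g. a commercial read-only Word translator someone registered.
class WordReadOnlyTranslator : public ITranslator {
public:
    std::string Name() const override { return "doc (read-only)"; }
};

class TranslatorRegistry {
    std::map<std::string, std::function<std::unique_ptr<ITranslator>()>> factories_;
public:
    void Register(const std::string& format,
                  std::function<std::unique_ptr<ITranslator>()> factory) {
        factories_[format] = factory;          // later registrations win
    }
    std::unique_ptr<ITranslator> Lookup(const std::string& format) const {
        auto it = factories_.find(format);
        if (it == factories_.end()) return nullptr;
        return it->second();
    }
};
```

Re-registering a format just replaces the factory, which is one crude way of handling a "latest version wins" update policy.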
Ok. I'm at work - I should be doing some.
J
Jamie Hale wrote:
Remind me... what's UDDI? Universal Data Definition Iguana?
It is a global, open directory of web services, like the yellow pages of web services.
It stands for Universal Description, Discovery and Integration.
The great thing is that it can be browsed either via a nice human-usable web browser or programmatically, i.e. I can run a query against it using SOAP and the UDDI methods and get back a list of all web services which meet my criteria.
Now either UDDI is sufficient for our purposes or we can take it and slightly modify it. The advantages of using UDDI as it is are obvious: it is backed by MS, IBM and a whole bunch of other big names, the servers are there to use, free (yes, free), and UDDI is an accepted standard.
Jamie Hale wrote:
I'm wondering too if the core components could even be "registered" someplace global
I would like to see that, very useful in the long run.
Jamie Hale wrote:
and download updates automatically if necessary.
As we have seen lately, not everyone loves the obvious benefits of automatic updates. So some choice would be necessary.
Jamie Hale wrote:
If it's already in the "core data object thingy" format, no translation would be necessary.
Overall the objective would be to slowly get everyone to use the unified format. By using the system they would automatically be using the unified format, even if they are editing a JPEG. In the long run, at the very least the source files that people actually edit will be in the unified format, while JPEG, GIF etc. would stay as they are but only be used for viewing (pretty much as PSDs are used now: web sites are designed in a PSD and then the bits are exported to GIFs for viewing.)
Jamie Hale wrote:
When writing, the user/application developer has the choice of writing back through the same translator to the same file format - or through a different translator to a different file format - or through no translator to write it to "native core data object thingy" format
Very cool idea! That actually introduces a "global converter" system as a side benefit. Hadn't thought of that.
Jamie Hale wrote:
One big thing to look at is whether or not to allow several different translators for the same format. I think we would want to ensure that all efforts went to maintaining an existing Word file translator for instance, rather than creating a new one when someone needed a new feature.
Control is bad, very, very bad. People hate being controlled and they will harm themselves before they let you control them (we see this with Linux zealots.)
So we cannot have a "you will only use the Jamie Hale Word Translator Component" situation. (I know you know this )
So definitely there will be lots of translators and editing components out there written by different companies, all competing, offering better features etc. However, a base component from the organisation which wrote the system should be available for the major formats. That way we can set the foundation and expectations, and the community can work from there. We must not be big-brotherish and say "the MS implementation of the Word Translator should not be used" or similar. Just let it grow, and people can use whatever components they want from whomever they want.
Jamie Hale wrote:
Perhaps one Word translator could give read-only access and someone could create a commercial read/write translator component - each would be registered with the same category...
Sure, but I think that could get a bit complicated. I think a "base level" of features is needed for all translators. COM "failed" because it had no standards to speak of; people made rubbish components. So a translator needs, at the very least, to be able to read and write between the unified format and the proprietary format. Additional features are wonderful, but some kind of standard must be kept.
Interesting stuff
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
You hear me? Buy my book, send me $10,000 and your app will be a better app...
I could go about it that way, or I could just say what I think and let you make your own mind up about it.
Virtually every application I can think of should have some form of client/server or client/client connection, whether it be from the desktop to a server a few feet away, from the desktop to a server ten thousand kilometres away, or from a desktop to another desktop halfway around the world.
Information is one thing, so is data, but the application of this information and data to a task is what is important.
Most, if not all, apps are about managing data and providing a means for you to use that data, thereby turning it into knowledge. Even a graphics app is about data and the application of it.
By internet-enabling your app you open it up to a much larger world of data transportation, and therefore more possibilities for applying the data. If you keep your data cooped up on your system alone, nobody else can use it or benefit from it without some mundane emailing/disking/printing/etc.
Take the old word processor. I create a document, save it to my disk and have to send it via email to a colleague (on the other side of the world) so that they can review it. They then edit it, adding notes, and send it back. Meanwhile I still have an unedited copy on my disk, which I have to store in an archive folder to keep it from being overwritten when I get the colleague's review back. This process can happen many times.
If the app was internet-enabled we could "share" a copy quite easily and the reviewing process would be much easier. The system could automatically track all the changes and provide automatic versioning.
When I wanted to publish the document I would not, as I do now, have to email it to an editor who then posts it on their site. Rather, I would just click a "publish" button, the editor would get it, and the document would be shared again.
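The automatic versioning described above could work roughly like this toy sketch (all names are hypothetical, and a real system would of course keep the revisions on a shared server rather than in memory):

```python
class SharedDocument:
    """Toy model of an internet-shared document: every save is kept as a
    revision instead of overwriting, so there is no manual archiving and
    the full review history is always available."""

    def __init__(self):
        self.revisions = []  # list of (author, text) pairs, oldest first

    def save(self, author, text):
        # Each save from any collaborator becomes a new tracked revision.
        self.revisions.append((author, text))

    def current(self):
        # The latest revision is what every collaborator sees.
        return self.revisions[-1][1] if self.revisions else ""

    def history(self):
        # Who changed the document, in order.
        return [author for author, _ in self.revisions]
```

The point is that the "archive folder" dance disappears: the colleague's review is just another revision, and the unedited original is never lost.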
I am not saying put an IM in every app; we have separate IMs which can integrate with apps pretty well for that.
But by internet-enabling apps you simply extend the use and usefulness of your application. Obviously you have to do some analysis on what usefulness internet enablement actually provides, but you can be assured that most apps will be better off with some of it.
On the topic of internet updates I am fully for it, as long as there is control so that large companies can keep control of that update process amongst all their desktops.
So what apps do you think shouldn't have or don't benefit from internet enablement?
regards,
Paul Watson
Bluegrass
Cape Town, South Africa
"The greatest thing you will ever learn is to love, and be loved in return" - Moulin Rouge
Sonork ID: 100.9903 Stormfront
|
|
|
|
|
Paul Watson wrote:
On the topic of internet updates I am fully for it, as long as there is control so that large companies can keep control of that update process amongst all their desktops.
In reality, that may not be feasible. A lot of big companies like to test out apps and upgrades, make sure they don't destroy anything, and then deploy them to the users. So your sysadmin would likely allow updates from the intranet, but probably NOT from the internet. Any internet update mechanism should be flexible enough to handle this scenario (e.g., do not hardcode your company's URL in there; make it configurable). As an example, my company does this with virus-protection software: we don't update directly from the virus company's homepage. Rather, a server gets those updates, and we update from that server.
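The configurable update source might look something like this minimal sketch (the config format, section name, and default URL are all invented for illustration; the idea is only that the vendor's URL is a fallback, not a hardcoded constant):

```python
import configparser

# Invented vendor default; a sysadmin overrides it in a config file so
# desktops pull updates from an intranet staging server, not the internet.
DEFAULT_UPDATE_URL = "https://updates.example.com/myapp"

def get_update_url(config_text):
    """Return the update source, preferring the admin-controlled config."""
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    # Missing section or key falls back to the vendor's public server.
    return cfg.get("updates", "url", fallback=DEFAULT_UPDATE_URL)
```

With this shape, the virus-scanner pattern above is just a one-line config change on each desktop.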
Paul Watson wrote:
So what apps do you think shouldn't have or don't benefit from internet enablement?
Device drivers. Although even there, it depends on the device. If the device can exist over a network or somewhere on the Internet, then it makes sense.
The early bird may get the worm, but the second mouse gets the cheese.
|
|
|
|
|
Paul Watson wrote:
Take the old word processor. I create a document, save it to my disk and have to send it via email to a colleague (on the other side of the world) so that they can review it. They then edit it, adding notes, and send it back. Meanwhile I still have an unedited copy on my disk, which I have to store in an archive folder to keep it from being overwritten when I get the colleague's review back. This process can happen many times.
If the app was internet-enabled we could "share" a copy quite easily and the reviewing process would be much easier. The system could automatically track all the changes and provide automatic versioning.
...
I am not saying put an IM in every app; we have separate IMs which can integrate with apps pretty well for that.
Well, I think you're on the right track, but you're cursing the word processor for doing something it wasn't intended to do. Something like MS Word already has an option for tracking changes and limited versioning, but those options are hampered by the fact that users' workflow doesn't mesh well with the versioning.
For instance, in your example, a document comes in on your email. Double-clicking it inside your email program will bring up a copy of the document, which you then edit. Unless you save the document to a different filename, you will be editing a temporary copy that you will then lose anyway. Oops. "Save As" just exacerbates the problem, putting the program's versioning back on the user's shoulders. The user has to keep track of where to put all the versions of the document on their PC. This is all a fault of the email program, not the word processor.
The "editing files with transparent versioning" problem is a workflow/awareness/connectivity problem between email programs, editing programs (like WPs, graphics programs, etc.), and version-control systems (like CVS, SourceSafe, etc.). It's not an easy problem to fix. But whoever does fix it stands to make a lot of money.
CodeGuy
The WTL newsgroup: over 1200 members! Be a part of it. http://groups.yahoo.com/group/wtl
|
|
|
|
|
It would definitely be nice to have the versioning control you mention in your word-processing example, but I don't think that this feature should be a part of the word-processor. The word-processor itself should just be responsible for processing words, and doing a good job of it. The feature you describe would best be implemented separately, or else you'll wind up with email/version controls in your word-processor, your spreadsheet, your graphics editor, solitaire, minesweeper, etc. I think it's usually wise to keep components small and specialized, and to try to preserve some sort of orthogonality.
|
|
|
|
|
Greetings,
Sorry, I'm not registered here, but I thought I should comment on this...
What you want is something like a transparent CVS filesystem, where writes to the storage are effectively treated as check-ins. Your only real difficulty comes in merge conflicts, but fundamentally, if you have an intranet-shared 'storage' space and a virtual filesystem whose backing storage is a CVS repository, you get what you're talking about pretty much transparently.
You really could implement this today, if you so desired.
As was said, don't add this feature into the word processor, make it transparent to the word processor. Then it'll work for your spreadsheet, your word processor, your presentation software, etc., all at once.
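One way such a transparent layer could work is a write path that snapshots the old file before every overwrite, so any application saving through it gets versioning without knowing about it (a toy sketch; the function name, the history directory, and the sequential-number scheme are all made up, and a real CVS-backed filesystem would do this at the driver level rather than in a library call):

```python
import os
import shutil

def versioned_write(path, data, history_dir):
    """Write `data` to `path`, but first snapshot the existing file into
    `history_dir` -- effectively an automatic check-in on every save.
    The calling application needs to know nothing about versioning."""
    if os.path.exists(path):
        os.makedirs(history_dir, exist_ok=True)
        # Simple sequential revision numbering for the snapshot names.
        n = len(os.listdir(history_dir)) + 1
        snapshot = os.path.join(history_dir, f"{os.path.basename(path)}.{n}")
        shutil.copy2(path, snapshot)
    with open(path, "w") as f:
        f.write(data)
```

Because the versioning lives below the application, the same mechanism serves the word processor, the spreadsheet, and the presentation software all at once, which is exactly the orthogonality argued for above.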
Small tools add up to more than the mere sum of their parts.
-- Cyberfox
|
|
|
|
|
Allowing internet updates really depends on who your customers are. If most of them are home users or individuals with control over their systems, it can make sense. But if your users are typically big corporations, sometimes they actually *dislike* being able to update over the web. Some of these customers frown upon users making changes to their systems, and do not appreciate having a clearly visible way for users to update their software (they like everyone to be on the same version, for instance).
The early bird may get the worm, but the second mouse gets the cheese.
|
|
|
|
|