|
We've just put a website managed by a CMS (Joomla) online. We have a test server and a production server. The question we're asking ourselves is what the correct strategy for updating the website content is: should we give editors the right to modify the production server directly through Joomla's web interface, or should we restrict editing to the test server (or a clone of it) and find a way to automate the transfer from test to production?
The first solution is the easiest but seems the most dangerous (for instance how do we reconstruct the production server data if it crashes?), while the second one is safer but designing an automatic way to make the transfer is not straightforward...
So in your opinion, dear fellow Cpians, what would be the best thing to do? Any food for thought is welcome!
When they kick at your front door
How you gonna come?
With your hands on your head
Or on the trigger of your gun?
Fold with us! ¤ flickr
|
KaЯl wrote: for instance how do we reconstruct the production server data if it crashes?...what would be the best thing to do
You can't answer the second question until you answer the first. And that can only be answered by your business.
You might also ask yourself the following:
- What happens if someone makes an edit and wants to revert it?
- What happens if someone makes an edit and that edit itself is the cause of the crash? What if it doesn't cause a crash for two weeks?
- What if someone makes an edit that is inappropriate? How do such edits get reviewed?
KaЯl wrote: the second one is safer but designing an automatic way to make the transfer is not straightforward...
You need a process to do a production install. Whether it is automatic is a secondary issue.
|
Thanks for your input.
|
If you're talking about just the content, not the system code, I think you should update directly on the production server and have a good review workflow.
The other option will be a bit hard to manage if you allow site users (not editors) to post comments, for example. How will you manage that data then?
|
Indeed, that's the choice we made. Editors have to be responsible; they aren't children anymore, are they?
And if they mess up the servers, then I should still have my whip somewhere, in case they deserve a little punishment...
|
I am about to start developing a new website, and I am a little stuck as to the correct architecture to use to achieve what I need.
I have been given access to a data feed, which provides me with live, real time data. I need to cache this data and do some processing on it, and I can only create one connection to this service - it will block any subsequent attempts from the same IP. Therefore directly connecting via a website is not going to be feasible.
It looks like I will need to create two separate parts to this project - a back end service that retrieves the data and performs any processing required, then a separate website that accesses the processed data. As the data is real time - the ability to update the web page in real time would also be desirable. As well as this, I also anticipate adding a few mobile apps in future, which will also need to connect to the back end system.
Does anyone have any suggestions on what technologies to use? Obviously Json would be ideal for the inter-process communications, but what kind of back end technologies can I use that easily supports json while still allowing a persistent service that can maintain a permanent connection to the data source?
Anyway, I'm rambling a bit. I hope it's clear, and I look forward to people's advice.
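For what it's worth, the two-part split you describe can be sketched as a single long-lived reader that owns the one permitted upstream connection and publishes a processed snapshot for the web tier (and, later, the mobile apps) to serialize as JSON. This is only a sketch: all names are invented and the feed client is simulated here:

```csharp
using System;

// Sketch only: stands in for the real feed client that holds the
// single permitted upstream connection.
class FeedReader
{
    public string ReadNext() => DateTime.UtcNow.Ticks.ToString();
}

// Back-end service: one reader, one cached snapshot. The web site and
// mobile APIs only ever read the snapshot, so they never compete for
// the single upstream connection.
class FeedCache
{
    private readonly FeedReader _reader = new FeedReader();
    private readonly object _gate = new object();
    private string _latest = "";

    // In the real service this loop would run forever on its own thread.
    public void Pump(int messages)
    {
        for (int i = 0; i < messages; i++)
        {
            string raw = _reader.ReadNext();
            string processed = "processed:" + raw;   // stand-in for real processing
            lock (_gate) { _latest = processed; }
        }
    }

    // What the web tier would serialize to JSON on each request.
    public string Snapshot()
    {
        lock (_gate) { return _latest; }
    }
}
```

The important property is that only FeedCache ever touches FeedReader; everything else reads the cache, so the one-connection-per-IP limit is never hit.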
|
Member 849873 wrote: the ability to update the web page in real time would also be desirable. As well as this, I also anticipate adding a few mobile apps in future, which will also need to connect to the back end system.
Depends on your definition of "real time", but humans don't operate in real time anyway. So, for example, it is pointless to attempt to update a GUI five times a second just because the data is received that quickly.
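If you do throttle the display, it can be as simple as gating UI pushes behind a minimum interval; a small sketch (the class name and the interval are arbitrary):

```csharp
using System;

// Coalesces a fast data stream into at most one GUI update per interval:
// the reader calls ShouldPush for every message and only touches the UI
// when it returns true.
class UpdateThrottle
{
    private readonly TimeSpan _minInterval;
    private DateTime _lastPushUtc = DateTime.MinValue;

    public UpdateThrottle(TimeSpan minInterval)
    {
        _minInterval = minInterval;
    }

    public bool ShouldPush(DateTime nowUtc)
    {
        if (nowUtc - _lastPushUtc < _minInterval)
            return false;           // drop (or coalesce) this update
        _lastPushUtc = nowUtc;
        return true;                // enough time has passed; update the GUI
    }
}
```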
Member 849873 wrote: can I use that easily supports json while still allowing a persistent service that can maintain a permanent connection to the data source?
You need requirements and an architecture design before deciding on "technologies".
For example, the following questions need to be answered:
- Exactly how fast does this data arrive?
- Does it need to be retained (persisted)? If so, for how long?
- The user apps do what with the data? Graph it? Scroll a bunch of numbers? What?
- How many users will there be? And then ask again: REALISTICALLY, how many users will there be?
- Based on the above information, what are realistic long-term volume needs? This includes storage and network.
- Does the data feed stop or slow down? Specifically, how do you detect that the connection has stopped receiving data? Additionally, does the source allow you to remain connected for long periods of time? (It might require a reconnect or a heartbeat message.)
- How will administration work? For example, if the connection goes down, does someone need to be notified? If so, how? If the main site goes down, does someone need to be notified? How will you know if the main site went down? (Obviously these could be answered by waiting for the users to complain, but that might not be ideal.)
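On the "how do you detect that the connection has stopped receiving data" point specifically, the usual answer is a watchdog: stamp the time of the last message and declare the link stale after a timeout, then reconnect or alert. A minimal sketch (names and timeout are illustrative):

```csharp
using System;

// The reader calls Touch() for every message received; a monitor thread
// periodically calls IsStale() and triggers a reconnect or a notification
// when it returns true.
class FeedWatchdog
{
    private readonly TimeSpan _timeout;
    private DateTime _lastMessageUtc;

    public FeedWatchdog(TimeSpan timeout)
    {
        _timeout = timeout;
        _lastMessageUtc = DateTime.UtcNow;   // assume healthy at startup
    }

    public void Touch() => _lastMessageUtc = DateTime.UtcNow;

    public bool IsStale(DateTime nowUtc) => nowUtc - _lastMessageUtc > _timeout;
}
```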
|
I was going to make the same comment regarding the "real-time" aspect of things... it has a broad range of meanings depending on context.
For DSP engineers, real time is often barely feasible even on a desktop (depending on rates), so there's little point in trying to refresh anything a human will "see" and interpret at that kind of rate.
|
Member 849873 wrote: I have been given access to a data feed, which provides me with live, real time data. I need to cache this data and do some processing on it, and I can only create one connection to this service - it will block any subsequent attempts from the same IP. Therefore directly connecting via a website is not going to be feasible.
Let me rephrase that: you have a feed with data, and you can get the feed once. What does the word "real time" do here?
Member 849873 wrote: a back end service that retrieves the data
Once, as you explained. Since it's a feed, there's no way to stay connected to it. Then what? Change your IP and fetch the feed with real-time data again?
Member 849873 wrote: As the data is real time - the ability to update the web page in real time would also be desirable.
You mean that CHANGES made by an employer in record A should also change that employee-info "straight away", even while the manager is looking at the webpage?
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
My advice is to have one part of your project worry about keeping the connection alive and putting that data into a database (on your side) as quickly as possible, then do any analysis against the data in your database, not on its way into the database.
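That split can be sketched as a producer/consumer queue, so the ingest side does nothing but hand messages off; everything here is illustrative, and an in-memory list stands in for the database:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class IngestPipeline
{
    // The ingest thread only enqueues -- the cheapest possible operation --
    // so the single upstream connection is never blocked by analysis.
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    public readonly List<string> Store = new List<string>();   // stand-in for the database

    public void Ingest(string message) => _queue.Add(message);
    public void CompleteIngest() => _queue.CompleteAdding();

    // The writer side drains the queue into storage on its own task.
    public Task RunWriter() => Task.Run(() =>
    {
        foreach (string msg in _queue.GetConsumingEnumerable())
            Store.Add(msg);        // real code: batched INSERTs
    });
}
```

In real code the writer side would batch rows into INSERTs, and the analysis would then be ordinary queries against the database.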
|
My question is about the CommonApplicationData folder and how to use it. In my setup program I have created the "Application Data Folder" and then a sub-folder, [Manufacturer]\[ProductName], where I store my data files. My question is: during the testing/debugging process, where do I store those files so my program can access them, or do I have to change the path after I have done all the testing?
|
Why can't you store the files in [CommonApplicationData]\[Manufacturer]\[ProductName] ?
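For example, resolving the folder at run time with Environment.GetFolderPath means the same code works under the debugger and after deployment (the manufacturer/product values below are placeholders for whatever the setup project uses):

```csharp
using System;
using System.IO;

static class AppPaths
{
    // Builds [CommonApplicationData]\[Manufacturer]\[ProductName] at run
    // time, so the program finds its data files regardless of where the
    // executable itself is started from.
    public static string DataFolder(string manufacturer, string product)
    {
        string common = Environment.GetFolderPath(
            Environment.SpecialFolder.CommonApplicationData);
        return Path.Combine(common, manufacturer, product);
    }
}
```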
The difficult we do right away...
...the impossible takes slightly longer.
|
Hi,
I am trying to be fastidious about following enterprise application architecture best practices in my development and that includes using design patterns.
I have a new requirement to write a C# service using .NET 3.5 and what it does is, after a timer goes off, it runs various database queries to retrieve needed information and then writes out a text file to disk. The text file contains the data, but the file has to be formatted in accordance with a set protocol.
This is not client/server. This is an automated, faceless process implemented as a Windows Service.
Any tips or tricks on which VS tools/techniques and patterns in the VS2008 and .NET 3.5 world would be most useful for this type of requirement? I'm only asking because I am still relatively new to patterns.
Sincerely Yours,
Brian Hart
|
Store it as an XML file, and consider XML processing since you are dealing with multiple databases. Don't worry about design patterns at this point; it should work fine.
|
Brian C Hart wrote: I have a new requirement to write a C# service using .NET 3.5 and what it does is, after a timer goes off, it runs various database queries to retrieve needed information and then writes out a text file to disk. The text file contains the data, but the file has to be formatted in accordance with a set protocol.
Pretty sure you can do that without C# at all.
Windows has a scheduler. Most databases provide output capability. So is there some other requirement?
|
I am in an old-fashioned shop that has distributed a complex software system to manufacturers of packaging, some of whom haven't even upgraded from Win2000. So I have no choice but to do a C# service, and I am asking what the best design pattern to use for a service is. Will someone please just answer that question? The boss is making me do it this way; I have no other choice.
Sincerely Yours,
Brian Hart
|
You might look at a translator pattern.
However, your description isn't that complex, so without additional requirements it is just a basic export process.
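To illustrate what a translator looks like here: it's the one class that knows the file format, mapping each query result onto a protocol line. A sketch with invented record fields and field widths (not a real protocol):

```csharp
using System;
using System.Text;

// Hypothetical record coming back from one of the database queries.
class OrderRecord
{
    public int Id;
    public string Customer = "";
    public decimal Amount;
}

// Translator: the only place that knows the wire/file format.
// The field widths below are invented for illustration.
class OrderLineTranslator
{
    public string Translate(OrderRecord r)
    {
        var sb = new StringBuilder();
        sb.Append(r.Id.ToString().PadLeft(8, '0'));           // 8-digit zero-padded id
        sb.Append(r.Customer.PadRight(20).Substring(0, 20));  // fixed 20-char name
        sb.Append(r.Amount.ToString("0.00").PadLeft(10));     // right-aligned amount
        return sb.ToString();
    }
}
```

The service then just iterates query results, calls Translate on each, and writes the lines to the file; the formatting rules stay in one testable place.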
|
I'm starting a new WPF app. The app's visual layout is pretty simple[^].
I have done apps like this before, but I'd like to revisit the way I do this.
Basically, the main content area contains a ContentPresenter that is bound to a property called MainContent in the MainWindowViewModel. Logic in the MainWindowViewModel then determines which view to bind, so the main content is dynamic, based on business logic.
For example, after installation there will be a series of views, like a wizard, that walks the user through the initial setup. Then, once up & running, what is displayed there is based on what action the user chose.
I had a conversation with a co-worker who was totally against this design because he thought that the VMs should not know about any views. He is right in that the MainWindowViewModel has a method called LoadView that accepts an enum containing the names of the content items to show.
So, if the user clicks a customer account, the click handler would pass AppViews.CustomerAccount to LoadView, and the LoadView method looks like this:
private UserControl _MainContent;

public UserControl MainContent
{
    get { return _MainContent; }
    set
    {
        if (_MainContent == value)
            return;
        _MainContent = value;
        RaisePropertyChanged("MainContent");
    }
}

public void LoadView(AppViews ViewToLoad)
{
    ViewModelBase vm = null;
    UserControl view = null;

    switch (ViewToLoad)
    {
        case AppViews.CustomerAccount:
            vm = new CustomerAccountViewModel();
            view = new CustomerAccountView();
            break;
        // ...one case per AppViews value...
    }

    view.DataContext = vm;
    MainContent = view;
}
Again, I have done this with success a few times, but I want to make sure it's a solid design.
Anyone have a better way?
Thanks
If it's not broken, fix it until it is
|
If you really want to separate things a bit more, use a DataTemplateSelector (the ContentTemplateSelector property) to choose which view to display, instead of selecting it in LoadView. And instead of setting the DataContext on the view, bind the DataContext of the container to a VM property.
Either way, you'll have to use a bit of code-behind to select which view control to load... I mean, you could go through a LOT of effort to get all of that into the XAML, but in my opinion it just isn't worth it.
|
OK, but doesn't that require creating a template for every view, instead of user controls? A user control would have its own ViewModel.
|
The template would, effectively, simply wrap the user control.
|
OK, I'm not sure I understand.
A user control is its own file, correct? Whereas a template is defined in a resource dictionary?
What is the benefit of one over the other?
|
The user control would still be there as a discrete object (the View in other words). The data template would simply marry the view to a particular view model. If you look at Josh Smith's classic MVVM article[^], he demonstrates this neatly with:
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:vm="clr-namespace:DemoApp.ViewModel"
    xmlns:vw="clr-namespace:DemoApp.View">

    <DataTemplate DataType="{x:Type vm:AllCustomersViewModel}">
        <vw:AllCustomersView />
    </DataTemplate>

    <DataTemplate DataType="{x:Type vm:CustomerViewModel}">
        <vw:CustomerView />
    </DataTemplate>

</ResourceDictionary>
|
OK, I see. I've read it, but I'll go back & re-read it now.
So which template gets used is determined by the data (the view model's type), rather than by business logic?
|
I'm sort of there on this...
Can you show me how in the MainWindowViewModel this code is used? For example, if the user selects Show All Customers, how does the template above get used/loaded in the MainWindowViewModel?
I saw in his article where it says
"The MainWindowResources.xaml file has a ResourceDictionary. That dictionary is added to the main window's resource hierarchy, which means that the resources it contains are in the window's resource scope. When a tab item's content is set to a ViewModel object, a typed DataTemplate from this dictionary supplies a view (that is, a user control) to render it"
So the Content is set to a VM, not a View?
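If I'm reading it right, the main window would end up with something like this (I'm guessing at the property name):

```xml
<!-- Sketch: the ContentControl's content is a view model instance; WPF
     finds the DataTemplate whose DataType matches its type and renders
     the corresponding view automatically. -->
<ContentControl Content="{Binding CurrentWorkspace}" />
```

...and LoadView would then just set that property to a new view model, instead of newing up a UserControl?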
|