Eddy Vluggen: That means you need a UI. That probably means you'd need to write one, generate one, or use Access as your front-end.
My thoughts were to restrict non-techie access completely from the database. They can enter data via spreadsheets which we would subsequently import. And to get data they can simply use the spreadsheets or pdfs that the reporting would generate. I found on the last project, having many people access the database at once resulted in many issues.
Eddy Vluggen: The database doesn't care where the data comes from. A is a simple import-operation. B and C cannot be automated, someone will have to enter the data. D is missing.
I actually merged D with C and forgot to change it to three sources after the edit. Agreed that the database doesn't care how it gets its info. This was, I suppose, an extension of the previous point: if we hire people to do data entry, it would probably be best to have those workers enter the data into spreadsheets that get imported into the database, rather than having them enter the data straight into it with forms. The fewer people clicking around in the db without knowing exactly what they are doing, the better, IMO.
Eddy Vluggen: Data doesn't do things, it just sits there. Databases are also usually not responsible for generating a report; you will probably need to create a report-template that gets filled dynamically, with a filter for each property. I'd put the database in SQL Server, link from Access to it, and exclaim it's "done". Access has a low learning-curve and everyone could click together the reports as they see fit. Excel could probably do the same.
I guess my wording here was awkward. I know that data itself cannot do things. I meant that I would like to set up some process by which we can generate report blanks and pull data from the database to auto-populate the reports. I know Excel can do it via exports, but then everyone wastes time trying to make the reports look professional, consistent, and legible; not the end of the world, mind you, but it gets old quickly when you have to generate reports continuously. As mentioned earlier, I was completely unimpressed with the UI for Access's report builder, but looking around I have not seen any alternatives that seem any better. My mindset is based more on the NoSQL movement, so I would simply write a script to scrape the data off a document database and generate the report via HTML & CSS or XML & XSLT, but I am not sure how one might do that efficiently with a SQL database, especially considering how complex the joins can end up being, nor how flexible some frameworks would be about this.
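As a rough sketch of the "report blanks" idea, here is the shape such a script could take. This is only an illustration: sqlite3 stands in for whatever SQL server ends up being used, and the `inventory` table and column names are invented for the example.

```python
import sqlite3

# Hypothetical report template; the CSS braces are doubled so that
# str.format() leaves them alone.
TEMPLATE = """<html><head><style>
table {{ border-collapse: collapse; }}
td, th {{ border: 1px solid #999; padding: 4px; }}
</style></head><body><h1>{title}</h1><table>
<tr><th>Item</th><th>Quantity</th></tr>
{rows}
</table></body></html>"""

def build_report(conn, title):
    """Pull rows from the database and fill the HTML report blank."""
    cur = conn.execute("SELECT item, qty FROM inventory ORDER BY item")
    rows = "\n".join(
        "<tr><td>{}</td><td>{}</td></tr>".format(item, qty)
        for item, qty in cur
    )
    return TEMPLATE.format(title=title, rows=rows)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (item TEXT, qty INTEGER)")
    conn.executemany("INSERT INTO inventory VALUES (?, ?)",
                     [("bolts", 40), ("nuts", 55)])
    print(build_report(conn, "Monthly Inventory"))
```

Because the styling lives in one template, every generated report comes out consistent without anyone hand-formatting spreadsheets.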
Eddy Vluggen: Backups do not prevent errors; any error gets backed up along with the rest of the data. Any major database-server supports backup-operations.
Once again I agree with you: backups do not prevent errors, but they do provide a point of return if something bad happens. Losing even a couple of days' worth of data entry to a corrupted database file would not fly with our executives, especially since connections to the database occur over a network, which I find very finicky and prone to small bugs. You should have seen how badly our MS Access db got corrupted just by having 2-3 people working on it simultaneously. Typos are bound to happen; it's very difficult to prevent them all. But if we have several snapshots of the database at any given point, then when an issue occurs we can restore to the most recent point that does not contain the errors and continue.
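A minimal sketch of that snapshot-and-restore idea, using sqlite3's backup API purely as a stand-in; a real server (SQL Server, MariaDB) has its own BACKUP/RESTORE commands, but the principle is the same: keep dated snapshots and roll back to the most recent one that predates the corruption.

```python
import sqlite3

def snapshot(source: sqlite3.Connection, dest: sqlite3.Connection) -> None:
    """Copy the entire live database into the snapshot connection."""
    with dest:
        source.backup(dest)

if __name__ == "__main__":
    live = sqlite3.connect(":memory:")
    live.execute("CREATE TABLE entries (day TEXT, value INTEGER)")
    live.execute("INSERT INTO entries VALUES ('mon', 1)")
    snap = sqlite3.connect(":memory:")
    snapshot(live, snap)  # point-in-time copy
    # A later (possibly bad) change does not affect the snapshot.
    live.execute("INSERT INTO entries VALUES ('tue', 999)")
    print(snap.execute("SELECT COUNT(*) FROM entries").fetchone()[0])  # 1
```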
Eddy Vluggen: It doesn't work that way. There's a comparison of database-servers here[^] - take a look at the capabilities, compare them to your needs, pick one. And make sure it is SQL Server that you pick. SQL Server Express is free, you could design your db in Access (and have a working Access-db as a backup if this experiment fails), and simply import it in SQL Server when you're ready (using the upsize-wizard in Access).
I think in terms of frameworks, there is always a little give and take between performance, stability, and ease of access. Ideally they would not conflict, but the world is far from ideal and resources are finite. There will always be a point where the effort you have to put in greatly exceeds the value you get out. In that sense, I put the hierarchy as a way of saying that if one thing HAD to be sacrificed, I would sacrifice speed and/or usability over stability. What good is speed or usability if the framework hiccups over network connections and loses input or corrupts files without any error warnings?
Thanks for your responses. I think in a lot of ways you were actually agreeing with my mindset on how to proceed. I still am very undecided on the reporting process, but I hope to have that worked out soon. Access could be used in a pinch, but as I mentioned earlier I think it would make a lot more sense to do it HTML and CSS or XML and XSLT for professional looking reports to the client, and unformatted spreadsheets for internal usage. Any advice on how I might implement that would be greatly appreciated.
My thoughts were to restrict non-techie access completely from the database.
Ditto; the Access-frontend would link purely to readonly-views on their own version of the database; a simple restored backup of the production-database (on Sql Server). Then again, most people would not offer Access to an end-user and claim that it's the UI they should work with.
I found on the last project, having many people access the database at once resulted in many issues.
Yes and no. Access does its job well, but it was never meant to store a lot of data and be accessed by multiple users simultaneously. A database-server is meant for that job, and Access isn't a server-application - "just" a desktop app.
I guess my wording here was awkward.
Sorry for my tone/wording; it'll be equally (if not more) awkward (or hostile) at some points.
I know Excel can do it via exports but then it becomes the issue of everyone wasting their time trying to make the reports look professional, consistent, and
..aaah, yes, users are resourceful.
As mentioned earlier I was completely unimpressed with the UI for Access's report builder but looking around I have not seen any alternatives that seem any better.
The Query-Builder looks scary, but is a very powerful tool; combine that with the reporting, and you have a "simple" UI - although others will certainly disagree there. Combine that with the free runtime-version of Access, and you get a UI that will be hard for the end-user to "break".
You should have seen how bad our MS Access db got corrupted just by having 2-3 people working on it simultaneously.
Yes, especially when they're mucking in the same tables. When moving to SQL Server, be sure to read up on locking and transactions - though the concept does exist in Access, it doesn't help to improve multi-user access. Locking is what keeps the data consistent in a multi-user db, but it also introduces a bottleneck.
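To make the transactions point concrete, here is a small sketch of atomicity: either every statement in the transaction commits or none does. sqlite3 stands in for SQL Server here, and the `accounts` table is invented for the example.

```python
import sqlite3

def transfer(conn, src, dst, amount):
    """Move an amount between accounts atomically; roll back on error."""
    try:
        with conn:  # BEGIN ... COMMIT, or ROLLBACK if an exception escapes
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            cur = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
    except ValueError:
        pass  # the rollback already happened; balances are unchanged
```

A failed transfer leaves both balances exactly as they were, which is the consistency guarantee an Access file shared over a network never really gave you.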
I mentioned earlier I think it would make a lot more sense to do it HTML and CSS or XML and XSLT for professional looking reports to the client, and unformatted
spreadsheets for internal usage
You want an introduction to ASP.NET[^] or PHP[^]. You can easily read from the database, and have your CSS applied. You can modify the data and output it as HTML or XML. With or without transformations. And you could introduce new things as you learn; start with a basic formatted list that simply shows the content of a table. Next, create one with an extra filter.
It may take a bit, but there's always the forum to help.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
Eddy's answer is deep and clean, but I would add one more - don't do it alone. Not at the beginning at least. Hire some expert to help you with the design phase and put the project on the right tracks...
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
Like Eddy I am prejudiced to SQL Server, partially because I have been working with it for years but mainly because it has the most support resources on the interweb.
You are discussing multiple requirements; the primary one is to set up a database to store and manage your data. Here I agree that you should get in a professional to design your data structure, as it is by far the most critical component.
Next is the user interface: you need to build at least one application to give your non-tech users access to the data and processes. You also need to create an interface to bring the existing Excel sheets into the data.
Lastly, you need a reporting strategy. I would recommend SQL Server Reporting Services (SSRS); it can be set up as either server-based or embedded in your application.
I believe you are in the position of a power user just dipping into becoming a developer, and your tool horizon is limited to Office products; you might want to expand it to include a proper development environment.
Never underestimate the power of human stupidity
Your link shows very important information. And from that information, I conclude that it may be a bad decision to use triggers for logging purposes: in a transaction rollback, the trigger's actions will be rolled back as well. I.e. when you log by inserting into a table, that insert will be rolled back, too!
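This is easy to demonstrate. The sketch below uses sqlite3 as a stand-in (the behaviour is the same on the big servers): a trigger writes an "audit" row on every insert, but when the surrounding transaction rolls back, the audit row vanishes with it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER);
    CREATE TABLE audit_log (msg TEXT);
    CREATE TRIGGER log_insert AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit_log VALUES ('order ' || NEW.id || ' inserted');
    END;
""")
conn.execute("INSERT INTO orders VALUES (1)")  # trigger fires here
conn.rollback()  # undoes the insert AND the trigger's log row
print(conn.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0])  # 0
```

So trigger-based logging silently misses any work that was rolled back, which is exactly why it is risky for audit purposes.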
Sorry for the delay. For beginners (yes, he is one), triggers are not advisable; we can't control them and, as Eddy said, we can't debug them either. Personally I use a stored procedure instead of a trigger, and use triggers only rarely, for tasks such as an audit trail (during login/logout).
Code converters | Education Needed
No thanks, I am all stocked up. - Luc Pattyn
When you're wrestling a gorilla, you don't stop when you're tired, you stop when the gorilla is - Henry Minute
I'm currently trying to make a DB system from scratch which I can use as the backbone for my Qt application. Since I will store large amounts of data [photo RAW data] in there, I figured that a classic relational DB might not be the best solution. What I'm planning now is a classic relational DB [MariaDB] as persistent storage, cached by a NoSQL database [MongoDB].
In the end the whole system shall be built up like this.
Here's a little additional explanation:
Persistent Storage Access
The Persistent Storage Access handles access to the persistent MariaDB database, which holds all the application data. It supplies the data used to build the Cache, serves Update and Creation queries, and also serves Get queries when the cache can't supply the requested item.
Cache Access
The Cache Access handles access to the MongoDB database, which is used to cache access to the Persistent Storage. It contains a copy of the most recently added, updated or requested items.
Data Access API
The Data Access API coordinates access between Cache and Persistent Storage. If a requested item can't be found in the cache, the DAA gets it from the Persistent Storage and also adds it to the Cache. If an item is added, the DAA saves it into the Persistent Storage and also adds it to the cache. All this coordination is needed to take complexity out of other parts, e.g. the business classes.
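The DAA as described is a classic cache-aside / write-through pattern. A minimal sketch of the coordination logic, with plain dicts standing in for MongoDB (cache) and MariaDB (persistent storage):

```python
class DataAccessAPI:
    """Coordinates reads/writes between a cache and persistent storage."""

    def __init__(self, storage):
        self.storage = storage  # persistent store stand-in: id -> item
        self.cache = {}         # cache stand-in: id -> item

    def get(self, item_id):
        # Serve from the cache when possible; otherwise fall back to
        # persistent storage and populate the cache on the way out.
        if item_id in self.cache:
            return self.cache[item_id]
        item = self.storage.get(item_id)
        if item is not None:
            self.cache[item_id] = item
        return item

    def add(self, item_id, item):
        # Write-through: persist first, then mirror into the cache.
        self.storage[item_id] = item
        self.cache[item_id] = item
```

The business classes only ever talk to `get`/`add`, so the caching policy can change (or the cache can be dropped entirely) without touching them.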
I'm in doubt whether MongoDB will really give me any performance boost, so could anyone share their thoughts on what I'm planning here?
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
The size of the database does not have very much to do with the performance. (Also remember that they have memory caching built in.)
The question is how you're going to use it.
The big weakness of MySQL/MariaDB is joins and aggregations with lots of data and a lot of tables, and I doubt that's how you're going to use it for an image database. Fetching single items or ranges from an indexed table is actually where MySQL/MariaDB excels. Keep in mind that Google actually uses a (highly modified) version of MySQL.
My key rule for performance is that it's the amount of data you need to handle at a given moment that matters.
I have started an article[^] on the program I want to write.
The idea behind it is that I can assign photos to albums - one photo can be part of any number of albums, and can have any number of tags assigned.
Furthermore, the photos shall be versioned, e.g. there is the base (original) photo and several versions to which Photoshop edits were applied.
Possible queries are (apart from the usual insert etc..):
- Get all child versions of a photo
- Get all photos for a given album
- Get all albums a photo appears in
- Get the parent versions of a photo
- Get tags assigned to a photo
- Get photos which have tags X & Y [...] assigned
- Get photos which have tags X & Z assigned, but not Y
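All of these queries map naturally onto a many-to-many join table plus GROUP BY/HAVING, which is exactly the indexed single-table access pattern MariaDB handles well. A sketch of the trickiest one ("tags X & Z but not Y"), using sqlite3 as a stand-in and invented table names:

```python
import sqlite3

# Photos matching ALL wanted tags ('x' and 'z') and NONE of the
# excluded ones ('y').
QUERY = """
SELECT p.id
FROM photos p
JOIN photo_tags pt ON pt.photo_id = p.id
WHERE pt.tag IN ('x', 'z')
  AND p.id NOT IN (SELECT photo_id FROM photo_tags WHERE tag = 'y')
GROUP BY p.id
HAVING COUNT(DISTINCT pt.tag) = 2  -- must match all wanted tags
"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE photos (id INTEGER PRIMARY KEY);
    CREATE TABLE photo_tags (photo_id INTEGER, tag TEXT);
    INSERT INTO photos VALUES (1), (2), (3);
    INSERT INTO photo_tags VALUES
        (1, 'x'), (1, 'z'),            -- matches
        (2, 'x'), (2, 'z'), (2, 'y'),  -- excluded: also tagged 'y'
        (3, 'x');                      -- missing 'z'
""")
print([row[0] for row in conn.execute(QUERY)])  # [1]
```

The album and version queries are simpler variants of the same join, so none of them needs the document store to be fast.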
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
I think you don't need the cache server, in your case it only adds to the complexity without adding much functionality.
Had you been working with large queries that aggregate down to small, reused data volumes, it would be a completely different situation.
I also think you're a bit wrong about SQLite. It's a file database alright, but so are (almost) all databases. The problem is that it's a SINGLE-file database, so when it has grown beyond the hard disk(s) on the server you're in trouble. But the real problem is that it's bad with concurrency.
So assuming that you anticipate some growth I agree with your choice of MariaDB for this purpose.