|
The answer is likely to be different for different database systems, so you most probably need to study the documentation for each one.
|
|
|
|
|
Thank you for your response. I appreciate it. I will try, even though it might be very hard for me with pretty much no experience at the DB level.
|
|
|
|
|
What you want is called "profiling", and it is used for debugging rather than logging. It should be easy on SQL Server. I would not use it for logging; usually a database has so many transactions that reading a log file yourself is not very productive.
If the application is yours (and the only one accessing the db), then you can make a decorator for IDbCommand and IDbConnection that wraps the existing commands and does the logging from there.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Hello "I appreciate your reply from Hell programmer" (just joking about the name, but serious about the appreciation part).
Thank you for your reply; as of right now it is totally out of my league, but it gave me the impression that your response is on to something I am really looking for. Your profile gives me the impression that you are more of a technology writer or blogger than a hard-core geek/programmer. Am I totally wrong?
You mentioned MS SQL is easy to do; but how about other databases like Oracle, DB2, MySQL, etc.?
Do you mind sharing your personal/work email?
Much appreciated, thank you.
|
|
|
|
|
Member 13536030 wrote: Am I totally wrong? I don't have a blog, and I didn't write any book.
Member 13536030 wrote: You mentioned MS SQL is easy to do; but how about other databases like Oracle, DB2, MySQL, etc.? For MSSQL, there is a profiler application that is part of the Enterprise edition; it'll show you any SQL command that is executed against the server. In Oracle they don't call it a profiler but a trace; each large database vendor has their own specific tools. These tools have the advantage that they show everything that is communicating with your database.
If you're going to use it for debugging, then you're only interested in the SQL that your application generates. In that case there are other tools that help, like MiniProfiler: A simple but effective mini-profiler for .NET and Ruby[^]. This requires a change in code, but does not depend on a specific type of database.
Member 13536030 wrote: Do you mind sharing your personal/work email? Yes. Any question can be asked and answered on the forum, for all to benefit. For you it has the added benefit that if I am wrong, someone will correct me.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I am creating an application that will read log/profile files and parse them to find and analyse SQL queries (to start with, only "select" statements) that are executed against a few particular fields and the tables/views that use those fields. So for this application to work, it should know the format of the file it reads. Let me see if I can be more specific:
I have a field called 'field1' used by a table called 'table1'. I want to turn on / configure something in my MySQL database (it could be the profiling tool you mentioned) with the goal that every night I can get a file that shows any "select" statements against this field and table. Is that possible?
Thanks.
|
|
|
|
|
The mini-profiler mentioned would do that. Tracing in MySQL is discussed here[^].
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Based on other responses.
So the database is MySQL, and you want to look at what statements were used. For that you want the "General query log".
MySQL :: MySQL 5.7 Reference Manual :: 5.4 MySQL Server Logs[^]
However, be very careful if you are using this on a production machine. You want to ensure that the log files do not use up the entire file system.
Member 13536030 wrote: if someone does a select * from any of the tables
Just noting that this is a bad idea. It shouldn't be allowed; it hinders maintenance work and things like exactly what you are doing.
Note that you can't tell from the log whether the client actually uses a specific column, even when it is named explicitly. But if "*" is used, then you know for a fact that the column is being returned.
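For reference, a minimal sketch of turning on the general query log at runtime in MySQL 5.7 (the log file path is only an example; adjust it for your server):

```sql
-- Route the general query log to a file (the path here is an assumption)
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';

-- ...let the server run for the period you care about, then switch it off
-- before the log eats the file system, as warned above:
SET GLOBAL general_log = 'OFF';
```

You could then search the file each night for the statements you care about (e.g. any line mentioning table1), but remember the log records every statement against the server, so keep an eye on its size.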
|
|
|
|
|
I've always done Linq to SQL, which I like. It's easy to set up and work with.
I follow a typical n-Tier architecture: UI <--> BL <--> DAL <--> DB
I've only set up Entity Framework once. When it generated the Data Context all the entities were in the same DAL project. This means that my UI, BL, and other projects in the solution all needed a reference to the DAL. This seems like a tight coupling. I was able, with a lot of work, to generate the entities into a separate project. Then, the other projects in the solution only needed to know about the new entities project, but again it was a lot of work to set up.
So, how is EF typically used? What's the right way to set up the DC and entities?
Thanks
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
Use the "Code First" approach, even if you're connecting to an existing database. That way, you can put your entities wherever you like, so long as the DAL project has a reference to them.
If you're using an existing database, there's even a wizard to create the code first classes for you, which you can then move to another project as required:
Entity Framework Code First to an Existing Database[^]
The BL layer might be able to reuse the EF entity classes. But you'll probably find that you need to make changes to the layout, particularly when dealing with navigation properties. In that case, you'll want a parallel set of DTO classes, and something like AutoMapper[^] to map data between them.
The UI layer will probably need its own parallel set of view-model classes, with UI-specific validation, property change handlers for WPF, etc.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Not if you lean on interfaces and follow the repository pattern to wrap your EF DataContexts. I generally throw a "core" or "common" library into my solutions to serve this purpose, and MEF my repository into the mix. This is also nice because it does not marry you to the EF implementation.
Yes, you will be coupled to an abstraction, but that's more or less the point in my eyes.
"There are three kinds of lies: lies, damned lies and statistics."
- Benjamin Disraeli
|
|
|
|
|
I want to set the old store from the previous new store using a query in MSSQL. For example,
Staff_Id Name Transaction_Date New_Store Old_Store
--------------------------------------------------
1 AA 1.1.2017 1
1 AA 1.2.2017 2
1 AA 1.3.2017 3
1 AA 1.4.2017 1
1 AA 1.5.2017 4
I want to change the table to following like that
Staff_Id Name Transaction_Date New_Store Old_Store
--------------------------------------------------
1 AA 1.1.2017 1 NULL
1 AA 1.2.2017 2 1
1 AA 1.3.2017 3 2
1 AA 1.4.2017 1 3
I wrote the query like this, but it's not correct; its output is random:
Update EC1 SET EC1.Old_Store=EC.New_Store
From Employee_Change EC , Employee_Change EC1
WHERE EC.Staff_Id=EC1.Staff_Id
AND EC.Transaction_Date>=EC1.Transaction_Date
|
|
|
|
|
Something like this should work:
UPDATE
EC
SET
Old_Store =
(
SELECT TOP 1 New_Store
FROM Employee_Change As EC1
WHERE EC1.Staff_Id = EC.Staff_Id
And EC1.Transaction_Date < EC.Transaction_Date
ORDER BY EC1.Transaction_Date DESC
)
FROM
Employee_Change As EC
;
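On SQL Server 2012 or later, the same idea can also be sketched with the LAG window function (assuming the Employee_Change table from the question). Window functions cannot appear directly in the SET clause, so the previous value is computed in a derived table and joined back:

```sql
-- Compute each row's previous New_Store per staff member, then join back
UPDATE EC
SET EC.Old_Store = P.Prev_Store
FROM Employee_Change AS EC
JOIN (
    SELECT Staff_Id, Transaction_Date,
           LAG(New_Store) OVER (
               PARTITION BY Staff_Id
               ORDER BY Transaction_Date
           ) AS Prev_Store
    FROM Employee_Change
) AS P
    ON P.Staff_Id = EC.Staff_Id
   AND P.Transaction_Date = EC.Transaction_Date;
```

The first row for each staff member has no previous row, so LAG returns NULL there, matching the expected output.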
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
This has been answered at Change old store to new store[^]
In future, do not cross-post across forums. Pick one and be patient - we do this in our spare time for free - sometimes it can take a little while
|
|
|
|
|
Hi,
I am running an SSIS package which takes FileName as a variable and runs the package. Strangely, when I run the package from a SQL Agent job, I get this error: Did not read byte array properly
But when I run the package with the file open, it reads the file and runs properly; if I close the file, it gives me the error. The package is using the Jet 4.0 drivers, and the file has multiple sheets in it; I am giving the name of the second sheet to read.
- Is it because of the multiple sheets, or
- is it because of the Jet drivers?
- Is the file path too long? (I saw it is 150 characters; the variable data type that holds the file name is String.)
- Or, when the full file path is combined with the connection string value, does it become too long to be held by the connection string of the source connection?
I am not sure what the reason for this error is.
Any suggestions welcome; I need some help. Thanks in advance.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Abdul asked: 1. Is it because of the multiple sheets? Try replacing "the file" (I'm guessing this is a workbook) with one with only a single sheet and see if the problem still happens.
Abdul asked: 3. Is the file path too long? (I saw it is 150 characters; the variable data type that holds the file name is String.)
4. Or, when the full file path is combined with the connection string value, does it become too long to be held by the connection string of the source connection?
Try shortening the path - e.g. put the file into a temp folder straight off the root, or map a drive to the location.
Given that it "works" when you have the file open, though, it is unlikely to be any of those. The suggestions are just to indicate that you could have easily discounted them for yourself by experimentation.
You are using very old Jet drivers, and you have not mentioned which version of Office or SQL. Try installing at least ACE 12.0 and see if that fixes it.
Beyond that, without knowing what the package is trying to do, we're fumbling about in the dark.
|
|
|
|
|
Yeah, we are using only the Jet drivers, but that's a limitation on the server for now. The Office file is a .xls file, which is the 97 format I think.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
indian143 wrote: if the file is open, its reading the file and running the Package properly
You might want to examine that assumption.
The fact that it doesn't produce an error that you see is not sufficient to prove that it worked. It must process the file and produce output that it could only have gotten from the file.
|
|
|
|
|
I'm getting ready to start a new C#/WPF project for a client. The database will be SQL Server. Another developer and I will both work remotely.
So there are two development PCs, each with SQL on them. Then my server will be used to test, so there's a copy of the DB there. Then there's the production DB on the client's server.
I expect that I'll be creating, editing, and deleting DB objects continuously. I will probably use scripts for these changes so I can then run the scripts on either the test or production servers. Also, other developers will need to run them.
The question is about how to manage the changes/scripts. What process do you follow for this? Is there a naming convention? How do you keep everything in sync?
Thanks
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
In a previous position, we used the following DB script pattern:
1) For each major release, have a set of "Create" scripts.
a) Create Table_1.0.0, Create_Index_1.0.0, Create_Sysdata_1.0.0 (or something like that)
2) Each time there is an upgrade, create a set of scripts
a) Upgrade_Table_1.0.0_1.0.01, Upgrade_index_1.0.0_1.0.1 (and so on)
Make sure you are using a source code control system where you would be checking in not only your source code, but your SQL scripts.
When you have a "build" you will also pickup the correct version of database scripts.
Also, you should have a Build / Deploy box where you can Create or Upgrade databases anytime for QA testing. It is always a good test to be able to take a backup of your client's database and run the upgrade scripts to prove them out.
The only version control system I've used is Subversion. If you are not familiar with version control, check it out and set up a proper team development environment; you will save yourself lots of headaches.
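As a hedged illustration of the pattern, an upgrade script such as Upgrade_Table_1.0.0_1.0.1 might look like this in T-SQL (the dbo.Customer table and Email column are made-up examples, not from the post):

```sql
-- Upgrade_Table_1.0.0_1.0.1.sql (hypothetical example)
-- Guard the change so the script is safe to re-run against any copy of the db
IF NOT EXISTS (SELECT 1
               FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Customer')
                 AND name = N'Email')
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
END
```

Guarding every change this way is what makes it practical to take a backup of the client's database and prove the upgrade scripts out, as suggested above.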
Just my 2 cents.
David
|
|
|
|
|
General concept
- Database itself has a version
- Wrap the changes into an application that applies those to the database
- Use a table in the database that tracks the current version.
An existing tool for this is Liquibase; it seemed decent when I used it.
You can roll your own as well.
An additional feature that I consider essential, though it adds complexity, is that the application also tracks the version of the database it expects. If the database is the wrong version (per the table above), then the application will exit on start-up.
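A sketch of the version-tracking table described above, in T-SQL (all names are illustrative, not from any particular tool):

```sql
-- One row per applied change
CREATE TABLE dbo.SchemaVersion
(
    Version   varchar(20) NOT NULL PRIMARY KEY,
    AppliedOn datetime2   NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Each upgrade records itself once its changes have succeeded
INSERT INTO dbo.SchemaVersion (Version) VALUES ('1.0.1');

-- The application reads this at start-up to decide whether to run;
-- note MAX on a varchar sorts lexicographically, so this only works
-- while version strings happen to sort naturally (e.g. fixed-width parts)
SELECT MAX(Version) FROM dbo.SchemaVersion;
```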
|
|
|
|
|
We use Red-Gate SQL tools against the UAT server. We also make strong use of schemas: common objects use dbo, specific objects use a different schema; a developer generally works on one schema at a time, and dbo changes are discussed before implementing.
Scripts are run against the UAT server using SQL Compare. Master table data is synched via SQL Data Compare.
PS: I do not work for Red-Gate, but I have been using their paid-for tool set for well over a decade.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
This is how I've done it for years: right after login, the application compares the db version to the app version and applies changes (logic in the app itself) if needed. If the version changed, the db version is updated to reflect that. In addition, I also use a special database table that records schema changes as they are made.
If it happens that the database version is higher than the app, the app prompts for an update. If the user declines, the app complains and exits.
By the way, comparing version numbers is much easier if you convert them to long ints using something like this:
intAppVersion = (major * 100000) + (minor * 1000) + revision
This also makes it easier to run a loop through db version checks until you hit the current app version.
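For example, the same conversion could be checked in T-SQL (the column names here are hypothetical):

```sql
-- Version 1.2.3 -> (1 * 100000) + (2 * 1000) + 3 = 102003
SELECT (Major * 100000) + (Minor * 1000) + Revision AS VersionInt
FROM (VALUES (1, 2, 3)) AS V (Major, Minor, Revision);
```

Note the encoding only stays unambiguous while minor stays below 100 and revision below 1000.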
"Go forth into the source" - Neal Morse
|
|
|
|
|
That sounds very similar to Entity Framework Code First Migrations[^].
However, I don't think that's a good approach. Apart from your application needing to check the database version every time it connects, it also has to connect as a user which has permission to modify the structure of the database. That always seems like a violation of the principle of least privilege[^] to me. 99% of the time, your application doesn't need to modify the database structure, so it shouldn't have permission to do so.
You might be able to work around that by using a second login with the elevated permissions. But in most cases, I think it's easier to move the database upgrade code outside of the application, and manage it as part of the upgrade process.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Richard Deeming wrote: That sounds very similar to Entity Framework Code First Migrations
Maybe they got it from me? I've been using this approach since about Y2K.
You do have some good points regarding the principle of least privilege, and I wholeheartedly agree with that philosophy/architecture for some applications; most of the ones I deal with demand simplicity and the ability for an end user to install a module without help from IT, hence dbo.
"Go forth into the source" - Neal Morse
|
|
|
|
|