The last app I worked on was a shrink-wrap CRM application that let customers add their own business objects and tables, so we had to generate SQL dynamically. There are pros and cons to any approach. When I'm designing a basic application, I usually start off with stored procedures, just because I like their simplicity.
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon
----
Speed: you only pass the SP its parameters, and it passes back only the result set, not the full table.
----
A Wong wrote: you only pass the SP its parameters, and it passes back only the result set, not the full table
Who said they were passing full tables? If you run a query against a table you only get back what you asked for. You never get back the full table unless that is explicitly what was requested.
----
Colin Angus Mackay wrote: Who said they were passing full tables?
Well, he/she said:
What is the benefit I get using stored procedures instead of tables in my web application using ASP.NET with C#
From what I understand of the question, he/she is pulling in whole tables and splitting out the wanted information in C#/ASP.NET. Of course, I could be wrong in my interpretation, as the question isn't entirely clear.
----
A Wong wrote: From what I understand of the question
Well, you might be right. But who in their right mind would do that?
----
Convenience: if you use stored procedures, you embed a catalog of the SQL queries used against the database within the database itself. That can make it more convenient to add indexes for those queries later, without having to search through the text in your profiler output.
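To make that concrete, here is a minimal sketch only - the table, column, and procedure names are hypothetical, not taken from this thread:

-- The query text lives in the database catalog, so a DBA can find it
-- (and add an index for it) without trawling through profiler output.
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;  -- dbo.Orders(CustomerId) is an obvious index candidate
END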
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon
----
Not sure I should ask this here, but I don't really know a better forum for it, so here goes.
For about 4 hours now I have (suddenly) been unable to connect to my company's SQL Server.
The error message: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified
I tried everything I could find on the net (using Google) but nothing works.
I checked the server and the client protocols. All are enabled and match.
Using named pipes or TCP/IP doesn't change anything (same error).
My connection string:
"Data Source=<servername\instancename>;Initial Catalog=<dbname>;User ID=<id>;Password=<pass>"
The connection strings I tried:
"Data Source=tcp:<servername\instancename>,1433;Initial Catalog=<dbname>;User ID=<id>;Password=<pass>"
"Data Source=np:<servername\instancename>;Initial Catalog=<dbname>;User ID=<id>;Password=<pass>"
I'm still able to ping the server and even open network shares (read/write to the network share).
I tried making a DSN connection but that fails as well.
The only thing that changed is that the client was joined to the domain yesterday, but I was still able to connect this morning, and at the moment I'm still logging in locally.
Otherwise nothing changed on the server or the client (no updates, no new software, ...).
There is no firewall enabled on the server or the client.
The really strange thing is that when I started my program at 12:25 everything worked perfectly, but when I started it 2 minutes later I began to get this error.
In those 2 minutes nothing happened (I didn't even change any code).
The client logs show nothing for that moment.
I'm at a loss as to what it might be.
Any help would be appreciated.
EDIT:
The problem appears to have solved itself.
I just tried to connect again and now I have no problems anymore.
If anyone has any suggestions as to what it might have been, please tell me, because I hate it when things like this happen.
modified on Tuesday, June 24, 2008 10:25 AM
----
Hi,
I am using the Firebird ISQL command-prompt database.
I am searching for export and import options, and I found the FBExport tool.
Now my problem is how to use the FBExport tool.
I downloaded the files available on SourceForge.net, extracted them, and ran the exe,
but it just saves the path...
Please tell me how I can use this FBExport tool.
murali krishna
----
I already replied to this, suggesting you contact the product vendor. Nobody else replied, either because they couldn't suggest anything better or because they had no idea. Posting the same question again (with only one other question in between) will not help.
Bob
Ashfield Consultants Ltd
----
How can I register a new SQL Server in the Enterprise edition?
----
Menu: View > Registered Servers, then right-click and choose New > Server Registration.
Bob
Ashfield Consultants Ltd
----
Hi,
I am using the Firebird ISQL command-prompt database.
I am searching for export and import options, and I found the FBExport tool.
Now my problem is how to use the FBExport tool.
I downloaded the files available on SourceForge.net, extracted them, and ran the exe,
but it just saves the path...
Please tell me how I can use this FBExport tool.
murali krishna
----
Have you tried the Firebird ISQL web site? They probably have more FAQs - it is their product, after all.
Bob
Ashfield Consultants Ltd
----
If I had three tables (the 3T) called Building, Floor, and Room (the data is what you'd probably expect) and I wanted to be able to manage inactive periods (or permanent shut-downs) of any one record in any of those tables, what would be the best way to do it?
My first thought (which doesn't seem elegant) was to create an inactiveId field in the 3T, referencing an Inactive table which would store the beginDate, endDate (if any), and inactivityReason. Now consider a floor is scheduled to undergo renovations from January 10th to February 10th and one of the rooms of that same floor is scheduled for a permanent installation of WiFi and AC outlets for every desk from January 2nd to January 29th.
Should I store the fields buildingId, floorId, and roomId in the Inactive table or is the inactiveId in the 3T enough? Should I create a trigger (or a scheduled function) to remove any "expired" inactiveId from the 3T (i.e. the affected room would have a null inactiveId on January 30th and the affected floor would follow suit on February 11th)?
If the above is completely wrongheaded, please let me know. I'm not a DBA, just a lowly programmer. And, since I want the design to be easy for future programmers to use, I don't need everything to be strictly normalized if there's a usability payoff for the end user (although my instinct is usually to normalize to 2NF or 3NF).
Alex
----
A fourth table with a table type, an ID field, inactive start and end dates, and a reason code, where the type and ID are the primary key, will give you a good starting point. Strictly it is better design to create three separate tables, but done right the one will serve you well.
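A minimal sketch of that design, assuming SQL Server - names and types are illustrative only:

-- One table covering inactivity for all three of the 3T.
CREATE TABLE dbo.Inactive
(
    TableType CHAR(1)      NOT NULL,  -- 'B' = Building, 'F' = Floor, 'R' = Room
    RecordId  INT          NOT NULL,  -- key of the affected row in that table
    BeginDate DATETIME     NOT NULL,
    EndDate   DATETIME     NULL,      -- NULL = permanent shut-down
    Reason    VARCHAR(200) NOT NULL,
    CONSTRAINT PK_Inactive PRIMARY KEY (TableType, RecordId)
    -- widen the key with BeginDate if a record can have several inactive periods
);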
Need a C# Consultant? I'm available.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
----
If this were the main consideration in the design of the database, I would create the 3T primary keys the same way I would for an accounting system: a character key with the first 3 digits designating the building, the next 3 the floor, and the last 3 the room (adjusted to your specs, of course). This way you could cover ranges of any size for your shutdowns without having to create multiple records for each structure.
It would come in very handy when developing a calendar of building projects. I usually try to avoid having any real meaning (beyond row identification) in my primary keys, but in this case it may make sense.
Building 5: 005000000
Second floor, bldg 5: 005002000
Room 234, 2nd floor, bldg 5: 005002234
Inactive Table
--------------
PrimaryKey
StartStructureId
EndStructureId
StartDate
EndDate
Reason
----
I would create the additional table keyed on Room, with an ID and from/to dates. If you deactivate a floor, then create a record for each room on the floor. Please do not prefix your ID field with any intelligence - it is fundamentally bad design.
The reason for the room-level table is to give you the most flexibility without inflicting a complex set of rules (around the level you want to disable). You could then create views that count the number of disabled rooms and compare it to the floor's room count to identify a disabled floor/building.
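For instance - purely a sketch, with hypothetical table and column names - assuming a RoomInactive table holding RoomId, FromDate and ToDate (NULL meaning open-ended):

-- A floor counts as disabled when every one of its rooms has a
-- RoomInactive row covering today. Assumes at most one current
-- inactivity row per room.
CREATE VIEW dbo.DisabledFloors
AS
SELECT r.FloorId
FROM dbo.Room r
LEFT JOIN dbo.RoomInactive ri
       ON ri.RoomId = r.RoomId
      AND GETDATE() >= ri.FromDate
      AND GETDATE() <= ISNULL(ri.ToDate, GETDATE())
GROUP BY r.FloorId
HAVING COUNT(*) = COUNT(ri.RoomId);  -- matched rooms = all rooms on the floor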
Never underestimate the power of human stupidity
RAH
----
OK, I would set this up and try it, but I don't have the equipment handy.
What would you say the TPS of SQL Server 2005 on a 64-bit Windows server would be for the following scenario:
Table 1: each record is 1 KB.
Table 2: an audit table matching access records to Table 1; each record is 1 KB.
All either inserts or selects? The data in the other tables can be moved to another server if necessary. Is there anyone with experience with a dedicated SQL Server machine who can tell me what I can expect as average TPS, on, say, an 8-core machine with 64 GB of RAM and SCSI HDDs of sufficient size? (Assuming the machine specs are realistic.)
Need a C# Consultant? I'm available.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
----
Your performance will be I/O bound.
It seems to be very hard for programmers to understand this, but RAM and processor cores have very little to do with database server performance when writing. The bounding factor for writes (inserts, updates, deletes) is how fast the transaction log records can be written out to the log file. Transaction log writes are ideally sequential and are fastest when the drive is dedicated to the transaction log, as the disk head will always remain in approximately the right position to write the next batch of log records. Putting the data on the same disk as the log will cause dirty page flushes to be interleaved with log writes, causing the log writes to be held up while the head moves to and from the data pages.
If you do have dedicated drives, it's best to preallocate the files to close to the capacity of the drive (or array) and to ensure each file is contiguous. There is a benefit to sequential I/O, but file fragmentation will turn what looks sequential to SQL Server into random access on the drive itself. File-grow operations are very expensive.
You should consider the capacity of the drives in terms of I/Os per second, not in terms of raw gigabytes writeable. Unfortunately you won't see this quoted for drives as it depends so much on the workload and pattern of access. SQL Server does batch up log records and will write them in batches of anything from 512 bytes up to 16kB and possibly even more, but remember that the log records for a transaction must be written fully to disk before SQL Server will consider the transaction committed (the Durable property, the D of the ACID properties).
In terms of selecting data, that depends on how you access it. SQL Server caches data in RAM as it is accessed (probably randomly, one 8 KB page at a time as it's required, although SQL Server also does 'read ahead' to perform larger I/Os in one command). Ideally you want an index that covers all of the terms in your WHERE clause; the query plan will show this as an 'index seek' (a clustered index seek for the clustered index).
However, with sufficient RAM and the data already cached, you may not notice that the query is in fact simply reading every row from start to finish, shown as a 'table scan' or a (clustered) 'index scan'. This will appear as 100% CPU, but adding more or faster CPUs generally won't make it go any faster and it'll still show 100% CPU: what you're actually seeing is the processor bringing the data from main memory into its caches, which is bound by the CPU and/or memory bus speeds. The OS can't account for how many CPU cycles were burnt retrieving data through the cache hierarchy; it just sees them as cycles consumed by this thread. In contrast, when retrieving data from disk, it knows the thread can't be scheduled. (SQL Server actually uses asynchronous I/O: it puts a thread waiting for I/O onto another request so it doesn't incur the OS scheduling overhead, unless there's no more work to do.)
On a lightly-loaded multiprocessor/multicore system, you might also see some parallelism being invoked so each core processes one part of the data set, if the table is already cached in RAM. Don't be fooled by this: once you have a lot of concurrent transactions, the other cores will be busy processing those requests and you'll see very little parallelism.
Best case, you can get very high SELECT performance if the query uses a highly selective index (so that one unique row will be found) and the data is already cached in RAM: it will only need to follow a few page references before locating the actual row. Worst case is a non-clustered index scan of a large table not currently in memory or that won't fit in memory. (Non-clustered index lookups require a clustered index or bookmark lookup on the base table if you've SELECTed a column that doesn't appear in the non-clustered index.)
All you can really do is try to provide a representative data set and workload for the system and try it out.
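For example, a quick way to see the I/O and CPU cost of a candidate query on a test box (the table and predicate below are placeholders for your own):

SET STATISTICS IO ON;    -- reports logical/physical reads per table
SET STATISTICS TIME ON;  -- reports CPU and elapsed time

SELECT Col1, Col2
FROM dbo.Table1
WHERE KeyCol = 42;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;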
DoEvents: Generating unexpected recursion since 1991
----
And as I don't have the machine, I am still left with my question. I suppose I can rephrase for clarity: estimated TPS on a $5,000 machine for the given simple table structure. I am hoping to fit at least 7 records per data page to make life simple.
The key structure is actually fairly complicated, making the index question too hard to ask for such a simple task, so for brevity assume simple 64-bit integer keys, randomly distributed, an equal ratio of inserts to selects (my scenario will involve one clustered and one non-clustered index), and millions of records, perhaps more.
Need a C# Consultant? I'm available.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
----
Mike's answer is correct. You really need more info to even estimate the answer. What's the sequential I/O rate of the transaction log drive for the database, and is it dedicated to the database or shared between databases? Are the log files on a dedicated drive? As far as reads go, how spread out do you expect the queries to be - will the most-queried results be cached in memory, or will the queries be spread out, making reads more I/O bound?
The last time I specced out a high-performance database server, most of the money went into a 20-drive RAID pod and memory for read cache; the processor wasn't much of an issue. But I haven't really looked at any of that stuff since SANs became more common.
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon
----
Is it really so hard to either not reply to my question or to give rough estimates?
Need a C# Consultant? I'm available.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
----
Sorry, but I would have found my response useful, the same as Mike's. I have 20 orange trees; how many apples can I harvest?
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon
----
My question was more akin to: I have 20 acres and a decent tractor; how many bales of hay could I expect to harvest? I can pick up the phone right now and get many answers to that question, including ranges covering many factors. Yet for some reason there seems to be a desire for people to make posts which don't offer substantive support. If you don't have an answer or a range, and your response is exactly the same as someone else's, why bother with it? It certainly doesn't help me out. If I wanted the answer "It depends on the exact configuration used" I would never have asked the question.
Again, not to point out the obvious, but unless you think I am a complete moron, what possible help do you think your post or Mike's offered me?
Need a C# Consultant? I'm available.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
----
It should tell you that you need to gather some more information. For instance, you could run some transactions through your dev database, note the number of I/Os generated per transaction, and from that get a rough estimate based on the number of I/Os your server can handle.
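As a sketch (the database name below is a placeholder), SQL Server 2005's file-stats DMV reports cumulative I/O counts; snapshot it before and after a batch of test transactions and subtract:

-- Cumulative reads and writes per file for one database.
SELECT DB_NAME(database_id) AS DatabaseName,
       file_id,
       num_of_reads,
       num_of_writes,
       num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID('MyTestDb'), NULL);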
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon