If you want the distinct count of FITEMNO, use:
SELECT COUNT(DISTINCT FITEMNO)
FROM ARTRS01.dbf
WHERE FCUSTNO=@FCUSTNO
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
I tried that at first but got an error:
(missing operator in query expression 'COUNT(distinct FITEMNO)')
I did some research and ended up with the example in my post.
So I thought it was either too advanced for the old FoxPro, or an OLE DB thing.
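If the provider rejects COUNT(DISTINCT ...), a common workaround is to count the rows of a derived table instead. A minimal sketch, assuming the VFP OLE DB provider accepts a subquery in the FROM clause:
SELECT COUNT(*) AS DistinctItems
FROM (SELECT DISTINCT FITEMNO
FROM ARTRS01.dbf
WHERE FCUSTNO=@FCUSTNO) dt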
|
I must be missing something here.
Why can't you just:
SELECT item, SUM(qty) AS qty
FROM MyTable
GROUP BY item
|
I ended up doing something similar. I wrote one function to get the distinct items, then went back and got the sums with the distinct item list. I don't know what I was thinking; I was trying to do it all in one shot.
" SELECT " & _
" SUM(FSHIPQTY) " & _
", SUM(FAMOUNT) " & _
" FROM ARTRS01H.dbf " & _
" WHERE FCUSTNO=@FCUSTNO " & _
" AND FITEMNO=@FITEMNO "
|
SELECT h.FITEMNO, SUM(h.FSHIPQTY) AS TOTAL_QTY
FROM ARTRS01H.dbf h
WHERE h.FCUSTNO=@FCUSTNO
GROUP BY h.FITEMNO
|
I finally got it in 1 shot. Runs super fast now.
Customer complained about the 5 minute run time, so I took another stab at it.
Don't know why I got it this time; perhaps the nap and the beers!
SELECT
DISTINCT v.FITEMNO
, SUM(v.FSHIPQTY) AS FSHIPQTY
, SUM(v.FSHIPQTY * v.FPRICE) AS FAMOUNT
, (SELECT FDESCRIPT FROM ICITM01.dbf WHERE FITEMNO=v.FITEMNO) AS FREALDESC
FROM ARTRS01H.dbf v
WHERE v.FCUSTNO=@FCUSTNO
GROUP BY v.FITEMNO
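As an aside, each scalar subquery runs once per item group; on larger files a join usually reads better and runs faster. A hedged sketch of the same result, assuming FITEMNO is unique in ICITM01:
SELECT v.FITEMNO
, SUM(v.FSHIPQTY) AS FSHIPQTY
, SUM(v.FSHIPQTY * v.FPRICE) AS FAMOUNT
, i.FDESCRIPT AS FREALDESC
FROM ARTRS01H.dbf v
JOIN ICITM01.dbf i ON i.FITEMNO = v.FITEMNO
WHERE v.FCUSTNO=@FCUSTNO
GROUP BY v.FITEMNO, i.FDESCRIPT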
|
Cannot connect to PR\R.
------------------------------
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No connection could be made because the target machine actively refused it.) (Microsoft SQL Server, Error: 10061)
|
Put simply, the error means that the machine you are trying to connect to exists, but no SQL Server service could be found on it. A few things to check (illustrated in the sketch after this list):
1. Check that the machine name/IP address is the right one.
2. Is SQL Server installed as a named instance? In that case you may need to add the instance name to your address.
3. Is SQL Server using the default port (1433), or was it installed with a different one?
4. You may have a firewall between you and the SQL Server; check it and open ports as needed.
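For illustration, here are hypothetical connection strings covering points 1-3; the server, instance, database, and port values are placeholders, not taken from the post above:
' Hypothetical values only - replace server, instance, database and port with your own.
Dim defaultInstance As String = "Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=True"
Dim namedInstance As String = "Data Source=MYSERVER\MYINSTANCE;Initial Catalog=MyDb;Integrated Security=True"
Dim customPort As String = "Data Source=MYSERVER,14330;Initial Catalog=MyDb;Integrated Security=True"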
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
|
With SQL, people often say you shouldn't do 'SELECT *'. I tend to write highly optimized and selective queries. Do others write selective queries, or do you think this is unnecessary? The benefit will obviously vary depending on the size and usage of the table. I'm considering simplifying my architecture by doing all of the SELECTing on the web server. There will always be special cases, e.g. massive tables, but everything is fairly small in this particular application.
There are two main benefits I can see from selective queries: less data is transferred, and covering indexes can be much smaller. I'm not sure the amount of data makes much difference when we'll only be getting one screen (e.g. 10-50 records) of data at a time.
It would be great to hear what others do and think.
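For concreteness, this is the shape of query I mean; the table and columns are made up:
-- one screen of data, only the columns the page needs
SELECT TOP 50 OrderId, CustomerName, OrderDate
FROM Orders
ORDER BY OrderDate DESC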
|
You are doing it the right way.
|
If you want all the columns, then I see no problem with using *. The primary issue is when you use * even though you want only a few columns, and some of the unneeded columns contain large data. This can also creep in when a new column is added later.
Additionally, there may be times when a column is removed or renamed -- this will likely cause a problem, but do you want the problem reported when the data is queried, or farther downstream? Early detection is probably better.
Someone here (other than me) wrote a good rant against * some years back, but I'm having trouble finding it.
Edit:
SQL Server DO's and DONT's[^]
SQL Wizardry Part 2 - Select, beyond the basics[^]
You'll never get very far if all you do is follow instructions.
|
To add to the other comments, using 'SELECT *' can also cause issues if columns are added or the order is rearranged. If your application is expecting data in a particular column, it may no longer be there; and if your application is not expecting the columns that have been added, why spend the effort retrieving the data and parsing out the unwanted columns?
|
Member 4487083 wrote: With SQL people often say you shouldn't do 'SELECT *'. I also tell people not to run their stupid queries without starting a transaction that can be safely rolled back (see the sketch below).
Again, you DON'T do a SELECT *. It's not that you save a lot by omitting a DateTime column - but it would prevent that 2Gb blob field that was added last month from being pulled over the network with each and every friggin' request, killing the network and the database server. Or a nice calculated field that cripples the DB server.
It doesn't take much time, and it makes the application a bit more robust. It also makes it easier for me to debug when I get thrown into your team as a maintenance programmer.
Yes, it takes extra time, but it has a good ROI. It's not a religious thing - I won't go medieval if you do a simple "SELECT * FROM". Still, if you do it in a query that contains several joins, you'll get this lecture, as each extra table means another chance of pulling columns you don't need.
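A minimal sketch of that habit; the table, columns, and the typo being fixed are invented for illustration:
BEGIN TRANSACTION;
UPDATE Customers   -- hypothetical table
SET Region = 'West'
WHERE Region = 'Wset';
-- inspect the result first...
SELECT CustomerId, Region FROM Customers WHERE Region = 'West';
ROLLBACK TRANSACTION;   -- ...then switch to COMMIT once it looks right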
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
Eddy Vluggen wrote: Again, you DON'T do a SELECT * . It's not that you save a lot by omitting a DateTime column - but it would prevent that blob-field of 2Gb each that was added last month to be pulled over the network with each and every friggin' request
That wouldn't be an issue in my case. I would never store that kind of stuff in this particular database, as it's in Azure and would cost me a lot of money. Something large would go in a blob or somewhere else.
|
Member 4487083 wrote: There will always be special cases eg. massive tables, but everything is fairly small in this particular application.
If there are in fact many columns and you only want a couple, then you should list them regardless of any other consideration.
However, I also do it because there is no guarantee of column ordering with '*', and that can matter in a variety of ways, such as when using external APIs, dumping data, etc.
|
jschell wrote: However I do it because there is no guarantee to ordering with '*' and that can matter in a variety of ways such as when using external APIs, dumping data, etc.
I actually use an ORM so it doesn't use * but lists every column (same thing from a coding point of view).
|
Like you I use an ORM (I think all of us do), however I never use *; I always explicitly list the columns, just because it is good discipline. I also work with small datasets and almost never store blob/binary.
Never underestimate the power of human stupidity
RAH
|
To add to the other comments: if you use named columns, the possibility of using a covering index increases.
Here[^] is some recommended reading on the subject.
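To illustrate, a hypothetical covering index in T-SQL; the table and columns are invented:
-- Covers "SELECT CustomerId, OrderDate FROM Orders WHERE CustomerId = @id"
-- without touching the base table, because every referenced column
-- lives in the index itself.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON Orders (CustomerId)
INCLUDE (OrderDate);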
|
Just remembered why else I don't like 'SELECT *'.
I had to debug a stored procedure that was failing on a UNION, because the writer had a 'SELECT *' on a table from one database unioned with a 'SELECT *' on an archival version of the table from another database.
When the vendor the system was purchased from updated the production table to add a column, the union blew up because the tables no longer matched - and the added column wasn't even necessary for the archival copy.
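An explicit column list sidesteps that failure; a sketch with invented table and column names:
-- Survives new columns on either side, because both halves
-- project exactly the same explicitly named set.
SELECT OrderId, CustomerId, OrderDate FROM Prod.dbo.Orders
UNION ALL
SELECT OrderId, CustomerId, OrderDate FROM Archive.dbo.Orders;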
|
The comments have been useful and pretty much match my thoughts. I was starting to think that maybe my ideas were a little out of date. NoSQL is moving more towards denormalization, which goes against a lot of what we do with relational databases.
At the time of writing the post, I hadn't put much thought into joins, which is where the biggest problems will be. I'm looking at some other ways to simplify my architecture without sacrificing best practices and performance.
|
I've been looking at repository pattern implementations (using .NET and EF). Many of them aren't selective.
One of the most common methods is GetByID. This gets the whole record. Is this really bad? It returns one record, and in most systems it uses the primary key.
One of the most shocking things I have seen is List<myentity> GetAll(). This returns ALL records in a table, which is clearly going to become an issue in large systems.
Another way some systems work is by using DDD and having aggregates return all related data. This tends to keep the code simple, but I have doubts about its scalability. Has anybody got experience with a system like this?
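For contrast, a hedged sketch of a paged, selective repository method; MyContext, Order, and OrderSummary are hypothetical types, not from any post above:
' Sketch only - assumes EF, Imports System.Linq and System.Collections.Generic.
Public Function GetPage(pageIndex As Integer, pageSize As Integer) As List(Of OrderSummary)
    Using ctx As New MyContext()
        ' Project to just the columns the screen needs, then page;
        ' EF translates this into a selective, paged SQL query.
        Return ctx.Orders.
            OrderByDescending(Function(o) o.OrderDate).
            Select(Function(o) New OrderSummary With {
                .Id = o.Id,
                .CustomerName = o.CustomerName,
                .OrderDate = o.OrderDate}).
            Skip(pageIndex * pageSize).
            Take(pageSize).
            ToList()
    End Using
End Function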
|
I'm back to the old FoxPro databases using OLE DB.
I'm supposed to pull all the items in a Sales Order, then go back into the history database file and get the total QTY and AMOUNT purchased in the past as well.
I can do the math, but wow it's slow! And I have an issue at the end with formatting the number with 2 decimal places for the cents. So I'm not sure if I need to fix it in the database call, or try to fix it when presenting the number in the PDF.
So $85.30 appears as $85.3
In the past I have had trouble, but not for years since.
The other one, i.FPRICE, works fine, so I'm scratching my head on this one.
Here's what I have
SELECT
i.FCUSTNO
, i.FORDQTY
, i.FITEMNO
, i.FDESCRIPT
, i.FPRICE
, (SELECT SUM(s.FSHIPQTY) FROM ARTRS01H.dbf s WHERE s.FCUSTNO = i.FCUSTNO AND s.FITEMNO = i.FITEMNO) AS FSHIPQTY
, (SELECT SUM(s.FAMOUNT) FROM ARTRS01H.dbf s WHERE s.FCUSTNO = i.FCUSTNO AND s.FITEMNO = i.FITEMNO) AS FAMOUNT
FROM SOTRS01.dbf i
WHERE i.FSONO=@FSONO
The reader
If Not (reader.IsDBNull(5)) Then sIRi(rC).FATD_SHIPQTY = reader.GetValue(5)
If Not (reader.IsDBNull(6)) Then sIRi(rC).FATD_TOTAL = reader.GetValue(6)
And in ASP.Net to create a PDF of the Order Confirmation
Dim lbl_item_FATD_TOTAL As Label
lbl_item_FATD_TOTAL = New Label("", 373, currentY, 45, 12, Font.Courier, 9, TextAlign.Right)
lbl_item_FATD_TOTAL.Text = String.Format("{0:c}", sSOI(idx).FATD_TOTAL)
currentPage.Elements.Add(lbl_item_FATD_TOTAL)
Any Suggestions would be cool.
|
jkirkerx wrote: Any Suggestions would be cool. It's an unformatted decimal/double (as it should be) until you convert it to a string. The line below gives a predictable result:
Console.WriteLine(String.Format("{0:c}", 3.8));
Console.ReadLine();
There could be two things going on here - either your formatting isn't being applied, or it is formatting correctly for a different culture. If your formatting is applied, then check out the regional settings in the configuration - if your end-user behaves like I do, there'll be a non-default value.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
I just figured it out!
It was the PDF generator: the number was being formatted correctly, but the space I allocated for the data was not wide enough, so it truncated the decimal values after the point.
That was a head scratcher that I spent hours on this morning.
But thanks for the response and help, appreciate it.
|
I have XML files with namespace references. If I strip the namespace references from a file, I can read the elements and attributes using XQuery. However, with the namespace references in place, I get no results. I can't figure out how to add the references into the XQuery call.
XML
<ClinicalDocument xmlns="urn:hl7-org:v3" xmlns:voc="urn:hl7-org:v3/voc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<recordTarget>
<patientRole>
<name>
<birthTime value="00000000">
SQL
SELECT
x.value('(patientRole/patient/birthTime/@value)[1]','varchar(50)') as birthDate
FROM @XML.nodes(
'declare namespace xlmns="urn:hl7-org:v3";
declare namespace xlmns:voc="urn:hl7-org:v3/voc";
declare namespace xsi="http://www.w3.org/2001/XMLSchema-instance";
/ClinicalDocument/recordTarget') as Addr (x)
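For what it's worth, in SQL Server the usual way to bind namespaces for a query's XML methods is the WITH XMLNAMESPACES clause; a minimal sketch against the document above, assuming @XML is an xml variable:
WITH XMLNAMESPACES (DEFAULT 'urn:hl7-org:v3', 'urn:hl7-org:v3/voc' AS voc)
SELECT
x.value('(patientRole/patient/birthTime/@value)[1]','varchar(50)') as birthDate
FROM @XML.nodes('/ClinicalDocument/recordTarget') as Addr (x)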