|
|
Is it possible to use your admin Harry Potter powers to move the entire thread there? I can't seem to do that myself...please.
|
|
|
|
|
Sorry, I'm not an administrator.
|
|
|
|
|
Don't worry, think of it as a suggestion for the future.
|
|
|
|
|
What, are you kidding? The Lounge is exactly the place for a good rant about the tech we have to use. I suggest you clarify that with the OP.
[edit] Oops, I thought I was in the Lounge. I'll just remove that foot from my mouth![/edit]
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
No worries, I've done it myself ...
|
|
|
|
|
In 2005, I inherited a project which made use of NHibernate. Wow, did that look fancy! How easily you could add data to a datagrid and navigate along foreign key relationships...
But there was a catch (no, it did not return 22): NHibernate treated every foreign key relationship as if it had referential integrity, even when no referential integrity was designed into the database. And working in a really-existing company of today's capitalism, reality did not always match theory (that's why we could not use referential integrity on the database). We had big problems when data had to be inserted into a "dependent" table while the corresponding data in the "master" table were not yet present...
With that experience, I still avoid ORMs, though I believe things might be less cruel nowadays.
|
|
|
|
|
I've been messing with this for hours today. I'm trying to do this in one call to the database. Normally I would just get a list of unique items first, and then go back and get the numbers. Perhaps I'm trying to do something that can't be done, or it's just beyond my knowledge level; I'll assume the latter.
I need to get the qty sum and records of a sales history file
Item | qty | custNum | date
09-170 | 5 | 0014 | 01/20/1990
09-170 | 24 | 0014 | 02/21/1991
17-209 | 24 | 0014 | 03/15/1997
17-209 | 24 | 0014 | 08/15/2007
Results should be
Item | qty | desc | avg price
09-170 | 29 | trowel | 3.00
17-209 | 48 | kneepad | 24.00
So I wrote this to get the count of unique items, which is 2 in the example. I'm not sure whether my code returns the right number, because the real database file contains thousands of records; I get 14, which doesn't reflect the example above.
SELECT COUNT(FITEMNO) AS cCount
FROM
(
SELECT
FITEMNO
FROM ARTRS01.dbf
WHERE FCUSTNO=@FCUSTNO
GROUP BY FITEMNO
)
UNION ALL
SELECT COUNT(FITEMNO) AS hCount
FROM
(
SELECT
FITEMNO
FROM ARTRS01H.dbf
WHERE FCUSTNO=@FCUSTNO
GROUP BY FITEMNO
)
I wrote this to get the records, and I get 36. I'm just testing on the history file, so there is no UNION to join the two database files yet. I keep getting 36 records with duplicates instead of 14 unique records.
SELECT FITEMNO, FSHIPQTY AS hItems
FROM
(
SELECT h.FITEMNO, h.FSHIPQTY FROM ARTRS01H.dbf h WHERE h.FCUSTNO=@FCUSTNO GROUP BY FITEMNO, FSHIPQTY
)
|
|
|
|
|
If you want the distinct count of FITEMNO, use:
SELECT COUNT(distinct FITEMNO)
FROM ARTRS01.dbf
WHERE FCUSTNO=@FCUSTNO
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
I tried that at first but got an error:
(missing operator in query expression 'COUNT(distinct FITEMNO)
So I did some research and ended up with the example in my post.
I figured it was either too advanced for the old FoxPro, or an OLE DB limitation.
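The derived-table query in my first post is basically the standard workaround for that: group first, then count the grouped rows. A cleaned-up sketch of that idea (note the alias on the derived table, which some providers insist on):
SELECT COUNT(*) AS cCount
FROM
(
SELECT FITEMNO
FROM ARTRS01.dbf
WHERE FCUSTNO=@FCUSTNO
GROUP BY FITEMNO
) t
The inner query collapses the duplicates, so the outer COUNT(*) returns the number of distinct items.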
|
|
|
|
|
I must be missing something here.
Why can't you just:
SELECT item, SUM(qty) AS qty
FROM MyTable
GROUP BY item
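Against the sample data in the question (with MyTable standing in for the sales history file), that grouping already gives the quantity totals wanted:
Item | qty
09-170 | 29
17-209 | 48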
|
|
|
|
|
I ended up doing something similar. I wrote one function to get the distinct items, then went back and got the sums for each item in that list. I don't know what I was thinking, trying to do it all in one shot.
" SELECT " & _
" SUM(FSHIPQTY) " & _
", SUM(FAMOUNT) " & _
" FROM ARTRS01H.dbf " & _
" WHERE FCUSTNO=@FCUSTNO " & _
" AND FITEMNO=@FITEMNO "
|
|
|
|
|
SELECT h.FITEMNO, SUM(h.FSHIPQTY) AS TOTAL_QTY FROM ARTRS01H.dbf h WHERE h.FCUSTNO=@FCUSTNO GROUP BY FITEMNO
|
|
|
|
|
I finally got it in one shot. Runs super fast now.
The customer complained about the five-minute run time, so I took another stab at it.
Don't know why I got it this time; perhaps the nap and the beers!
SELECT
DISTINCT v.FITEMNO
, SUM(v.FSHIPQTY)
, SUM(v.FSHIPQTY * v.FPRICE)
, (SELECT FDESCRIPT FROM ICITM01.dbf WHERE FITEMNO=v.FITEMNO) AS FREALDESC
FROM ARTRS01H.dbf v
WHERE FCUSTNO=@FCUSTNO
GROUP BY v.FITEMNO
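This only hits the history file; if I ever need the current file in there too (like my first attempt did), the same shape should work over a UNION ALL of both files. A sketch, assuming both files share the same layout and the engine accepts an aliased derived table here:
SELECT
t.FITEMNO
, SUM(t.FSHIPQTY)
, SUM(t.FSHIPQTY * t.FPRICE)
, (SELECT FDESCRIPT FROM ICITM01.dbf WHERE FITEMNO=t.FITEMNO) AS FREALDESC
FROM
(
SELECT FITEMNO, FSHIPQTY, FPRICE FROM ARTRS01.dbf WHERE FCUSTNO=@FCUSTNO
UNION ALL
SELECT FITEMNO, FSHIPQTY, FPRICE FROM ARTRS01H.dbf WHERE FCUSTNO=@FCUSTNO
) t
GROUP BY t.FITEMNO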
|
|
|
|
|
Cannot connect to PR\R.
------------------------------
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No connection could be made because the target machine actively refused it.) (Microsoft SQL Server, Error: 10061)
|
|
|
|
|
Simply put, the error means that the machine you're trying to connect to exists, but no SQL Server (service) can be found on it...
1. Check that the machine name/IP address is the right one.
2. Is SQL Server installed as a named instance? In that case you may need to add the instance name to your address.
3. Is SQL Server using the default port (1433), or was it installed with a different one?
4. You may have a firewall between you and the SQL Server; check it and open ports as needed...
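For points 2 and 3, the server part of the connection address would look something like this (MYHOST, SQLEXPRESS and the port number are placeholders):
MYHOST              (default instance on the default port 1433)
MYHOST\SQLEXPRESS   (named instance; the SQL Browser service resolves the port)
MYHOST,14330        (instance listening on a non-default port)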
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
|
|
|
|
|
With SQL, people often say you shouldn't do 'SELECT *'. I tend to write highly optimized and selective queries. Do others write selective queries, or do you think this is unnecessary? The benefit will obviously vary depending on the size and usage of the table. I'm considering simplifying my architecture by doing all of the SELECTing on the web server. There will always be special cases, e.g. massive tables, but everything is fairly small in this particular application.
There are two main benefits I can see from selective queries: less data is transferred, and covering indexes can be much smaller. I'm not sure the amount of data makes much difference when we'll only be getting one screen (e.g. 10-50 records) of data at a time.
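To make the covering-index point concrete, a T-SQL sketch (the table and column names are invented for illustration):
-- a narrow index can satisfy the selective query entirely
CREATE INDEX IX_Orders_Customer ON Orders (CustomerId) INCLUDE (OrderDate, Total);
SELECT CustomerId, OrderDate, Total
FROM Orders
WHERE CustomerId = @CustomerId;  -- served from the index alone
SELECT * FROM Orders
WHERE CustomerId = @CustomerId;  -- forces lookups into the base table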
It would be great to hear what others do and think.
|
|
|
|
|
You are doing it the right way.
|
|
|
|
|
If you want all the columns, then I see no problem with using *. The primary issue is when you use * even when you want only a few columns, and some of the unneeded columns contain large data. This can also happen when a new column is added.
Additionally, there may be times when a column is removed or renamed -- this will likely cause a problem, but do you want the problem to be reported when the data is queried or farther downstream? Early detection is probably better.
Someone here (other than me) wrote a good rant against * some years back, but I'm having trouble finding it.
Edit:
SQL Server DO's and DONT's[^]
SQL Wizardry Part 2 - Select, beyond the basics[^]
You'll never get very far if all you do is follow instructions.
|
|
|
|
|
To add to the other comments, using 'SELECT *' can also cause issues if columns are added or the order is rearranged. If your application is expecting data in a particular column, then it may not be there; and if your application is not expecting the columns that have been added, why bother spending the effort to retrieve the data and parse out the unwanted columns?
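A sketch of the ordering problem (hypothetical tables): an INSERT without a column list maps the SELECT * output by position, so it keeps "working" while quietly mixing values after a reorder:
-- OrdersArchive was created as (Id, Total, CustomerId)
INSERT INTO OrdersArchive
SELECT * FROM Orders;  -- fine while Orders is also (Id, Total, CustomerId);
                       -- if Orders is rebuilt as (Id, CustomerId, Total), this either
                       -- fails on a type mismatch or silently swaps the two values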
|
|
|
|
|
Member 4487083 wrote: With SQL people often say you shouldn't do 'SELECT *'. I also tell people not to run their stupid queries without starting a transaction that can be safely rolled back.
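On the transaction habit, a minimal T-SQL sketch (table and column names invented):
BEGIN TRANSACTION;
UPDATE Customers SET Discount = 0 WHERE Region = 'EU';          -- the "stupid query"
SELECT TOP 10 CustomerId, Discount FROM Customers WHERE Region = 'EU';  -- eyeball the result first
ROLLBACK TRANSACTION;  -- or COMMIT once it looks right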
Again, you DON'T do a SELECT *. It's not that you save a lot by omitting a DateTime column - but it would prevent that blob-field of 2Gb each, added last month, from being pulled over the network with each and every friggin' request, killing the network and the database-server. Or a nice calculated field that cripples the DB-server.
It doesn't take much time, and makes the application a bit more robust. Makes it easier for me to debug when I get thrown into your team as a maintenance-programmer.
Yes, it takes extra time, but it has a good ROI. It's not a religious thing - I won't go medieval if you do a simple "SELECT * FROM". Still, if you do it in a query that contains several joins, you'll get this lecture, as each extra table means another chance of pulling columns you don't need.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: Again, you DON'T do a SELECT *. It's not that you save a lot by omitting a DateTime column - but it would prevent that blob-field of 2Gb each, added last month, from being pulled over the network with each and every friggin' request
That wouldn't be an issue in my case. I would never store that kind of stuff in this particular database, as it's in Azure and would cost me a lot of money. Something large would go in a blob or somewhere else.
|
|
|
|
|
Member 4487083 wrote: There will always be special cases eg. massive tables, but everything is fairly small in this particular application.
If there were in fact many columns and you only wanted a couple, then you should do that regardless of any other consideration.
However, I do it because there is no guarantee of column ordering with '*', and that can matter in a variety of ways, such as when using external APIs, dumping data, etc.
|
|
|
|
|
jschell wrote: However I do it because there is no guarantee to ordering with '*' and that can matter in a variety of ways such as when using external APIs, dumping data, etc.
I actually use an ORM so it doesn't use * but lists every column (same thing from a coding point of view).
|
|
|
|
|
Like you I use an ORM (I think all of us do); however, I never use *, and always explicitly list the columns, just because it is good discipline. I also work with small datasets and almost never store blob/binary data.
Never underestimate the power of human stupidity
RAH
|
|
|
|