Person A comes along to one of my client's employees and gives his info and a certain address.
Person B comes along to another one of my client's employees, gives his info and the same address as person A.
Here is what currently happens in my app:
What you have described is a hypothetical implementation scenario.
However, what is the exact business case where this happens? Not what might happen, but what do the actual people at this company using the application do?
Nico Haegens wrote:
You don't have a problem until there is a business case.
For example, given your description, a 'customer' could in fact have the very same address as a different 'customer', and your current implementation would prevent you from entering it (regardless of why). That can happen because a person might be legally incorporated as several different companies yet work out of the same office (one person, after all). But as a legal entity, the address is not in fact the 'same' despite being the same physical location.
Welcome to the "address" can of worms. There are related ones for phone numbers, names (marry/divorce), and surname/first-name ordering (try working in Asia).
The technical answer to your problem is that you MUST include a search for an existing address in your insert procedure.
Defining the business case is going to drive you nuts; toss it back to the business/BA, whose job is to tell you what they want. It can be entertaining watching the reaction when you start asking some of the questions you have received here.
Never underestimate the power of human stupidity
Figured it out myself: use a TRY...CATCH block and check for ERROR_NUMBER() = 2627 (violation of a unique constraint):

declare @newid bigint = next value for dbo.baseidseq;
begin try
    insert into [Address] (id, street, streetnumber, busnumber, placeid)
    values (@newid, @street, @streetnumber, @busnumber, @placeid);
end try
begin catch
    if error_number() = 2627 -- duplicate key: the address already exists
        set @newid = (select baseid from [Address]
                      where street like @street and streetnumber like @streetnumber
                        and busnumber like @busnumber and placeid = @placeid);
end catch
Thanks for your reply. In my case, a customer was requesting a Purchase Order in PDF format as well as in Excel format. I was doing the rounding with a formula in Excel, but the formula did not always fill down to all the rows of the table, which is why I decided to do the rounding in the query.
Also, quite often in WinForms apps I pull a query into a DataTable and set that DataTable as a DataGridView's DataSource, in which case it seems more practical to do the rounding in the query rather than looping through the DataTable and adding rows with rounded values to the DataGridView.
The problem is not a ROUND function, but the precision of the FLOAT data type (Using decimal, float, and real Data[^]). You have at least a couple of options:
1. Use DECIMAL or NUMERIC instead of the FLOAT.
2. CAST to DECIMAL for the calculation.
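The same effect is easy to reproduce outside SQL Server. Here is a small Python sketch of option 2 (Python's float is the same IEEE 754 double as T-SQL's FLOAT, and decimal.Decimal plays the role of DECIMAL/NUMERIC; the price and qty values are made up for illustration):

```python
from decimal import Decimal, ROUND_HALF_UP

price, qty = 1.05, 1.05  # hypothetical values; 1.05 * 1.05 = 1.1025 exactly in decimal

# Do the arithmetic in decimal, then round half-up the way SQL Server's ROUND does:
# the trailing 5 in the 4th place reliably rounds up.
exact = (Decimal(str(price)) * Decimal(str(qty))).quantize(
    Decimal("0.001"), rounding=ROUND_HALF_UP)
print(exact)  # 1.103

# Doing the same arithmetic in binary floating point: 1.1025 has no exact binary
# representation, so the stored value may sit on either side of the midpoint and
# the rounding can go the "wrong" way.
print(round(price * qty, 3))
```

The quantize call is the Python analogue of CAST-ing to DECIMAL before the multiplication and ROUND.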
Here's the demonstration for you (using your first example): a SELECT using ROUND on values converted to DECIMAL. I tested it below with the same random sample of 300 values, and all results agreed with the Excel values!
, price * qty as Val
, round(CONVERT(decimal(12,4), price) * CONVERT(decimal(12,4), qty), 3) as RoundVal
What misled me into thinking the problem was with the type of rounding was that, in my sample of 300 values, every value that differed had a 5 in the 4th decimal place, and none of those rounded as expected.
In the link you posted:
Using float and real Data
The float and real data types are known as approximate data types. The behavior of float and real follows the IEEE 754 specification on approximate numeric data types.
The IEEE 754 specification provides four rounding modes: round to nearest, round up, round down, and round to zero. Microsoft SQL Server uses round up. All are accurate to the guaranteed precision but can result in slightly different floating-point values. Because the binary representation of a floating-point number may use one of many legal rounding schemes, it is impossible to reliably quantify a floating-point value.
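That excerpt is the crux: a FLOAT column holds only the nearest binary value to the decimal you typed. A quick illustrative check in Python (same IEEE 754 doubles) shows that a value with a 5 in the 4th decimal place is never stored exactly, which is why a subsequent ROUND can disagree with Excel:

```python
from decimal import Decimal

# Decimal(float) shows the exact decimal expansion of the stored binary double,
# which is near 1.1055 but not equal to it.
print(Decimal(1.1055))

# The stored value is never exactly the decimal literal...
print(Decimal(1.1055) == Decimal("1.1055"))  # False

# ...so rounding the float to 3 places rounds that nearby binary value, which may
# lie on either side of the .xxx5 midpoint.
print(round(1.1055, 3))
```

1.1055 = 2211/2000, and since 2000 is not a power of two, no finite binary fraction can represent it exactly; the same holds for every one of the sample values that had a 5 in the 4th decimal place.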
My table has millions of records, and my query fetches 500 million records for a single day into the SSRS report, so it is taking far too long to fetch the data. Could anybody suggest how I can narrow down or improve my query?
We have a relational DB in SQL Server 2008 which grinds up monthly data sets based on a set of criteria. These runs can take between 2 and 4 hours to produce, and there may be 10-12 per month in 3-4 run sets. I am proposing that each run set has a cube for reporting and analysis.
The results are stored in 2 (fact) tables, each table has a view which pulls in the dimension fields, date, branch, product etc.
Each run can result in 7m+ rows.
Q1 Should I replicate the data into a reporting database before building the cube or build direct from the relational DB?
Q2 It has been recommended that I use the views. Do I also need to pull in the dimension tables, or can I use the fields in the view? E.g. the Product dimension is serviced by a DISTINCT over the Product column. (Answer: use the dimension table, to cover cases where there is no record for a dimension element this period.)