|
Hi there,
I am an e-commerce website. I don't know how to architect the database and I am hoping for some input. The database has not been designed yet.
I have 2 web solutions (2 different .NET projects). The code is exactly the same. One website is hosted in Country A, only for Country A, and the other is hosted in Country B, only for Country B. The reason for this is that I want faster download rates.
Ideally I would like to have 2 separate databases, one for Country A and the other for Country B, but this means that I will have to add the products twice. Some products only display in Country A and some only display in Country B. I was thinking of having a master database that contains the products and then somehow "get" the products to Country A and to Country B so that download rates are quicker. I want my users to be able to use both websites, so they can log in on either website.
I hope I am making sense? If you have any interesting articles, please let me know.
Thanks
|
|
|
|
|
If you structure the tables correctly, you will be able to contain it in a single database.
Use a single user/login table (or set of tables) for both sites; that way you won't have to import users between databases, as you would if you kept each site's database separate.
If products have a one-to-one relationship with countries (one product can be sold in only one country), then you can give the product table a column for the country ID. Have a table listing the countries (A and B in your example), then join the products to the countries.
If one product can belong to both countries, then you could give the product table country "flags" (a bit column with 1 indicating yes and 0 indicating no). So you might have ProductId = 1, CountryA = 1 and CountryB = 1, meaning that the product is available in both countries. If you do expand, though, you will have to keep adding country columns. To avoid that, you can have a "mapping" table where you map single products to multiple countries (a true one-to-many design).
I'm assuming that both sites will use a common database server.
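A minimal sketch of that mapping-table approach, in case it helps (all table and column names here are just assumptions for illustration, not anything from your project):
```sql
-- Hypothetical schema: one row in ProductCountry per product/country pair
CREATE TABLE Country (
    CountryId INT PRIMARY KEY,
    Name      VARCHAR(50) NOT NULL   -- e.g. 'Country A', 'Country B'
);

CREATE TABLE Product (
    ProductId INT PRIMARY KEY,
    Name      VARCHAR(100) NOT NULL
);

-- Mapping table: a product appears once per country it is sold in
CREATE TABLE ProductCountry (
    ProductId INT NOT NULL REFERENCES Product(ProductId),
    CountryId INT NOT NULL REFERENCES Country(CountryId),
    PRIMARY KEY (ProductId, CountryId)
);

-- Products visible on the Country A site
SELECT p.ProductId, p.Name
FROM Product p
JOIN ProductCountry pc ON pc.ProductId = p.ProductId
JOIN Country c         ON c.CountryId  = pc.CountryId
WHERE c.Name = 'Country A';
```
Adding a third country later is then just a new row in Country plus mapping rows, with no schema change.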
|
|
|
|
|
I will be using SQL Server 2005. I can host the database in Country A, but then won't download times for Country B be slower? A couple of seconds is a big deal for me.
|
|
|
|
|
.NET Enthusiast wrote: I am an e-commerce website.
Oh wow! Do I see Artificial Intelligence? A website posting a question in a forum to re-program itself???
All developers have just been made redundant
I'm way behind in technology. Maybe it's time to retire.
As for your problem:
Have you looked into replication? If you have a good internet link between your two databases, replication will be almost instantaneous for tables with a small volume of changes.
SG
Cause is effect concealed. Effect is cause revealed.
|
|
|
|
|
I was having a problem with a task given to me. The task is to access a database and bind it to a DataGrid using C# in ADO.NET. I have the whole code with the right logic, but when displaying, I am unable to show the dataset on the DataGrid. I can see the dataset while debugging (using "Add Watch"), but I can't get it onto the DataGrid. Can you please help me out?
|
|
|
|
|
|
Hi,
The code below works just fine but, as you can see, it's a mess.
I don't like coding "or" so many times.
I know the syntax in C# for arrays and loops, but I can't find anything like that in SQL Server.
I am thinking of looping it or making an array but...
declare @racer1 as varchar(50)
declare @racer2 as varchar(50)
declare @racer3 as varchar(50)
declare @racer4 as varchar(50)
declare @racer5 as varchar(50)
declare @racer6 as varchar(50)
declare @racer7 as varchar(50)
declare @racer8 as varchar(50)
set @racer1 = 'Piquet'
set @racer2 = 'Alonso'
set @racer3 = 'Massa'
set @racer4 = 'Heidfeld'
set @racer5 = 'Hamilton'
set @racer6 = 'Kovalainen'
set @racer7 = 'Raikonen'
set @racer8 = 'Trulli'
if @racer1 = @racer2 or @racer1 = @racer3 or @racer1 = @racer4
or @racer1 = @racer5 or @racer1 = @racer6 or @racer1 = @racer7
or @racer1 = @racer8 or @racer2 = @racer3 or @racer2 = @racer4
or @racer2 = @racer5 or @racer2 = @racer6 or @racer2 = @racer7
or @racer2 = @racer8 or @racer3 = @racer4 or @racer3 = @racer5
or @racer3 = @racer6 or @racer3 = @racer7 or @racer3 = @racer8
or @racer4 = @racer5 or @racer4 = @racer6 or @racer4 = @racer7
or @racer4 = @racer8 or @racer5 = @racer6 or @racer5 = @racer7
or @racer5 = @racer8 or @racer6 = @racer7 or @racer6 = @racer8
or @racer7 = @racer8
begin
declare @msgError as varchar(50)
set @msgError = 'Error: Same racer name'
select @msgError
end
else
begin
declare @msgOK as varchar(50)
set @msgOK = 'OK: Different racers'
select @msgOK
end
What do you think?
thanks
|
|
|
|
|
Everyone has a Split function, right? If you don't, then why not!
DECLARE @racer1 VARCHAR(1000)
SET @racer1 = 'Piquet,Alonso,Massa,Heidfeld,Piquet,Hamilton,Kovalainen,Piquet,Raikonen,Trulli'

SELECT Item, COUNT(*) AS Dupes
FROM dbo.fn_Split(@racer1, ',')
GROUP BY Item
HAVING COUNT(*) > 1
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Try this method
Declare @tblRacer as table(Racer varchar(50))
insert into @tblRacer(Racer)
select 'Piquet'
union all select 'Alonso'
union all select 'Massa'
union all select 'Heidfeld'
union all select 'Hamilton'
union all select 'Kovalainen'
union all select 'Raikonen'
union all select 'Trulli'
union all select 'Trulli'
if exists (select Racer from @tblRacer group by Racer having count(Racer) > 1)
begin
declare @msgError as varchar(50)
set @msgError = 'Error: Same racer name'
select @msgError
end
else
begin
declare @msgOK as varchar(50)
set @msgOK = 'OK: Different racers'
select @msgOK
end
|
|
|
|
|
Thanks for the posts, I am going to see what I make of it.
About Hamilton, I really don't care!
|
|
|
|
|
Hello,
I was wondering if someone could help me with something SQL related. I'm trying to create a table that has the following granularity:
Client
Unit
Month
Year
I then want a few calculations at that granularity. The calculations are:
Amount
Sales
Costs
Amount is just a straight sum of the amount field in the existing table.
But to determine sales and costs, I need to evaluate based on an n_code. When n_code is > 5000, then Costs = sum(amount). When n_code is between 4000 and 4999, then Sales = sum(amount).
I do not want the n_codes in my output.
|
|
|
|
|
Try using Case statements.
Example:
Select
Sum(Amount) as Amount
,Sum(Case When n_Codes Between 4000 and 4999 Then Amount Else 0 End) as Sales
,Sum(Case When n_Codes > 5000 Then Amount Else 0 End) as Costs
From ...
Join...
...
That should give you something to work off of.
|
|
|
|
|
Lash20, use Scott's approach since it's much simpler than the ones I suggested.
Mika
I think that after midnight I shouldn't try to solve anything more complicated than how to get bed...
|
|
|
|
|
Hi,
If I understood your question correctly, you have several options. A few of those:
1. Create a query where you decide which column to bring the amount into. Use a CASE WHEN structure for the decision, then use an outer query to calculate sums and counts. For example:
SELECT SUM(Cost), SUM(Sales), SUM(Amount)...
FROM (
SELECT Cost = CASE WHEN n_codes > 5000 THEN Amount ELSE 0 END,
Sales = CASE WHEN n_codes >= 4000 AND n_codes < 5000 THEN Amount ELSE 0 END,
Amount
FROM...)
2. Use scalar subqueries for the columns, for example:
SELECT SUM(Amount),
(SELECT SUM(Amount) FROM ... WHERE n_Codes > 5000 AND correlation to outer part if any),
...
FROM ...
Haven't tested those, so there may be typos and such, but they should give you the idea.
Hope this helps,
Mika
|
|
|
|
|
Thanks guys worked like a charm..
|
|
|
|
|
This isn't a programming question... so that's why I'm here.
I'm working on a system that handles some stock market data (this is a personal project at home). The system is collecting about 7 million records a week, and it's really starting to slow down. I've done all the SQL optimizations I can think of (indexing, and whatnot), but I think I'm running into a roadblock with hardware... during the day, when it's downloading new quotes, the CPU is flatlined at 100% usage all day, and I'm seeing an increase in errors due to timeouts (when attempting to submit more data to SQL). Currently the database engine is running on one Dell PowerEdge 2550. I have 4 other servers with similar hardware (and 5 more that aren't really servers), so I'm looking for a way to distribute the load among the other servers. I've started to think about clustering, but I believe that's only for fault tolerance (in case the server goes down). Anyone have any other suggestions?
modified on Wednesday, July 23, 2008 6:11 PM
|
|
|
|
|
Don't bother watching poorly-performing stocks?
Roll-up and/or archive old data?
Only store new records when some threshold difference is reached?
|
|
|
|
|
Distributed Partitioned views are ideal for this massive data scenario.
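At its core, a distributed partitioned view is a UNION ALL view over identically structured member tables on linked servers, where each table's CHECK constraint tells the optimizer which partition holds which rows. A rough sketch, with made-up server, database, and column names:
```sql
-- On each server, a member table holds one date range, enforced by a CHECK constraint:
-- Server1: CREATE TABLE Quotes2008H1 (QuoteDate DATETIME NOT NULL
--              CHECK (QuoteDate < '20080701'), Symbol VARCHAR(10), Price MONEY, ...)
-- Server2: CREATE TABLE Quotes2008H2 (QuoteDate DATETIME NOT NULL
--              CHECK (QuoteDate >= '20080701'), Symbol VARCHAR(10), Price MONEY, ...)

-- The view unions the member tables via four-part linked-server names
CREATE VIEW dbo.AllQuotes
AS
SELECT QuoteDate, Symbol, Price FROM Server1.StocksDb.dbo.Quotes2008H1
UNION ALL
SELECT QuoteDate, Symbol, Price FROM Server2.StocksDb.dbo.Quotes2008H2;
```
Queries against the view that filter on QuoteDate should then only touch the server(s) whose CHECK constraints match, spreading both storage and CPU load.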
Need a C# Consultant? I'm available.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
|
|
|
|
|
I'm thinking this is what I'm going to do.
|
|
|
|
|
With a bit of re-engineering, I'd consider moving the load process out into a message queue and then loading the data into a separate database on a separate server. Then, periodically do a bulk insert from this table.
modified on Wednesday, July 23, 2008 3:49 PM
|
|
|
|
|
StevenWalsh wrote: its flatlined at 100% CPU usage all day
What process is using the CPU?
What is the physical disk subsystem? IDE drives can be very processor intensive, especially when writing.
StevenWalsh wrote: I've done all the SQL optimizations i can think of (indexing, and what not)
What indices do you have? Adding indices will only help query performance - it will slow down inserts, updates and deletes.
Are you in auto-commit mode, with an implicit commit after every insert, or do you perform number of inserts in a transaction and then commit? Auto-commit hurts performance since the database engine updates the indices for every insert.
As a suggestion, if you are not querying the data during the day, you could consider writing it to a formatted text file and using SQL Server's bulk copy to load it once per day.
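To illustrate, the daily load could then be a single minimally logged operation; the file path, table name, and delimiters below are just assumptions:
```sql
-- Load a day's worth of quotes from a comma-delimited text file in one pass
BULK INSERT dbo.Quotes
FROM 'C:\feeds\quotes_20080723.txt'
WITH (
    FIELDTERMINATOR = ',',   -- column delimiter in the feed file
    ROWTERMINATOR   = '\n',  -- one quote per line
    TABLOCK                  -- table lock allows minimal logging
);
```
One big set-based load like this avoids the per-row index maintenance and commit overhead that millions of individual inserts incur.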
|
|
|
|
|
Graham Bradshaw wrote: IDE drives can be very processor intensive, especially when writing.
I thought that this was true only when PIO mode was active, but that was a thing of the '90s and newer disks use DMA. Am I wrong?
|
|
|
|
|
It depends on the drive itself, the controller hardware and BIOS, and the driver used by the OS.
Put it this way - there's a good reason why high-performance servers use SCSI/SAS.
|
|
|
|
|
Actually the servers are using a SCSI controller with four 10k RPM drives. I'd have to check the auto-commit tip; I never thought about that before. I can't do the bulk copies, unfortunately, since I query the data during the day.
|
|
|
|
|
StevenWalsh wrote: i can't do the bulk copies unfortuantely since i query the data during the day.
You can if you load the data into a staging database and use a cross-database view over it. Then, in the quiet periods, do the bulk insert.
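Something like this, assuming a staging database called StagingDb on the same instance (all names here are illustrative):
```sql
-- In the main database: a view that unions live data with the staging table,
-- so daytime queries see both without blocking the load path
CREATE VIEW dbo.AllQuotes
AS
SELECT QuoteDate, Symbol, Price FROM dbo.Quotes
UNION ALL
SELECT QuoteDate, Symbol, Price FROM StagingDb.dbo.Quotes;

-- In the quiet period: move staged rows into the main table, then clear staging
-- INSERT INTO dbo.Quotes (QuoteDate, Symbol, Price)
--     SELECT QuoteDate, Symbol, Price FROM StagingDb.dbo.Quotes;
-- TRUNCATE TABLE StagingDb.dbo.Quotes;
```
That way the fast inserts hit a small, lightly indexed staging table all day, and the heavily indexed main table is only written to off-hours.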
|
|
|
|