|
I suspect an UPDATE could help, since it is atomic, much like an interlocked increment.
UPDATE table SET status = 'processing' WHERE status = 'pending' AND PrimaryKey = @value;
IF @@ROWCOUNT > 0
    -- Do the processing
ELSE
    -- Some other client is processing...
--
Kein Mitleid Für Die Mehrheit
|
Ok, this could be another solution, and simpler, too. Thanks.
|
Yup, much easier!
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
According to MSDN,
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
It sounds to me like you would be fairly safe with this setting. Certainly it would be safer than the default.
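The three guarantees quoted above describe the SERIALIZABLE isolation level. Setting it for the claiming transaction might look like this (a sketch only; the WorkQueue table name is made up to match the queue discussed earlier in the thread):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    -- Rows (and key ranges) read here are locked until commit,
    -- so no other transaction can modify or insert into them
    SELECT status FROM WorkQueue WHERE PrimaryKey = @value;
    -- ... do the processing ...
COMMIT TRANSACTION;
```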
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
Hi All
Question moved from the C/C++/MFC message board to here.
I have a problem with MSSQL syntax. I use this statement:
use databasename
but it is not working for me. When I check with this code:
SELECT db_name()
the result shows MASTER.
Can anyone give me tips on where my syntax is wrong?
Please help me.
|
If you write
use databasename
it will throw an error:
Msg 207, Level 16, State 1, Line 1
Invalid column name 'databasename'.
But if you write
use master
it will say
Command(s) completed successfully.
Niladri Biswas
|
If I use
use master
and I want to create some tables in a different database, what should I do?
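One way around switching databases is to qualify the table name with the target database, so you can stay in master (a sketch only; OtherDb and the column names are made up):

```sql
-- Three-part name: database.schema.table
CREATE TABLE OtherDb.dbo.Customers (
    CustomerID int NOT NULL PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);
```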
|
Dear All,
I use SQL Server 2005.
I want to create a folder. If a folder with that name already exists, I want to rename it or delete the old folder.
The same goes for files.
Is there any command to do this?
Thanks for the help....
VB.Net
|
You need to use xp_cmdshell, but you will probably have permission problems! Why do file activities from SQL Server?
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
|
I believe this can be done via
xp_cmdshell
In the example below, I first create a directory; the next time, if such a directory is found, I rename it.
declare
    @cmdpath nvarchar(60),
    @Location nvarchar(100),
    @message nvarchar(max)

set @Location = N'C:\NewFolder'
set @cmdpath = 'MD ' + @Location

create table #result (result nvarchar(255))
insert into #result (result) exec master.dbo.xp_cmdshell @cmdpath

-- MD writes output only on failure, i.e. when the folder already exists
select @message = ISNULL(@message + ' - ', '') + result
from #result
where result is not null

if (@message is not null)
begin
    EXEC xp_cmdshell 'RENAME C:\NewFolder RenamedFolder'
end

select @message
drop table #result
For more info you can visit:
a) Using xp_cmdshell
b) xp_cmdshell
Niladri Biswas
|
Why are you using SQL Server to interact with the file system? This is generally not recommended, as it exposes the file system to database users. Most server environments will not allow it.
Never underestimate the power of human stupidity
RAH
|
Thanks.
The reason is that the data is backed up to a txt file,
so some folders have to be renamed, deleted, or created.
When deleting a folder, it asks "Are you sure (Y/N)?"
How can I type Y?
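For what it's worth, the Y/N prompt comes from the Windows rd command itself; its /Q (quiet) switch suppresses it, and /S removes the whole directory tree. A sketch, assuming xp_cmdshell is enabled (the folder path here is made up):

```sql
-- /S = remove the directory tree, /Q = quiet, no Y/N confirmation
EXEC master.dbo.xp_cmdshell 'RD /S /Q C:\OldBackupFolder';
```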
VB.Net
|
Your statement does not make a lot of sense to me. You back up your database to a text file? Are you nuts?
OK, what database are you using that does not have a backup facility to properly back up and restore your data? As someone suggested, you can use xp_cmdshell, but it sounds like a nightmare to me. Permissions will be another issue as well.
I would recommend you TOTALLY rethink your data management.
Never underestimate the power of human stupidity
RAH
|
Thank you so much.
VB.Net
|
To be honest, though I proposed a solution, I agree with you. If the person is using .NET, it is better to go with SMO.
Any other technology should offer some equivalent option.
Niladri Biswas
|
I use SQL Server 2005.
Select eTime from Table1
gives e.g. '2009-12-10 10:11:00',
but I want to display only the date, like '2009-12-10', using a SELECT statement.
Help please.
|
Try this
select replace(CONVERT(varchar(10),getdate(),111),'/','-') dt
OR
select replace(CONVERT(varchar(10),getdate(),102),'.','-') dt
Output:
dt
2009-12-18
So in your case it will be
select replace(CONVERT(varchar(10),eTime,111),'/','-') eTime from Table1
OR
select replace(CONVERT(varchar(10),eTime,102),'.','-') eTime from Table1
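As a side note, CONVERT style 120 already formats the date with dashes (yyyy-mm-dd hh:mi:ss), so truncating to ten characters avoids the REPLACE entirely. A minimal alternative, assuming the same eTime column:

```sql
select CONVERT(varchar(10), eTime, 120) eTime from Table1
```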
Niladri Biswas
|
Hi, try this
SELECT convert(varchar(10),CONVERT(DATETIME,'2009-12-10 10:11:00'),120)
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
www.aktualiteti.com
|
I'm creating a database that can send private messages. The system uses the MSSQL database engine and relations to constrain the inserted data.
The problem is that users can send mass messages, and that can add up quickly if each row in a table represents a message to one user. My question is: is it better to use one row per message, or a comma-separated list of users?
I would think that the table with the IDs of the messages and users (a many-to-many relation table) could become REALLY large quickly. A comma-separated list would keep the number of rows down. Which is more efficient?
|
Assuming that the recipients are also users:
User table columns: UserID + User
Message table columns: MessageID + Message
Sent table columns: UserID + MessageID (using the sender's user id)
Receive table columns: UserID + MessageID (using the recipients' user ids)
Use CSV if the recipients are not users or you don't need to track them.
Don't use CSV if you intend to parse it later - not efficient.
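The tables above might be sketched in DDL like this (hypothetical names and types, assuming integer keys):

```sql
CREATE TABLE Users    (UserID    int PRIMARY KEY, UserName nvarchar(50)  NOT NULL);
CREATE TABLE Messages (MessageID int PRIMARY KEY, Body     nvarchar(max) NOT NULL);

-- Sender side: one row per message, keyed on the sender's UserID
CREATE TABLE Sent (
    UserID    int NOT NULL REFERENCES Users(UserID),
    MessageID int NOT NULL REFERENCES Messages(MessageID),
    PRIMARY KEY (UserID, MessageID)
);

-- Recipient side: one row per (recipient, message) pair
CREATE TABLE Receive (
    UserID    int NOT NULL REFERENCES Users(UserID),
    MessageID int NOT NULL REFERENCES Messages(MessageID),
    PRIMARY KEY (UserID, MessageID)
);
```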
|
Your proposal is actually very close to my current database schema. As for the CSV: if users need to delete a mass message, I need to keep a list of unique receivers. This can be represented by rows or by CSV. With CSV I need to edit the list when a user deletes a message; with rows I need to remove a row.
It is simply a matter of which method is more efficient after a (long) period of time. I would guess that a table with more than 1,000,000 rows would take a long time for the database to process.
In that case I would rather spend a little more memory and CPU time parsing a comma-separated list.
Any experiences on this matter? Any good db schemas I can look at? (I'm thinking of forums here.)
|
Yes, lots of experience. I just finished a db optimization contract (job). The client was adding a million records every couple of days. I have been doing the same for 20+ years.
The Receive table may have a lot of records in it, but only 2 columns, and those columns will make up the primary key. To query for a given record, you will use the pk. With a clustered, indexed, two-column primary key, you should be able to get any given record in about 3 milliseconds or less with 10 million records in the table. Subtract a ms or 2 if your id values are numeric. This type of table will not take up much space.
This will be thousands of times faster than searching for a partial string value in text, where you won't be able to find the user by using an index at all (not easily, anyway).
Searching for a string in a text-type field is a VERY time-consuming operation for a db - not a good idea.
I would create test scripts and data to validate if you're not convinced. Let me know if you need help.
What db are you using?
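A minimal sketch of such a Receive table and a typical lookup, under the assumptions above (table and variable names are made up):

```sql
CREATE TABLE Receive (
    UserID    int NOT NULL,
    MessageID int NOT NULL,
    CONSTRAINT PK_Receive PRIMARY KEY CLUSTERED (UserID, MessageID)
);

-- Seeks directly on the clustered key; no scan even with millions of rows
SELECT MessageID FROM Receive WHERE UserID = @UserID;
```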
|
Sounds great. I don't have a lot of experience with the inner workings of SQL databases, so I was not sure what the best approach was. I will build the database without CSV and trust that the database is clever enough to find a single row (or range of rows) among millions without any performance problems.
The current database schema works exactly like that (a 2-column PK) on numeric datatypes.
And you are right, somehow I did not take the partial text matching into account. That would take ages. That alone would keep me from using CSV.
As a side note, the database is an MSSQL 2008 Express engine. Thanks for your support. I really appreciate it.
|
If your problem is the number of rows, one million rows is not large for a modern RDBMS. If it is indexed properly, it will cope with that with absolutely no problem in terms of simple queries or updates.
Things get a bit hairier if you're doing more complex manipulations, using cursors or anything like that, or doing wildcard queries like '%XYZ%', that sort of thing.
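To illustrate the wildcard point: a leading wildcard defeats any index on the column, while a trailing-only wildcard can still use one. Illustrative queries only; the table, column, and index are assumed:

```sql
SELECT UserID FROM Users WHERE UserName LIKE 'XYZ%';   -- can seek an index on UserName
SELECT UserID FROM Users WHERE UserName LIKE '%XYZ%';  -- must scan every row
```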
|
The database schema is designed so that data needing complex queries is placed in its own tables. The table in question is a 2-column composite-key table. From what I've gathered here, the engine should have no trouble running simple queries against several million rows.
Thanks for the clarification, though. Do any of you by chance have any good articles on database optimization and the inner workings of RDBMSs?
|