I am using the OPENROWSET function of SQL Server to read data from Excel files.
SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0','Excel 8.0;Database=E:\IGM\Files\cis overview.xls;HDR=No;IMEX=1;','SELECT * FROM [CIS Overview$]')
But OPENROWSET does not return all of the columns.
Below is just a sample; the Excel file I am reading has 28 columns. Column 27 has no data in it, just a heading, and column 28 is ignored by OPENROWSET.
col1  col2  col3
----------------
A           X
B           X
C           X
.           .
.           .
OPENROWSET is not returning col3 in the result set;
it only shows col1 and col2.
But if I insert some data into col2 (any cell),
the result set will have all three columns.
Is it ignoring the last column when the previous one has no data?
I also included the HDR=No option in the query, but no luck.
Is there any way to get all columns in the result set?
Any help?
regards,
|
Try putting headers in the first row of the spreadsheet and use the HDR=Yes option.
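For example, with the file from the original post, the query might look like this. The explicit cell range ([CIS Overview$A1:AB65536]) is a commonly suggested workaround that forces the Jet driver to return empty columns as well; it assumes the data really spans columns A through AB, so adjust the range to your sheet.

```sql
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'Excel 8.0;Database=E:\IGM\Files\cis overview.xls;HDR=Yes;IMEX=1;',
    'SELECT * FROM [CIS Overview$A1:AB65536]')
```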
|
For info: I'm using BlogEngine 1.6
I dunno what's happening with my blog, but removing any comment is so damn slow... like half an hour for the page to refresh after deleting 10 comments!!!
Desperate, I opened SQL Server Management Studio, connected to my blog database and ran the following SQL:
DELETE FROM [codeblog].[dbo].[be_PostComment]
WHERE [IsApproved] = 0
Now if I open another query and run
SELECT *
FROM [codeblog].[dbo].[be_PostComment]
ORDER BY IsApproved
I get plenty of results with IsApproved = 0.
Worse, the number of results returned by that statement changes at an alarming rate!
Right after my delete statement I got, say, 50 comments. A few seconds later (if I re-run it) I get 150, and then 350, etc...
Argh.. what's going on here!?!?! What COULD BE going on!
A train station is where the train stops. A bus station is where the bus stops. On my desk, I have a work station....
_________________________________________________________
My programs never have bugs, they just develop random features.
|
Maybe there is a spambot posting comments quickly, say around 200 a second.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
|
What will happen if 1000 users try to execute the same stored procedure at the same time?
How will it work? Single-threaded, multi-threaded, or in FIFO order?
A person might be able to play without being creative, but he sure can't be creative without playing.
|
When multiple users try executing the same stored procedure, multiple instances will start running concurrently. What happens next depends on the body of the stored procedure: if the different instances only read the data, or if they update distinct rows, all instances will run concurrently to completion. Otherwise, some of them will block, following the usual rules of locking for your transaction isolation level. As a consequence, you can also force synchronization in the body of a stored procedure by selecting with an update lock (or with a ROWLOCK hint, etc.).
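As a sketch of that last point (the table, column, and procedure names below are made up), a stored procedure can serialize concurrent callers for the same row by taking an update lock up front:

```sql
CREATE PROCEDURE dbo.TransferFunds
    @AccountId int,
    @Amount    money
AS
BEGIN
    BEGIN TRANSACTION;

    -- UPDLOCK + HOLDLOCK makes concurrent callers touching the same
    -- row queue here until this transaction commits
    SELECT Balance
    FROM dbo.Accounts WITH (UPDLOCK, HOLDLOCK)
    WHERE AccountId = @AccountId;

    UPDATE dbo.Accounts
    SET Balance = Balance + @Amount
    WHERE AccountId = @AccountId;

    COMMIT TRANSACTION;
END
```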
|
When a task request is received, SQL Server will attempt to allocate the task to an available worker thread. If no worker thread is available, a new one will be created to handle the task, up to the configured maximum number of threads. Once that maximum limit has been reached, task requests will queue until a thread becomes available.
As you might guess, it's quite a lot more sophisticated than that, but that is essentially what it boils down to. If you want to know more detail, there are plenty of articles on Google that will explain it in as much detail as you want.
|
I thought that I understood them, but they aren't working as I expected. Perhaps someone can help dispel my confusion?
I have a table comprised of an incrementing numeric primary key, a date field, and several other fields (that are irrelevant to the problem). It is range partitioned by day on the date field, thus creating a new table partition for every 24 hours of data.
The problem comes when I need to drop an old partition. I created the PK index as LOCAL, which (as I understood from the documentation) should have made it equipartitioned with the table. However, when the old partition(s) are dropped, the index becomes unusable, and rebuilding it can take hours (it is a VLDB).
Any ideas what I am doing wrong..? Thank you.
|
Found this answer from Ask Tom... It appears that I have no choice but to use a global index on the PK.
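For reference, a sketch of what that might look like (table, column, and partition names here are made up). A non-partitioned index on a partitioned table is global by default, and the UPDATE GLOBAL INDEXES clause keeps it usable when a partition is dropped, at the cost of a slower drop:

```sql
-- Hypothetical names; Oracle syntax
CREATE UNIQUE INDEX events_pk_ix ON events (event_id);

ALTER TABLE events
  ADD CONSTRAINT events_pk PRIMARY KEY (event_id) USING INDEX events_pk_ix;

-- Drop an old daily partition without invalidating the global index
ALTER TABLE events DROP PARTITION p_20110517 UPDATE GLOBAL INDEXES;
```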
|
How do I create a database that stores images?
mrjaya
|
Depends on your database server; there is plenty of information out there for SQL Server[^]
Never underestimate the power of human stupidity
RAH
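As a sketch for SQL Server (all names and the file path below are made up), images usually go in a varbinary(max) column:

```sql
CREATE TABLE dbo.Images (
    ImageId  int IDENTITY(1,1) PRIMARY KEY,
    FileName nvarchar(260)  NOT NULL,
    MimeType varchar(50)    NOT NULL,
    Data     varbinary(max) NOT NULL  -- the raw image bytes
);

-- Load a file from disk (path is hypothetical)
INSERT INTO dbo.Images (FileName, MimeType, Data)
SELECT N'logo.png', 'image/png', BulkColumn
FROM OPENROWSET(BULK 'C:\images\logo.png', SINGLE_BLOB) AS img;
```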
|
Hi,
My query
-- parameter of the procedure
@abc int = null;
declare @txt varchar(100);
set @txt = 'select * from tablename where field1 = isnull(@fld,field1)'
set @txt = replace(@txt,'@fld',@abc)
exec(@txt)
When @abc has a value it works fine, but when @abc is null, @txt becomes null.
How can I concatenate this string and execute it?
Thankyou
Ypki
|
Initialize variable @abc with the value 0, not with NULL.
e.g. set @abc = 0
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
www.cacttus.com
modified on Wednesday, May 18, 2011 3:12 AM
|
A simple trick is to use COALESCE, which returns the first non-null entry in a sequence of values, so you would replace the reference to @abc with COALESCE(@abc, '').
|
Try:
set @txt = replace(@txt,'@fld',isnull(@abc, 0))
Use 0 or any other value where you find a null value.
Hope it helps
|
Any operation on null will yield null. Set the initial value of @abc to something other than null.
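Alternatively, you can sidestep the string building entirely by passing the value as a real parameter via sp_executesql (tablename and field1 as in the original post; note that sp_executesql requires an nvarchar statement):

```sql
-- @abc is the int parameter from the original procedure
DECLARE @txt nvarchar(100);
SET @txt = N'select * from tablename where field1 = isnull(@fld, field1)';

-- @fld stays a typed parameter, so a NULL @abc is passed through
-- correctly instead of turning the whole string into NULL
EXEC sp_executesql @txt, N'@fld int', @fld = @abc;
```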
|
Hi.
I have a stored procedure that queries a table and takes about 35 seconds to return the results; meanwhile, from other stored procedures, I cannot insert new rows into that table. How can I make the inserts possible (even if that means the new records will not appear in the large result set)?
Thanks.
|
Depends on what you are trying to achieve. From what I hear, your select statement is locking most of the rows in the table, which could escalate to a full table lock (shared/read level). That would mean your insert blocks, as it cannot get an exclusive lock on the table. The other way around is also possible, as you have experienced: if the insert starts first, the select will wait.
You could potentially fix this by hinting to SQL Server to use no locking on the select (add WITH (NOLOCK) after the FROM part of the select), but this can cause you to read dirty, uncommitted data.
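For example (the table and column names here are made up):

```sql
SELECT OrderId, OrderDate
FROM dbo.Orders WITH (NOLOCK)  -- read without taking shared locks; may see dirty rows
WHERE OrderDate >= '20110101';
```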
|
Hi, thanks for your answer. I tried WITH (NOLOCK) on the select query, but it still takes time to insert the new rows when the select is running; the insert waits until the select finishes.
|
Like I stated, NOLOCK is a hint given to the database, and the database server might choose to ignore it. From what I know, at least for MS SQL, NOLOCK should solve the issue. But you could try to combine it with a ROWLOCK or PAGLOCK hint on the insert statement; see locking hints for more on this.
Keep in mind that a large set of indexes could also cause the locking you are experiencing, as an insert will update the indexes and could trigger a re-index.
|
Is your SP speed acceptable? Or should it really execute much faster when done properly? Did you look into your indexing scheme?
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
The SPs are fine and the tables are well indexed; the search results involve very complicated calculations running on millions of records. The problem is that while the select runs for a client to see in the application, no inserts are allowed.
|
I guess Gerben's answer applies then.
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
I would certainly look into ways of speeding up the query. Perhaps copy the relevant data to another table and do the calculations there.
|
I agree that you might want to build a "reporting" table that is refreshed nightly, hourly, whatever, and have the users query against that table. Sometimes the data doesn't need to be up to the minute... just make sure the end user knows that the data may be somewhat stale.
I've used this method for a few reports: the first user who initiates the report takes the hit and creates the reporting dataset for the day. Every subsequent request for the report goes against that dataset; each day a new dataset is created only if someone requests it. Each time a new dataset is requested, all previous datasets are deleted.
Just a thought.
Good luck
|