|
If you use an ORDER BY in your query, you could turn your original query with the TOP 2 into a subquery and select from it again, reversing the direction of the ORDER BY and setting TOP 1 on the outer query.
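The trick can be sketched with SQLite, using LIMIT in place of SQL Server's TOP (the Orders table and its columns are hypothetical):

```python
import sqlite3

# In-memory database with a hypothetical Orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER, OrderDate TEXT)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(1, "2004-01-10"), (2, "2004-02-05"), (3, "2004-03-20")])

# Inner query: the original TOP 2 (the two newest orders).
# Outer query: reverse the ORDER BY and take TOP 1,
# which yields the second-newest order.
row = conn.execute("""
    SELECT OrderID, OrderDate
    FROM (SELECT OrderID, OrderDate
          FROM Orders
          ORDER BY OrderDate DESC
          LIMIT 2)
    ORDER BY OrderDate ASC
    LIMIT 1
""").fetchone()
print(row)
```

In SQL Server the same shape would be SELECT TOP 1 ... FROM (SELECT TOP 2 ... ORDER BY ... DESC) ... ORDER BY ... ASC.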
EuroCPian Spring 2004 Get Together[^]
"You can have everything in life you want if you will just help enough other people get what they want." --Zig Ziglar
|
|
|
|
|
I populate my object with data from the database into an ArrayList. Later I add data to the object using the IList members. The problem now is: how will the changes in the object take effect on the database? Do I need to perform an INSERT command against the database and add to the object at the same time? Please help!
|
|
|
|
|
What language are you using? If you're on the .NET platform, why don't you use a DataSet instead?
Mazy
"A bank is a place that will lend you money if you can prove that you don't need it." - Bob Hope
|
|
|
|
|
How do I get minor errors from an OLE DB provider? For example, if I get E_FAIL, how would I get the minor error associated with it?
|
|
|
|
|
You have everything in HRESULT; I don't think there is anything related to a minor code.
You have the following macros:
HRESULT_CODE
HRESULT_FACILITY
HRESULT_FROM_NT
HRESULT_FROM_WIN32
HRESULT_SEVERITY
Take your pick.
P.R.A.K.A.S.H
|
|
|
|
|
No, you have a result in HRESULT; generally this is E_FAIL. However, in most cases you can then use IErrorInfo with the interface that failed to get more useful information. I'm fairly sure, for instance, that in the ATLDBCLI.H shipped with VC6.0 there's a function you can call to rip the info out and dump it to a trace window (OutputDebugString or ATLTRACE). You could check that function and see how to do it yourself.
Steve S
|
|
|
|
|
Hi,
I am developing a Windows.Forms app. I am using a multi-tier architecture with data-bound strongly typed DataSets. I have a data access component (a Facade) with this method:
Public Sub LoadCompany(ByVal companyID As Integer, ByRef ds As CompaniesDataSet)
    ds.Clear()
    ' ...code to set the parameter on the select command
    DACompanies.Fill(ds)
End Sub
It seems that ds.Clear() takes more and more time to complete, from a few milliseconds at the start to 4-5 seconds after 100 invocations.
Does anyone have an answer to this?
Thank you,
Dan Bunea
|
|
|
|
|
It would be helpful if you could post approximate sizes: how many tables are in your DataSet? How many rows in the DataTables? How big is a row?
I suspect you're suffering from a mid-life crisis[^]: the garbage collector is having to collect a lot of medium-aged objects. Instead of passing in a reference to a DataSet, you should probably just create a new one within LoadCompany and return it. Turn LoadCompany into a function:
Public Function LoadCompany(ByVal companyID As Integer) As DataSet
    Dim ds As New DataSet
    ' ...
    DACompanies.Fill(ds)
    Return ds
End Function
Keeping the dataset around typically does you no favours; it just wastes memory. A DataSet object is not terribly expensive to create.
Earlier today I was looking at System.Data.dll using Lutz Roeder's Reflector[^], which seems to indicate that Clear doesn't delete any tables already contained in the dataset. It empties them, yes, but doesn't delete them. The DataTable keeps its DataRows in an array, whose size doesn't change when Clear is called (although the rows are freed). You might find that memory is being wasted if a large number of rows have been used in the past.
If you do decide to keep the DataSet around, Reset will clear any contained tables.
Don't fight the garbage collector.
On a final note, there's no need to pass a reference type by reference unless you want to make the original reference (in the calling function) point to a different object. It's important to understand the difference between a value type and a reference type. I simply think of a reference type as being the same as a C++ pointer.
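The same distinction exists in other languages; here is a Python sketch of the difference between mutating through a reference (which the caller sees without ByRef) and rebinding the reference itself (which is what ByRef would make visible). The function names are made up:

```python
def fill(ds):
    # Mutating through the reference: the caller sees this change,
    # even though the reference itself was passed "ByVal".
    ds.append("row")

def replace(ds):
    # Rebinding the local name only: the caller's reference is
    # unaffected. Making this kind of change visible is what
    # ByRef is for.
    ds = ["new"]

data = []
fill(data)
replace(data)
print(data)
```

After both calls, `data` contains the appended row, while the rebinding inside `replace` had no effect on the caller.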
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
Hi,
Thank you for your answer.
The DataSet is bound to a great many controls (about 50), so reinitializing it is not a solution, because I would lose my bindings (of course, during load they are suspended through their binding context).
I have posted a more detailed explanation about this (with screenshots and code) at:
http://www.geocities.com/danbunea/posts/DataSetClear.htm
Could you tell me more about using Reset()? The MSDN documentation doesn't seem to offer much info.
About ByRef you are right, I could have used ByVal.
Thank you very much for your answer.
Dan
|
|
|
|
|
I see you're using a typed dataset generated by Visual Studio from an .xsd file. Using Reset on a typed data-set doesn't help, it just breaks the association between the typed DataTable-derived classes and the DataSet's Tables property. It therefore clears the data the first time, but subsequent calls to Reset do nothing. My bad.
I wonder whether retaining the complete set of data (i.e. for all companies) and using a CurrencyManager object to manage the currently displayed company would work better than clearing the dataset when fetching the next company's data?
Looking at your code again, I note that you're passing the dataset to Fill, not the dataset's typed data tables (objects derived from DataTable). If you don't have a DataTableMapping assigned to the DataAdapter, I believe the new data will be added to the DataSet as 'Table' (if you have multiple result sets returned from the query, the second result set will be returned as Table1, the third as Table2, etc). I don't know if this is happening, but it may be better to bind directly to the tables.
Me.DACompanyContacts.Fill(ds.CompanyContacts)
Me.DAGrades.Fill(ds.Grades)
Me.DAQuestionsLookup.Fill(ds.QuestionsLookup)
Me.DAEmployees.Fill(ds.Employees)
Me.DAInvoices.Fill(ds.Invoices)
Me.DATrainingCourses.Fill(ds.TrainingCourses)
Me.DATrainingNeeds.Fill(ds.TrainingNeeds)
Me.DATrainingPlans.Fill(ds.TrainingPlans)
Me.DACompany.Fill(ds.Company)
This may be worthwhile anyway because the work to go through the schema mappings is quite extensive. If it is happening, you may have a lot of stray DataTable objects hanging around.
I can't see any other problems with this code, so I'm assuming that it's a memory problem. You may want to try the CLR Profiler[^] to study the memory behaviour of your program.
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
Hi,
Thanks again for your answer. I decompiled the Reset method and saw that it uses Clear internally. Probably I'll just move to .NET 1.1 and see what happens.
Loading the tables works nicely, so I do not have to call
Me.DACompanyContacts.Fill(ds.CompanyContacts) for each table; just using it as I did works fine.
Thanks again,
Dan
|
|
|
|
|
I am returning a DataSet containing a DataTable with nearly 50,000 rows from a COM+ server application. When I invoke the method that returns that DataSet, it sometimes throws an out-of-memory exception; other times it returns the DataSet after a very long wait. If I make it a COM+ library application, it becomes much faster. What should I do? I think the scenario will remain the same for a web or Windows application. I can't compromise on the data; it will grow day by day.
Dhruba
|
|
|
|
|
When you transfer your dataset, are you simply:
shifting records, locking your program until the records clear?
If so, I recommend you run the function on its own thread; that way you can wrap it in a try/catch, and when the error is caught, issue the "continue;" command.
Hope this helps
|
|
|
|
|
No offense, but this sounds like a flawed design.
I presume you are transferring the whole dataset as an object using remoting. This means the whole thing gets serialized as XML (even if you override the serialization for the dataset to binary, the contained DataTable will serialize to an XML diffgram first, then to binary). The result is a lot of memory consumed at both ends for serialization/deserialization, as well as a significant amount of time. One solution is to convert the DataTable into an array or collection, serialize that as binary, then recreate the dataset at the client end.
If you need the metadata for the dataset, not just the data, you should create a copy that has an empty DataTable, pass that, then pass the data and reconstruct at the client.
This will reduce the memory footprint by as much as an order of magnitude.
Even so, if the dataset is growing without bound, you will ultimately run out of memory. A better approach is to transfer the data a few rows at a time, and preferably restrict the transferred data to the 'needed' rows only.
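A page-at-a-time transfer can be sketched with SQLite standing in for the real data source (the table name and page size are made up for illustration):

```python
import sqlite3

# In-memory stand-in for the server-side data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Data (ID INTEGER PRIMARY KEY, Value TEXT)")
conn.executemany("INSERT INTO Data (Value) VALUES (?)",
                 [(f"row{i}",) for i in range(10)])

# Fetch the rows a page at a time instead of all at once,
# keeping the per-call memory footprint bounded.
PAGE_SIZE = 4
pages = []
offset = 0
while True:
    rows = conn.execute(
        "SELECT ID, Value FROM Data ORDER BY ID LIMIT ? OFFSET ?",
        (PAGE_SIZE, offset)).fetchall()
    if not rows:
        break
    pages.append(rows)
    offset += PAGE_SIZE
print([len(p) for p in pages])  # page sizes: [4, 4, 2]
```

The client would request each page in a separate round trip, so neither side ever holds the full 50,000-row result in a single serialized payload.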
Good luck!
Power corrupts and PowerPoint corrupts absolutely. - Vint Cerf
|
|
|
|
|
I am trying to call Oracle stored procedures from MS SQL Server via a linked server connection.
For every kind of Oracle stored procedure, I receive several error messages from the Oracle OLE DB or ODBC provider saying that there are syntax errors.
I am looking for a solution.
Thanks.
|
|
|
|
|
Show the complete code that you're using to call the Oracle stored procedure, please, including OPENQUERY or whatever else, the error text, etc.
Thank you.
Jeff Varszegi
|
|
|
|
|
I am trying to connect to an Excel file to read the data in ASP using ADO. I have established a connection, but I am not sure why the following SQL statement does not work:
strCmd = "SELECT * from Prices"
I get an error saying:
The Microsoft Jet database engine could not find the object 'Prices'. Make sure the object exists and that you spell its name and the path name correctly.
Why is this error being caused?
'Prices' is the sheet in the Excel file that the data is in; is this the correct thing to enter?
Thanks
|
|
|
|
|
According to the MSDN documentation, you either need to name the range in the Excel file or append a $ to the object you reference in the spreadsheet. Try something like:
strCmd = "SELECT * FROM [Prices$]"
I can't remember if you need the square brackets, but I know the $ helps. If you want more info on naming the range in Excel, look at the help for the Insert/Name command.
Hope this helps.
Jeff
|
|
|
|
|
Actually, "prices" would look for a named range of that name, while "prices$" looks for a sheet named 'prices', which is why his query fails.
Power corrupts and PowerPoint corrupts absolutely. - Vint Cerf
|
|
|
|
|
1) What is the difference between varchar and nvarchar?
2) Does anyone know the maximum length of an e-mail address and of a URL?
Thanks to all
|
|
|
|
|
|
Hope this helps...
Question 1
Char[(n)]
Fixed-length non-Unicode character data with length of n bytes. n must be a value from 1 through 8,000. Storage size is n bytes. The SQL-92 synonym for char is character.
*Use char when the data values in a column are expected to be consistently close to the same size.
*Use varchar when the data values in a column are expected to vary considerably in size.
nvarchar(n)
Variable-length Unicode character data of n characters. n must be a value from 1 through 4,000. Storage size, in bytes, is two times the number of characters entered. The data entered can be 0 characters in length. The SQL-92 synonyms for nvarchar are national char varying and national character varying.
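The "two times the number of characters" figure reflects the fact that nvarchar data is stored as UTF-16; a quick sketch of that arithmetic (valid for characters in the Basic Multilingual Plane):

```python
# nvarchar stores UTF-16: two bytes per character for BMP text.
for s in ["hello", "café", "日本語"]:
    utf16_bytes = len(s.encode("utf-16-le"))
    print(f"{s}: {len(s)} chars -> {utf16_bytes} bytes")
    assert utf16_bytes == 2 * len(s)
```

So an nvarchar(100) column reserves up to 200 bytes of storage, versus 100 bytes for varchar(100).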
Question 2
MAXLENGTH sets the maximum number of characters for text or password fields.
I hope that answers question 2... if not please explain?
Scratch... Thanks Jon I misunderstood his question...
|
|
|
|
|
Thank you, John and Bryan.
My question is because I need to create e-mail and URL fields in my database, and I would like suggestions for the size of these fields...
|
|
|
|
|
An nvarchar column is interpreted and stored as UTF-16. If your source language uses UTF-16 for native strings (e.g. VB6, C#, Java, VB.NET), use nvarchar to save all the overhead of translation to and from byte-oriented character sets, and any possibility of trying to store characters not present in the configured collation's byte-oriented character set (which can lead to round-tripping problems). If your source data is in a byte-oriented character set, it may still be worth using nvarchar to avoid problems in conversion between different character sets, if your database is configured with a collation that uses a different character set to your source data.
There is no standard maximum length for URLs, and e-mail addresses can also be long (RFC 2821 effectively caps an address at around 256 characters, but that is still sizeable). To store these in SQL Server without risk of truncation, use a text or ntext column.
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
Hi Mike, thank you for the response
What do you suggest? I am torn between a length of 50 and 255 for e-mails and URLs...
E-mail and URL sizes appear to vary greatly, and I do not wish to waste space by setting a field too large, nor to lose important data because of a field that is too small, you understand?
Again, thank you very much!
|
|
|
|