|
Hi all,
For my app I retrieve SOAP data (using the PayPal API), which eventually has to be written to an Excel file. I don't have any experience with web applications, XML, serialization etc., so I'm afraid my idea of assigning each field to a property of a 'Transaction' class might be useless, or at least the long way round.
From my logfile I can see that it's transferred in the format
<Status xsi:type="xs:string">Completed</Status><GrossAmount xsi:type="cc:BasicAmountType" currencyID="EUR">41.80</GrossAmount> and I try to fill the class this way ((...) = abbreviated):
For Each transactionDetail As GetTransactionDetailsResponseType In detailsList
    If transactionDetail(...).LastName IsNot Nothing Then
        Dim t2 As New Transaction
        With t2
            .Transaction_ID = transactionDetail(...).TransactionID
            .Street1 = transactionDetail(...).Street1
            .Quantity = Val(transactionDetail(...).Quantity)
        End With
    End If
Next
From some web sources – which all seem to be written for experts – I assume there must be much easier (and less error-prone) ways to process the data, but I don't know where to start or where to find code examples.
Would someone be so kind as to give me some guidance?
Thank you very much,
Mick
|
|
|
|
|
Honestly, AFAIK, there is nothing that is much easier than this.
One could use reflection to read the properties of the proxy object and then assign them to similar properties in your model - but then you need to ensure the model property names match, and that does sound difficult.
Perhaps someone else can provide a better solution.
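If you do want to try the reflection route, here is a minimal sketch of the idea. Everything in it is illustrative: `CopyMatchingProperties` is a made-up helper, and it assumes your Transaction class uses the same property names and compatible types as the proxy object.

```vbnet
' Sketch only: copies every same-named, type-compatible property from a
' source object to a target object. Assumes matching property names.
Imports System.Reflection

Module PropertyMapper
    Public Sub CopyMatchingProperties(source As Object, target As Object)
        Dim targetType As Type = target.GetType()
        For Each sourceProp As PropertyInfo In source.GetType().GetProperties()
            If Not sourceProp.CanRead Then Continue For
            ' Find a writable property with the same name on the target.
            Dim targetProp As PropertyInfo = targetType.GetProperty(sourceProp.Name)
            If targetProp IsNot Nothing AndAlso targetProp.CanWrite AndAlso
               targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType) Then
                targetProp.SetValue(target, sourceProp.GetValue(source, Nothing), Nothing)
            End If
        Next
    End Sub
End Module
```

You would then replace the With block with something like CopyMatchingProperties(transactionDetail, t2). Note this only copies properties at one nesting level; the nested PayPal objects would still have to be walked into separately.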
|
|
|
|
|
Thank you for the advice, anyway!
Abhinav S wrote: use reflection to read the properties of the proxy object and then assign them to similar properties
I'd still give it a try, since I know all the names and datatypes, so I could rename my 'Transaction' class's properties to match. But how would I code this? Could you perhaps give me a few lines of sample code?
|
|
|
|
|
Hi
I am having a problem with "Too Many Connections" in accessing MySQL from VB.Net.
The issue is that I am loading a huge amount of data from Excel into MySQL using a VB grid. There is a check that skips rows that are already present, based on a certain key. If a lot of this data is already in the MySQL table, then even though I close the connection using MySqlCommand.Connection.Close(), it still leaves a thread in MySQL, resulting in a huge number of connections left open.
Can anybody help me in solving this issue?
Thanks in Advance
|
|
|
|
|
I had a similar problem with one of my programs, and my solution was to open the connection at the beginning of the program, then re-use that connection each time I had to query the database.
You get the best performance if you have a single SqlCommand and make use of SqlParameters, so that you only change the parameter values for each different query.
Basically: one connection, one SqlCommand, keep changing the values of the parameters.
See if that works for you.
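As a rough sketch of that pattern with Connector/NET (the table name, column name, and helper are made up for illustration):

```vbnet
' Sketch: one open connection and one command reused for many lookups,
' changing only the parameter value each time through the loop.
Imports MySql.Data.MySqlClient

Module LookupSketch
    Public Sub CheckKeys(keys As IEnumerable(Of String), connectionString As String)
        Using conn As New MySqlConnection(connectionString)
            conn.Open()
            Using cmd As New MySqlCommand(
                "SELECT COUNT(*) FROM orders WHERE order_key = @key", conn)
                cmd.Parameters.Add("@key", MySqlDbType.VarChar)
                For Each key As String In keys
                    cmd.Parameters("@key").Value = key
                    Dim exists As Boolean = CLng(cmd.ExecuteScalar()) > 0
                    ' ... skip the insert if the row already exists ...
                Next
            End Using
        End Using ' the one connection is closed here, at the end
    End Sub
End Module
```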
|
|
|
|
|
Considering that a connection license for some SQL Servers can be very expensive, and that the connection would be completely idle for the vast majority of your application's lifetime, that is horribly bad practice.
Standard "play nice and share with others" practice is to open the connection, execute your query, then close the connection as soon as possible. Do that and you won't hog a license, preventing others from using it and annoying the DBAs.
|
|
|
|
|
You are right, Dave, but I think there is a small difference between SQL Server and MySQL. I am not very conversant with SQL Server, but I believe that when you close a connection there, it actually closes the thread. In MySQL, as far as I have seen, even if you close the connection it does not really close; it stays in "SLEEP" mode and hence creates problems. BTW, I am using the command MySQLCLIENT.Connection.Close() to close the connection. Unfortunately this command leaves the connection as it is, as can be seen using processlist. If I am using the wrong command, or there is another command to close the connection, I would really like to know that.
Anyway thanks a lot for the input.
|
|
|
|
|
I don't use MySQL and so am not an expert on it. I just pointed out that the practice of holding open a connection for the life of an application is greatly frowned upon and should never be used in production code.
Fix the problem, don't work around it.
|
|
|
|
|
MySQLCLIENT.Connection.Close()
Try calling Dispose when you're done. Wrap 'em in a Using block, and they get cleaned up correctly, without the need to call Dispose explicitly.
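For instance, a minimal sketch of that pattern with Connector/NET (connection string and query are placeholders):

```vbnet
' Sketch: Using blocks call Dispose automatically, even when an
' exception is thrown, so the connection is always released.
Imports MySql.Data.MySqlClient

Module UsingSketch
    Public Function RunScalarQuery(connectionString As String) As Object
        Using conn As New MySqlConnection(connectionString)
            conn.Open()
            Using cmd As New MySqlCommand("SELECT 1", conn)
                Return cmd.ExecuteScalar()
            End Using
        End Using ' Dispose (and thus Close) runs here automatically
    End Function
End Module
```

If connections still show up as SLEEP in processlist afterwards, that is usually the connection pool keeping them alive for reuse; adding Pooling=false to the connection string disables pooling, at some performance cost.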
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Maybe I should have been clearer. My application was a very short-lived program which executed about a thousand queries. The whole program lasted less than 3 minutes.
It was kind of a crazy situation where I was processing data from one resultset (MS-SQL) and issuing queries to another server (AS400 DB2 via ODBC). I found that creating one connection, one SqlCommand and changing the value of the parameters gave me the best performance.
I agree that generally you don't want to hold onto a resource any longer than necessary, but in my case the overhead of opening and closing connections to the other server was more than I wanted.
Sorry I can't be more helpful with a mySQL specific answer. I'm just telling you what worked for me.
BTW: After re-reading the requirements of the original post, I believe my method of making one connection at the top of the application would be just fine. The author is loading data from Excel. Right? How long could this program take to run? A few minutes at worst. Having a single connection to the database for that period of time should be no problem.
|
|
|
|
|
Thanks David. It really did work.
|
|
|
|
|
You might want to look at this[^].
and if that doesn't work, go through these[^].
|
|
|
|
|
I need Visual Basic code for a 25x25 Sudoku game... please, can anyone help me?
modified 23-Aug-12 21:58pm.
|
|
|
|
|
As a matter of fact, this is not urgent at all. Did you leave your homework to the last minute?
Why is common sense not common?
Never argue with an idiot. They will drag you down to their level where they are an expert.
Sometimes it takes a lot of work to be lazy
Please stand in front of my pistol, smile and wait for the flash - JSOP 2012
|
|
|
|
|
It may be urgent to you but not to anyone who volunteers their time answering questions here.
Simply put, your procrastination has gotten you in trouble. Good luck!
|
|
|
|
|
Not only have you left your homework to the last moment... you've not even attempted to Google...
In my first attempt with "visual basic sudoku code" I found this: Visual Basic Sudoku Solver and Generator[^]
It's an article here on CodeProject that shows how to do it.
|
|
|
|
|
|
Only 7 hours?? You need to wait at least 24 when you post on sites where people volunteer their time answering questions, including CodeProject.
Why are you doing it this way instead of just binding the grid to the DataTable? I see no reason to do otherwise, since you're using a DataTable as a backing store. And if you're using a DataTable, I don't see why you're using Virtual Mode: DataTables are rather medium-weight objects for storing data (not very efficiently) and are good for small-ish tables.
If you're using DataTables with Virtual Mode, you're kind of defeating the purpose of Virtual Mode.
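In case it helps, here is a bare-bones sketch of what Virtual Mode looks like when it is doing its job: the grid asks for only the cells it actually paints, via CellValueNeeded. The backing DataTable here just stands in for whatever paged store you use.

```vbnet
' Sketch: minimal Virtual Mode wiring. The grid requests each visible
' cell through CellValueNeeded instead of binding the whole table.
Imports System.Data
Imports System.Windows.Forms

Public Class GridForm
    Inherits Form

    Private table As New DataTable()       ' assumed to be filled elsewhere
    Private WithEvents grid As New DataGridView()

    Public Sub New()
        grid.Dock = DockStyle.Fill
        grid.VirtualMode = True
        grid.ColumnCount = table.Columns.Count
        grid.RowCount = table.Rows.Count
        Controls.Add(grid)
    End Sub

    Private Sub grid_CellValueNeeded(sender As Object,
            e As DataGridViewCellValueEventArgs) Handles grid.CellValueNeeded
        ' Supply only the single cell the grid is currently painting.
        e.Value = table.Rows(e.RowIndex)(e.ColumnIndex)
    End Sub
End Class
```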
|
|
|
|
|
Thank you for your answer.
To answer your question why: I have written an ad-hoc query tool for a very large database. I have a query that returns 300K+ records, and it takes the DataGridView forever to load all those records when the DataTable is bound to the DataSource property. That is why I chose to use just-in-time data loading.
Apparently the facts in my post were insufficient to make clear that I was dealing with large DataTables. My apologies for being unclear.
|
|
|
|
|
Phoenix Hawke wrote: ... returning 300K+ records and it takes the DataGridView forever ...
Such a design deserves to fail. Nobody will want to read so many lines on screen.
|
|
|
|
|
Bernhard Hiller wrote: Such a design deserves to fail. Nobody will want to read so many lines on screen.
Thank you for the enlightening review of my design.
Let me make clear what an ad-hoc query tool is, so the design will make more sense to you. An ad-hoc query tool allows users to design queries in a query designer, much like MS Access, using one or many tables and various fields. Given the large amount of flexibility such a tool offers, a software designer has to take into account queries that will return a large amount of data. I realize that users aren't going to want to see a large number of records. However, they also aren't going to want an interface that takes 10 minutes to load before they realize that the problem lies in the query they designed.
There is one more point I would like to bring forward. You offered absolutely no kind of solution to the issue. If you were trying to help, then you too have failed in your design. However, unlike you, I will offer a solution to yours: if you aren't going to provide useful or well-thought-out suggestions, but rather just criticism, then please do me a favor and move on to another thread where someone may be soliciting that kind of feedback.
Thanks
|
|
|
|
|
We also sometimes face the problem that a user selects filter values which would yield too many rows to show. And what do we do then?
We use a threshold for the row count. Of course, that threshold can be configured. If the row count is more than the threshold, a warning message is shown, and the user can decide to continue with the crazy amount of data (well, he was warned...) or stop and set new filter values.
|
|
|
|
|
Bernhard Hiller wrote: Also we face sometimes the problem, that a user selects filter values for the data which would yield too many rows to be shown. And what do we do there? We use a threshold for the row count. Of course, that threshold can be configured. If the row count is more than the threshold, a warning message will be shown, and the user can decide to continue with the crazy amount of data (well, he was shown a warning...) or stop to set new filter values.
Thank you for your reply.
That is indeed a very good suggestion. So how might I put this into practice? Is there some event within the SqlClient.SqlDataAdapter class that fires during the execution of the query and tracks the rows returned? Or is this something implemented while loading the records into the DataGridView? I found that using a For Each statement to load records row by row is very slow and inefficient. Is there another method that can be used to stop the load after a certain number of records have been loaded into the DataGridView?
Thanks
|
|
|
|
|
I do not know if there are events for that (I do not like DataAdapters, and normally use a DataReader). Before running the query that fetches the data, you can get the row count with a simple query: SELECT COUNT(*) FROM (original select query). Since Microsoft SQL Server caches a lot, that's not a costly overhead.
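A sketch of that count-first idea (the helper name, threshold handling, and dialog text are made up; it assumes System.Data.SqlClient and an already-open connection):

```vbnet
' Sketch: run a COUNT(*) wrapper around the user's query first, and only
' fetch the data if the user accepts the row count.
Imports System.Data.SqlClient
Imports System.Windows.Forms

Module RowCountCheck
    Public Function UserAcceptsRowCount(conn As SqlConnection,
                                        originalQuery As String,
                                        threshold As Integer) As Boolean
        ' T-SQL requires an alias on the derived table, hence "AS q".
        Dim countSql As String = "SELECT COUNT(*) FROM (" & originalQuery & ") AS q"
        Using cmd As New SqlCommand(countSql, conn)
            Dim rowCount As Integer = CInt(cmd.ExecuteScalar())
            If rowCount <= threshold Then Return True
            Return MessageBox.Show(rowCount & " rows match your query. Load anyway?",
                                   "Large result set",
                                   MessageBoxButtons.YesNo) = DialogResult.Yes
        End Using
    End Function
End Module
```

One caveat: if the user's query ends in an ORDER BY, that clause would have to be stripped before wrapping it in the derived table, since T-SQL does not allow it there.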
|
|
|
|
|
Thanks for the assistance. I will give that try.
However, I am still left wondering why the DataGridView behaves differently for the same query on two different attempts: why it's fast the first time but hangs the second, and why the blank rows contain data from the DataTable when the DataGridView isn't bound and no data has been loaded.
Perhaps someday I will find out why.
|
|
|
|