Comments by Member 11683251 (Top 35 by date)
Member 11683251, 19-Oct-16 3:58am
That seems to work; the file was created and the correct data is in it.
Thanks a lot! Post it as an answer and I'll accept it. :)
Member 11683251, 19-Oct-16 3:46am
OdbcDataReader myReader;
It runs the SQL query. This program grabs data from SQL and dumps it to Excel. Another Excel workbook imports this data into itself. I call this program from the importing Excel workbook via Shell.
Member 11683251, 30-Oct-15 10:20am
I have tested running a small span of files this way and it works perfectly for my purpose. I was already capturing errors, or so I thought, but your last comment made me realize I should probably do some testing. So I deleted a file and then allowed the program to continue, and found a small bug; after fixing it, the program logs the error nicely and continues. :)
Member 11683251, 30-Oct-15 9:57am
Well, we are retiring this system in a month, so I'll be free from VB6 then. I was thinking about rewriting it in .NET C#, but getting the layout of the form right might be a hassle.
I was also thinking about multithreading, but I feel it would be too much of a hassle for this task; it would probably take so long to get working that the time saved wouldn't justify the extra time spent.
Waiting for the process to finish seems to be exactly what I need. I'm not sure if it will add a few extra seconds per file, but if it allows me to leave my computer on over the weekend while it does the work, that doesn't matter.
Going to run with that, because it shouldn't take too long to implement. Thanks for the help. Much appreciated.
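For reference, the wait-for-process approach mentioned here can be sketched in VB6 roughly as below. This is a minimal sketch, not the poster's actual code: the kernel32 declarations are the usual ones for this pattern, and the command line passed in is a placeholder.

```vb
' Sketch: launch an external program with Shell and block until it exits.
Private Declare Function OpenProcess Lib "kernel32" _
    (ByVal dwDesiredAccess As Long, ByVal bInheritHandle As Long, _
     ByVal dwProcessId As Long) As Long
Private Declare Function WaitForSingleObject Lib "kernel32" _
    (ByVal hHandle As Long, ByVal dwMilliseconds As Long) As Long
Private Declare Function CloseHandle Lib "kernel32" _
    (ByVal hObject As Long) As Long

Private Const SYNCHRONIZE As Long = &H100000
Private Const INFINITE As Long = &HFFFFFFFF

Public Sub RunAndWait(ByVal cmdLine As String)
    Dim pid As Long, hProc As Long
    pid = Shell(cmdLine, vbHide)             ' Shell returns the process id
    hProc = OpenProcess(SYNCHRONIZE, 0, pid) ' get a waitable handle to it
    If hProc <> 0 Then
        WaitForSingleObject hProc, INFINITE  ' block until the process exits
        CloseHandle hProc
    End If
End Sub
```

Per-file this adds only the cost of the wait itself, which is why it suits an unattended batch run over a weekend.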
Member 11683251, 29-Oct-15 4:04am
That's a really helpful description. One of the main obstacles I tend to run into is that I try to solve my queries the way I would in normal programming: if this, do that; loop over this; etc. One workaround in my programs has been to run multiple queries and then use that data to get what I want.
Member 11683251, 29-Oct-15 2:06am
They must have updated it, because it's working flawlessly for me. Easy to install and configure. I also realized that as long as I know where the file gets saved and what it's named, I don't need to send the file path and name as arguments; I can just change them in my program after the print is done. I'm probably going to use Bullzip a lot more in the future.
Member 11683251, 29-Oct-15 2:03am
Thanks :) I managed a workaround, but after being advised to use CTEs to solve other problems, I can't help feeling that I should have been able to figure this one out myself. Anyway, thanks for the time.
Member 11683251, 28-Oct-15 10:21am
Thanks, I'll check it out. If I can get it to work the way I want, it would save a lot of time for me.
Member 11683251, 26-Oct-15 4:12am
After thinking about it, this might help better: multiple rows in fcontainerrow can have the same fcontainerid, which is why I sum fcontainerrow.units. fcontainerrow.fcontainerid = fcontainer.id.
Each of these rows contains a column called extkeyid. The value there maps to tempo.lagernr, and multiple entries in tempo can have the same value there.
Member 11683251, 26-Oct-15 4:07am
Still the same results. I realized that it's not tempo.temponr that the duplicates come from, it's tempo.lagernr: multiple entries in tempo can have the same lagernr. But selecting MAX(tempo.lagernr) instead still generates duplicates.
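One common way out of this kind of row multiplication is to collapse the duplicated side to one row per key before joining. The sketch below uses the table and column names mentioned in these comments (fcontainer, fcontainerrow, tempo, lagernr, extkeyid, units); the choice of MAX and the exact join keys are assumptions, not the actual schema.

```sql
-- Sketch: reduce tempo to one row per lagernr first, so joining it
-- cannot multiply the fcontainerrow rows being summed.
WITH TempoOne AS (
    SELECT lagernr, MAX(temponr) AS temponr
    FROM tempo
    GROUP BY lagernr
)
SELECT fc.id,
       SUM(fcr.units) AS units
FROM fcontainer fc
JOIN fcontainerrow fcr ON fcr.fcontainerid = fc.id
JOIN TempoOne t        ON t.lagernr = fcr.extkeyid
GROUP BY fc.id;
```

The key point is that the aggregation happens inside the CTE, before the join, rather than on the already-multiplied result.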
Member 11683251, 2-Sep-15 8:58am
Thanks for the help. I love it when something is easily done in SQL, as opposed to the twists and turns I've had to go through to make some queries behave as I want them to.
Member 11683251, 2-Sep-15 8:57am
Thanks that was exactly what I was looking for. Much appreciated.
Member 11683251, 14-Aug-15 3:23am
Added the message, but I did manage to fix it by changing the definition after looking at how the other tables were created. I'm not entirely sure why it works though.
Thanks for taking your time to help me. :)
Member 11683251, 14-Aug-15 3:03am
Thanks, it worked when I tested it on one existing table but not on another one. I posted the table definition; can you see what I missed?
Member 11683251, 11-Jun-15 9:19am
Bullseye ;)
Select Distinct CTE1.* and the duplicates are gone.
That, combined with the left join, did the trick. I see I wrote outer join in the last comment, but it was a left join.
Thanks for hanging in with me and solving it.
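Putting the two fixes from this thread together, the final query shape described here would look roughly like the sketch below. The CTE bodies are omitted and the column names (SubID, FcId, HyllId, Produktnamn) are taken from the surrounding comments; everything else is an assumption.

```sql
-- Sketch: LEFT JOIN keeps fcontainers that have no produktnamn in CTE2,
-- and DISTINCT removes the duplicated rows.
SELECT DISTINCT CTE1.*, CTE2.Produktnamn
FROM CTE1
LEFT JOIN CTE2 ON CTE2.SubID = CTE1.FcId
ORDER BY CTE1.HyllId, CTE1.x, CTE1.y;
```

Note that DISTINCT here papers over the duplication rather than removing its cause; if CTE2 can hold several rows per SubID, aggregating CTE2 first is the more robust fix.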
Member 11683251, 11-Jun-15 8:43am
I asked on the DBA Stack Exchange and have gotten some interesting info. One suggestion to use an outer join showed one thing I didn't consider: not all fcontainers have a produktnamn reachable via CTE2, which explains why the join produces so many fewer results.
Figuring out how to get those names shouldn't be too hard once I've cleared out the duplicates.
Member 11683251, 11-Jun-15 6:39am
I've gone over the CTE version several times now between other tasks, and it really bugs me that it doesn't do what I think it should.
We have two tables; we do an inner join on them and select a few things. If it were math, it should be like an intersection of the two sets, so why do I get such a weird result when both check out fine when run on their own?
To be honest, this has gone from a work task to something of a private curiosity.
Member 11683251, 10-Jun-15 8:34am
I did it in the CTE version of the problem first, but I also tried it in the sub-query version, with the same result: it only returns one record for the entire query. That's certainly one way to ensure there aren't any duplicates ^^
Member 11683251, 10-Jun-15 8:27am
But wouldn't that just result in a single record selected?
Member 11683251, 10-Jun-15 7:45am
The two solutions differ slightly in the results I get; the CTE one returns slightly more rows than the other. But what I noticed is that both return a lot of duplicate records.
The original query returns about 2500 records, while these return between 1700 and 1800, and that is with duplicates.
Take the CTE version, for example: running just CTE1 returns a set of 2500 records and CTE2 returns 6600. So far it's exactly as expected.
I can't wrap my head around how the part that joins those two tables manages to give me so many fewer records, and with duplicates, some rows tripled or more. The way I interpret it is that we are referencing CTE2, and if we find a match on the ids we basically append the name. That's not really what's happening, but it's how I picture it in my head.
Here are the changes I made.
SELECT CTE1.*, CTE2.Produktnamn
FROM CTE1
INNER JOIN CTE2 ON CTE2.SubID = CTE1.FcId
ORDER BY CTE1.HyllId, CTE1.x, CTE1.y
It feels like learning to program all over again, trying to learn SQL beyond just the basics.
Member 11683251, 10-Jun-15 6:38am
I was just trying to solve this using a CTE, since it seems to be such a wonderful tool, as you showed a while back in an answer to another question, but I didn't think of applying it to the new query too. Your solution works with a minor adjustment where you joined the two tables: it should be CTE2.subId = CTE1.fcid, and the table name in the ORDER BY should be CTE1.
The first answer, where you join the two sub-queries, is error-free but doesn't return anything. I'll check through it too, since it seems much closer to what I was trying to do.
Thank you for taking the time to write this helpful answer; I'm going to bookmark this one too.
Member 11683251, 8-Jun-15 6:48am
Here are the results from my final test, grabbing 24762 rows of data, 10 times as much as last time.
It took 102570ms to fill the listview, but writing from the listview only took 187ms. So if you already have a listview with data, it remains the quickest way to dump the data to file.
Iterating through the recordset and then printing took 97841ms, so a bit quicker than first filling a listview.
Using CopyFromRecordset took just 1810ms, so it remains the single best way to dump large sets of data.
Another test using 11905 records gave similar results: 47284ms to create the listview and 93ms to write to file.
Iterating through the recordset took 48469ms, while CopyFromRecordset took 3354ms.
Lastly, I added another case where I iterated through the listview and wrote to an .xlsx file instead, by opening an Excel workbook and worksheet, and this took 18533ms.
I'm not going to go further, but my conclusion is that .CopyFromRecordset is by far the quickest way to dump large amounts of data into a .csv, and preliminary testing shows that using .xlsx is slower. If you don't require the formatting, go for .csv.
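The CopyFromRecordset approach benchmarked above boils down to a single call on an Excel range. The sketch below shows the general pattern; `rs` is assumed to be an already-open ADO recordset, and the file path is a placeholder, not anything from the tests above.

```vb
' Sketch: dump an entire ADO recordset to a worksheet in one call,
' then save as CSV. rs is an open ADODB.Recordset (assumption).
Dim xl As Object, wb As Object
Set xl = CreateObject("Excel.Application")
Set wb = xl.Workbooks.Add
wb.Worksheets(1).Range("A1").CopyFromRecordset rs  ' one bulk transfer
wb.SaveAs "C:\temp\export.csv", 6                  ' 6 = xlCSV format
wb.Close False
xl.Quit
```

The speed difference comes from this being one bulk transfer rather than a per-cell (or per-row MoveNext) loop crossing the COM boundary thousands of times.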
Member 11683251, 4-Jun-15 9:43am
It probably is; as I said, the listview time doesn't take into account the time it takes to populate the listview. If I have time I'll check it tomorrow. My hunch tells me it will be a much worse option then.
But if you already have a listview with data, then it seems to work fine. I will also see if I can grab a few times more data and see how the different approaches behave.
Member 11683251, 4-Jun-15 3:04am
I did three different tests. The query pulls 5 columns, and I ran first 1208 rows, then 1832 rows; the last test was 2330 rows.
I printed using three different methods: printing from the listview to .csv, printing to .csv using .CopyFromRecordset, and printing to .csv by moving through the recordset using MoveNext.
My results are, for case 1: 32, 47, 62 ms. Case 2: 881, 905, 733 ms. Case 3: 7457, 8970, 11462 ms.
Both the listview and .CopyFromRecordset seem fairly constant, but walking through the recordset becomes slower fairly quickly. True, I didn't grab a very huge amount of data, but the pattern is quite clear so far.
Later I will also try to see if there is a difference when using .xlsx instead, since that's what one of our programs does in one case, and printing from the listview there on a set of data that's 18k rows and 20 columns wide takes forever.
One thing I didn't take into consideration this time is how long it took to populate the listview, which needs to be added to that time. I will also check this later, but just copying from the recordset seems like a good idea at the moment.
Member 11683251, 2-Jun-15 9:38am
At the moment the data is stored in a listview, which is populated by the user specifying dates for which to grab the data. But if it's indeed quicker to just transfer the data from a recordset, it might be worthwhile to save the recordset and add a function to print the entire viewed record. Currently we export the data the user selects. My workday ends in just a few minutes, but I'll test writing from a recordset when I get back on Thursday. It will be interesting to try a few different methods and benchmark them.
Member 11683251, 2-Jun-15 7:41am
Using a CTE seems perfect. It fits nicely with how I'm thinking when trying to come up with a solution, and is much more manageable.
That CP article was really nice too; it makes dealing with more complex queries a lot easier. I'd like to thank you for your help, both with the VB part and the SQL part. I have learned a lot and will most likely come back and reference this a lot in the coming time. I'm glad that this question has helped me learn and not just given me a copy-paste solution. So thanks for taking the time to help me. :)
Member 11683251, 1-Jun-15 9:29am
Updated the question; not sure if it's an actual improvement. I appreciate the help, but no stress, because my workday is soon over, so I will most likely be unable to get back here today. Small kids tend to reduce available time. XD
Member 11683251, 1-Jun-15 8:33am
Yea, I just figured out that I needed a schema.ini to do that; I found several links that discussed it, but none that stated up front what should go into it. Now I get my different fields. :)
I could try to do that, but I figured getting the query to work would be too much work, as I'm as new to SQL as I am to VB6. The data comes from 4 joined tables with 9 different fields, and every time I tried to check for duplicates I just got a bunch of aggregate errors. And that would only find the duplicates; then selecting between two or more depending on the values in one of the selected columns is a bit out of my league right now.
I thought I could cheat by doing it in the program instead, but as this has turned out, I'm not sure I've saved any time. At least I've learned a bit more about the up-and-coming language called VB6.
Member 11683251, 1-Jun-15 7:45am
Just pointing it to my file, it opens fine and works like a charm. I'm just having some trouble getting it delimited so I get each column; as it is now, each row contains one string including the semicolons. I tried adding FMT=Delimited to this line
connCSV.Open "Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=" _
& path & ";Extensions=asc,csv,tab,txt;HDR=NO;Persist Security Info=False"
and it still runs, but I can't get the values to separate.
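As the later comment notes, it is a schema.ini file next to the .csv (not the connection string) that tells the text driver how to split columns. A minimal sketch follows; the file name, column names, and types are placeholders, not the poster's actual data.

```ini
; Sketch of a schema.ini placed in the same folder as the .csv.
; Format=Delimited(;) makes the text driver split on semicolons.
[export.csv]
Format=Delimited(;)
ColNameHeader=False
Col1=Field1 Text
Col2=Field2 Text
```

One section per file name; without it, the driver falls back to the registry default delimiter, which is why every row came back as a single string.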
Member 11683251, 1-Jun-15 6:39am
Looks interesting; I wasn't aware that you could use a recordset for things other than SQL queries. Some other things have come up, but I will go through this once I've got the time.
Member 11683251, 1-Jun-15 6:36am
The csv is created by the program and then mailed to certain users. The data comes from a SQL Server 2012 server.
Member 11683251, 1-Jun-15 6:06am
Sadly, yes. Currently the entire system is VB6, and I've got the task of maintaining it as well as adding new features. We will most likely move away from it in the future, but not this year at least.
Member 11683251, 28-May-15 4:52am
Thanks! Then my extra log functions won't be in vain.
Member 11683251, 26-May-15 8:55am
Actually, you seem to be spot on. I tried using a bigger picture and now it works. It seems I misunderstood what the code did: I assumed it would just take a source image, use that as a basic tile, and fill out an area depending on how many times you looped.
Thanks! If you post it as a solution, I'd gladly accept your answer.
Member 11683251, 26-May-15 8:44am
50x50. As I said, one of them shows up (the first) but not the rest. Manually created picture boxes also show up fine using the same image.