|
Is there any tool/solution to export the tables, and then the data, from SQL Server to SQLite?
|
I know of those tools; they simply don't run on my machine. They are C# apps and probably need some .NET Framework version to be installed. But instead of digging into why they don't run, I am concentrating on finding a reliable tool/solution (which I haven't found yet).
|
So basically, you've put as much effort into getting the tools you've found to run as you have into asking your question: i.e., none.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
No, that's not true. For instance, I installed the .NET Framework for a C# application, but it still doesn't run. So what can I do to find out why that application is not working?
|
_Flaviu wrote: No, that's not true.
I suggest you re-read your own messages, remembering that nobody here can see your screen, access your computer, or read your mind. Do you really think you've put enough effort into describing the problem(s) you're having for someone to be able to help you?
From my perspective, your thread so far reads as:
Summary: I want a tool to do «x».
I've tried some tools - I won't tell you which - and none of them worked, probably because I didn't read the system requirements or install the prerequisites.
Rather than trying to fix the errors, or telling you what the errors are and asking for help with fixing them, I've added those tools to a secret shitlist of "unreliable tools".
I now demand that someone provide me with a tool that isn't on my secret shitlist - and no, I won't tell you what's on it! - that works first time, with zero effort on my part.
Remember, the effort someone is going to put into answering your question is directly related to the effort you put into asking it. If we have to keep dragging tiny snippets of information out of you in dribs and drabs, then why should we bother trying to help?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
|
I have succeeded in migrating a table from SQL Server by exporting the table to Excel, and then loading the Excel sheet into SQLite using Python. The relations between the tables, I guess, I will have to recreate manually.
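In case it helps anyone else, that Excel-to-SQLite step boils down to something like this; a minimal sketch, assuming pandas (plus openpyxl for .xlsx files) and Python's built-in sqlite3 module, with placeholder file, sheet, and table names:

import sqlite3
import pandas as pd

# Read the sheet that was exported from SQL Server (openpyxl handles .xlsx)
df = pd.read_excel("export.xlsx", sheet_name="Sheet1")

with sqlite3.connect("target.db") as conn:
    # if_exists="replace" drops and recreates the table on each run
    df.to_sql("mytable", conn, if_exists="replace", index=False)

Foreign keys don't come across this way, as noted; they have to be recreated by hand in the SQLite schema.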
|
An "Insufficient key column information for updating or refreshing" error is displayed.
|
Aside from the fact that you're still using software which is a whole quarter of a century out-of-date, and has been officially "dead" for over two decades, you haven't provided any information that could be used to help you.
Rather than giving us your "summary" of the error message, provide the full error message, and explain precisely what you were doing when you saw it. If it's caused by code, then show the relevant parts of your code.
You also need to explain what you have tried and where you are stuck.
With the limited information provided, the only "help" we can offer is to paste your error into a search engine.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Richard Deeming wrote: Aside from the fact that you're still using software which is a whole quarter of a century out-of-date,
Just noting that VB6 is still 'good', at least through the EOL of Windows 11. It won't be improved, but it won't stop working either.
Support Statement for Visual Basic 6.0 | Microsoft Learn[^]
|
By that argument, maybe we should all start our next projects in COBOL?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Richard Deeming wrote: all start our next projects in COBOL?
To be fair, COBOL got a new spec in 2023, and one can also create AWS Lambdas with it.
So perhaps Pascal is a better comparison?
|
Then ALGOL-60 would be much better!
|
jschell wrote: Just noting that VB6 is still 'good'
And that's when the violent vomiting started.
|
"Good enough for government work" - especially up here in North Yorkshire[^].
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Confirming the speed and mentality of government!
|
I'm doing some development in SQLite, and I have data: specifically, databases a la Mozilla (the Firefox front-office complex). My concern is that if my browser history gets cleared (something I think will happen automatically if I dare to upgrade/update), my cache of "data" will be destroyed, and all my bookmarks will be lost, as well as my extensive history.
Whether or not I'm right about the loss happening when I do anything to the current browser, does anyone know if an update will fix the SQLite functionality used by the Mozilla implementation? Specifically, will the "unixepoch" date/time function, which was introduced around 2022, be at my disposal?
[EDIT]
"DB Browser for SQLite" tells my it's using SQLite v.3.27.2 and "unixepoch" is not there yet.
[END EDIT]
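For reference, unixepoch() arrived in SQLite 3.38.0 (2022), so a 3.27.2 build won't have it. Here's a quick way to see which SQLite library a given tool links in, plus a fallback that works on older versions (a sketch using Python's built-in sqlite3 module):

import sqlite3

print(sqlite3.sqlite_version)  # version of the SQLite library actually linked in

conn = sqlite3.connect(":memory:")
if tuple(map(int, sqlite3.sqlite_version.split("."))) >= (3, 38, 0):
    ts = conn.execute("SELECT unixepoch('now')").fetchone()[0]
else:
    # strftime('%s', ...) returns the same epoch value (as text) on older versions
    ts = int(conn.execute("SELECT strftime('%s', 'now')").fetchone()[0])
print(ts)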
|
Not sure how relevant this is to your situation, but here's my 2¢:
The Firefox browser is normally very good at preserving bookmarks, history, etc. over software updates. I have some that are many years old, having survived tens of upgrades.
(BTW, the current Firefox is about 125.something, so 72 is waaay behind.)
Also, its storage of such things is neatly tucked away in a profile folder (the location is OS-dependent), so you can always squirrel away a copy of that folder; see the sketch below.
And you do have a current backup of everything relevant, of course.
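For example (the profile path here is an assumption; check about:profiles in the browser for the real one, and close Firefox first so the .sqlite files aren't mid-write):

import shutil
from datetime import datetime
from pathlib import Path

# Assumed Linux-style location; Windows and macOS paths differ
profile = Path.home() / ".mozilla" / "firefox" / "xxxxxxxx.default-release"
backup = profile.with_name(f"{profile.name}.bak-{datetime.now():%Y%m%d}")
shutil.copytree(profile, backup)  # raises FileExistsError if the copy already exists
print(f"profile copied to {backup}")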
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
RedDk wrote: current FFB is now 72.0.2
Not sure where you're getting that from; the current release is 125.0.3, and even the ESR release is 115.x! If you're still running v72 or v68, then you are putting yourself at risk, since there have been a lot of security vulnerabilities patched since then.
RedDk wrote: if I dare upgrade/update, my cache of "data" will be destroyed and all my bookmarks will be lost as well as my extensive history
I've been using Firefox since a very early version, and have literally never seen that happen. Even the "refresh Firefox[^]" option would preserve your bookmarks, history, passwords, and cookies.
If you're paranoid about losing your data, you can always backup your profile folder:
Back up and restore information in Firefox profiles | Firefox Help[^]
But presumably you're already doing that?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
I develop trading systems for stocks and other securities. To manage price data from various sources, I use a custom program written in VB.Net. This program maintains a LiteDB database for each underlying, resulting in several dozen database files.
I've found that LiteDB is very slow when inserting large amounts of data, and since I import a significant amount of data, I'm looking for a more efficient solution. Currently, I'm considering switching to a relational DB system like SQLite or MS SQL Server.
My data model includes a class for price data from CSV files, which contains the following fields:
"Identifier" (text)
"TimeStamp" (date/time)
"Open", "High", "Low", "Close" (single precision floating point numbers)
"Size" (integer)
The issue is that the identifier is identical for several hundred thousand data points, and the timestamps and price values often repeat. Since this data is distributed across various instruments, duplicate entries occur.
One possible solution would be to distribute this data class across multiple tables to avoid duplicate entries. However, I'm concerned that this may cause performance issues when inserting several hundred thousand data points.
Has anyone had similar experiences or advice to share?
Thanks in advance!
|
Are there no other NoSQL databases you can try besides LiteDB? If you go the relational route and attempt to normalize the flattened data being inserted, it's not going to get faster. I'm not sure if your import is meant to be real-time or not, so these are just general-purpose ideas...
These would be your options:
1) Are you using connection pooling, or opening a new connection with every insert? Perhaps that's the bottleneck. LiteDB may not have the concept of connections at all, but if it does, that's the first place to check.
2) Can you toss in more threads to this import process? Will LiteDB even handle concurrency or will it choke?
3) Determine why your current LiteDB is choking. Is the bottleneck in your code or in the DB? Is there a locked transaction not working? Is it thread-safe? Are you using more than one thread and getting locked? Etc.
4) If the above doesn't work, find a different NoSQL DB that doesn't choke. Get a real one, not one shipped as a DLL; one built for concurrency will have the best throughput even for a single user. I've used MongoDB; I'm sure there are others.
5) If none of that works, then go download MariaDB (a MySQL fork), but make sure this import table uses the MyISAM storage engine. SQL Server will not be anywhere near as fast, as it doesn't let you choose storage engines. This will be for unnormalized data only, with nothing fancy like triggers, constraints, or foreign keys, but MyISAM is fast precisely for that reason. And you can always have another import process/ETL transform the data, if needed, in non-realtime.
6) Since it's just an import, the fastest (but last-resort) solution would be to just write the data out to an append-only binary file; see the sketch after this list. You'd still need a process to import it into an actual DB, but that can be offloaded so the original process isn't bottlenecked.
Rene Rose 2021 wrote: One possible solution would be to distribute this data class across multiple tables to avoid duplicate entries. However, I'm concerned that this may cause performance issues when inserting several hundred thousand data points.
Duplication isn't an issue for writing, as long as your tables aren't indexed. Most NoSQL databases don't index, so chances are you're good. Reading is a different story, however.
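On option 6, here's a minimal sketch of the append-only binary log idea, assuming fixed-size records mirroring the fields listed above (the layout, field sizes, and file name are all guesses):

import struct
import time

# One record: 16-byte identifier, double timestamp, four floats (OHLC),
# and an int size; "<" means little-endian with no padding bytes
RECORD = struct.Struct("<16sd4fi")

with open("ticks.bin", "ab") as f:  # append-only; cheap sequential writes
    # struct pads/truncates the identifier to exactly 16 bytes
    f.write(RECORD.pack(b"AAPL", time.time(), 169.1, 170.2, 168.9, 169.8, 1000))

A separate reader can then walk the file in RECORD.size chunks and bulk-load it into the real database off the hot path.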
Jeremy Falcon
|
Cutting to the chase: both SQLite (always) and MS SQL Server (for development, certainly) are free. So, assuming that you've got your experience writing SQL in LiteDB and all your code, it should be rather easy to convert to either one of these and, without much ado, find out for yourself whether they solve the problems you have.
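If you want a quick feel for SQLite on the insert side, the single biggest speed lever is batching rows into one transaction; a minimal sketch, assuming the price fields described above (file, table, and column names are placeholders):

import sqlite3

# One sample row: identifier, timestamp, open, high, low, close, size
rows = [("AAPL", "2024-04-20 15:30:00", 169.1, 170.2, 168.9, 169.8, 1000)]

conn = sqlite3.connect("prices.db")
conn.execute("""CREATE TABLE IF NOT EXISTS prices (
    identifier TEXT, ts TEXT,
    open REAL, high REAL, low REAL, close REAL, size INTEGER)""")
with conn:  # one transaction for the whole batch, not one per row
    conn.executemany("INSERT INTO prices VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
conn.close()

Without the enclosing transaction, SQLite syncs to disk after every INSERT, which is usually what makes naive bulk loads look slow.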
|
Rene Rose 2021 wrote: The issue is that the identifier is identical for several hundred thousand data points,
Err... that is not a 'database' problem.
It is an architecture/design problem.
Can you store duplicate rows in a database? Yes.
Should you? That is an architecture/design question.
Certainly, if the rows are exactly the same, then there is no point in storing them. But even in that case, you can.
Rene Rose 2021 wrote: when inserting several hundred thousand data points.
Over what period of time?
One day? Easy. Any database can handle that.
One second? Then yes, that is going to be a problem. But even then, if it only happens once a day, it is not a problem. If it is continuous, then yes, it is a problem.
Let's say it is continuous, and you do 100,000 rows a second at 100 bytes per row.
Size = (24 * 60 * 60) * 100,000 * 100
If I did the math right, that means you are storing roughly 864 gigs a day.
So if that happens every day, you are going to need over 300 terabytes (roughly a third of a petabyte) of storage for each year. You might want to check your cloud pricing options before you dive in.
Not to mention that 100 bytes of data for one row is a very conservative estimate.
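Checking that arithmetic:

# Quick sanity check of the figures above
bytes_per_day = 24 * 60 * 60 * 100_000 * 100  # 864,000,000,000 bytes ≈ 864 GB
bytes_per_year = bytes_per_day * 365          # ≈ 315 TB, about a third of a petabyte
print(bytes_per_day, bytes_per_year)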
|