Hello,

Which database is better for MVC: MySQL or MS SQL Server 2012? I have worked with MySQL and it works fine with ASP.NET WebForms, but which one is better for MVC? I searched for this, but the answers did not help me decide.
Posted

Strictly speaking, the choice of the database engine has nothing to do with MVC or any other framework on top of ASP.NET. It's enough that it worked well with ADO.NET. This choice should take into account the requirements of your application, which you did not share with us. But please don't do it in response to this answer. This is really the job you need to do by yourself.

—SA
 
Comments
Member 11922776 8-Sep-15 1:18am    
Sir, the requirements are: the database takes continuous readings from devices located at different locations (the database is server-side). A separate table is maintained for each device, with 100 reading parameters per device. Afterwards, analysis will be done on the database entries.
The question arises because ASP.NET MVC and ASP.NET WebForms are Microsoft products, and so is MS SQL Server. That's why I have always heard that they are made to work efficiently with each other, whereas MySQL is from Oracle.
Sorry if my question is silly.
Sergey Alexandrovich Kryukov 8-Sep-15 1:49am    
No, it's not really silly, but can be based on some misconception, which is quite normal.
It can be done with different database engines. That's why I say, you have to compare the specs, match them with your expectations, and make the decision yourself. It's not related to a particular ASP.NET framework.
—SA
Member 11922776 8-Sep-15 3:09am    
OK sir, thanks. Your suggestions always help.
Sergey Alexandrovich Kryukov 8-Sep-15 9:07am    
You are welcome.
—SA
As already pointed out, the question about the database product is separate from the technology you use in the front end. However, based on the comments you have posted on other answers, I would consider a few things:

  • the database should have a fast storage engine and a good query optimizer
  • it should also be capable of partitioning data as the overall amount of data grows
  • the client-side library connecting to the database should be mature enough to include both single-row and bulk operations
  • the database engine should have good administration capabilities and utilities
  • and most likely you'll need support at some point, so that should be available
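To illustrate the single-row vs. bulk point, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for whatever driver you end up with (the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id INTEGER, param INTEGER, value REAL)")

rows = [(1, p, 0.5) for p in range(100)]  # 100 parameters for one device

# Single-row inserts: one driver round-trip per row
for row in rows:
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)", row)

# Bulk insert: one call for all rows, far cheaper per row with a mature driver
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

conn.commit()
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 200
```

With a server-based engine the difference is even larger, because each single-row insert also pays network latency.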

Based on these and my previous experiences, I'd probably choose either SQL Server or Oracle. Especially because of the maturity of the client library and the administration.
 
Comments
Sergey Alexandrovich Kryukov 8-Sep-15 13:00pm    
Good points, a 5.
—SA
Wendelius 8-Sep-15 13:18pm    
Thank you SA
Member 11922776 9-Sep-15 4:46am    
Thank you Mika.
It is better not to think about these two components as inseparable. Even more, design patterns such as repository[^] add an extra layer to separate business logic from data access. This is a good approach when you need to choose. Of course, as SA mentioned, if you want to use .NET, a properly mature ADO.NET driver for that data store is a must. If LINQ and an Entity Framework provider are also available, even better; in some cases you will use only those.
And at this point forget tables and the relational model for a second.
Based on your comment, you are collecting sensor data. What you haven't mentioned is the frequency: one reading per second, per minute, or per hour makes a huge difference. Is this continuous reading 24/7/365, or just for a short period? Can you afford batch inserts? If not, can the data store handle the continuous granular inserts in terms of performance? And finally: how big will your database grow (think also about your indexes, transaction logs, and so on)? How can you properly handle that size over time?
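Just to show how quickly these numbers add up, here is a back-of-envelope estimate; the device count and per-row byte cost below are assumptions for illustration, not measurements:

```python
# Back-of-envelope database growth estimate (all figures are assumptions)
devices = 10           # hypothetical device count
params = 100           # parameters per device (from the question)
freq_per_sec = 1       # one reading per second
row_bytes = 50         # rough per-row cost including index overhead (assumed)
years = 4

rows_per_year = devices * params * freq_per_sec * 60 * 60 * 24 * 365
total_bytes = rows_per_year * years * row_bytes
print(f"{rows_per_year:,} rows/year, ~{total_bytes / 2**40:.1f} TiB over {years} years")
```

Plug in your real device count and measured row size; the point is that per-second sampling reaches terabytes quickly.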
And you have to think about your analysis too, what kind of queries you need to issue.
As I see you want a time series database. Unfortunately, there are few dedicated TsDBs for Windows, even less with handy drivers.
You can also take NoSQL into consideration. Many of these implementations are enterprise-grade solutions, even with extensive feature sets for free. If you collect 100 parameters per device and the frequency is not too high, MongoDB or RavenDB, for example, can be a good choice.
Of course, you can use MySQL too. It is better in such a situation than SQL Server, as you have less resource-intensive storage engines to choose from. You can also think about low-profile engines like SQLite: low footprint, high performance. If you need to do the analysis logic in your application anyway, it is a good choice.
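A minimal sketch of the SQLite route; the schema, index, and sample values are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in a real application
conn.executescript("""
CREATE TABLE sensor_readings (
    device_id INTEGER NOT NULL,
    ts        INTEGER NOT NULL,   -- unix epoch seconds
    param_id  INTEGER NOT NULL,
    value     REAL
);
-- Composite index supports the per-device time-range queries analysis needs
CREATE INDEX ix_readings_device_ts ON sensor_readings (device_id, ts);
""")

conn.execute("INSERT INTO sensor_readings VALUES (1, 1441700000, 7, 23.5)")
row = conn.execute(
    "SELECT value FROM sensor_readings WHERE device_id = ? AND ts BETWEEN ? AND ?",
    (1, 1441600000, 1441800000)).fetchone()
print(row)  # (23.5,)
```

One table with a device_id column, rather than one table per device, keeps queries and maintenance simpler as the device count grows.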
 
Comments
Member 11922776 8-Sep-15 3:37am    
Hello Zoltan Sir,

Thank you for guidelines, Now I got what to ask.
about my database:
1) About the frequency: one reading per second.
2) Yes, this is continuous reading, 24/7/365.
3) I don't know about batch inserts or continuous granular inserts, but from the little info I found while searching, I think I don't use them; I fire an insert query after a time interval, meaning 1 sec, 5 sec, or 1 min.
4) How big will the database grow? => It grows as long as the devices send data to it, maybe continuously for 3-4 years. It is interrupted only if a device's power supply is off; when the power is back on, it starts sending data again.
i) Indexes? => which indexes?
ii) Transaction logs => transaction logs are maintained in a txt file.

5) Finally: how can you properly handle that size over time?
This fifth question really matters to me, because I have no idea how to deal with such a long-running, large database.

Please give me detailed information, especially for the fifth question.

Thank you once again for helping me put my question in a better way.
Member 11922776 8-Sep-15 5:05am    
Sir, please reply, mainly to the last question: how can someone properly handle that size over time?
Zoltán Zörgő 8-Sep-15 9:35am    
Hello. Well, what you have described here is clearly a topic for a TsDB, not any kind of RDBMS.
3) In general, inserting 1000 rows at once costs less (in terms of resources) than issuing 1000 individual inserts.
4) You need to calculate the raw data size: devices * sensors per device * readings per year * years * record size. If you add indexes to the table, that will cost you space and performance; still, you will need indexes if you want to issue queries against such a table. SQL Server's transaction log management is also an issue you have to deal with: it is not a text file, and it can grow really big.
5) If you choose any RDBMS, you should look for one that supports partitioning, or you need to implement some partitioning logic on your own. Both SQL Server and MySQL support partitioning, but if you can't drop old partitions automatically, they won't fit your needs. INSTEAD OF triggers or stored procedures can be used for even more dynamic partitioning.
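A sketch of the do-it-yourself partitioning idea, again with sqlite3 as a stand-in and invented table names (one table per month; real RDBMS partitioning does this transparently, but the principle of retiring whole partitions is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def partition_name(year: int, month: int) -> str:
    return f"readings_{year:04d}_{month:02d}"

def ensure_partition(year: int, month: int) -> str:
    # One table per month of data
    name = partition_name(year, month)
    conn.execute(f"CREATE TABLE IF NOT EXISTS {name} "
                 "(device_id INTEGER, ts INTEGER, param_id INTEGER, value REAL)")
    return name

def drop_partition(year: int, month: int) -> None:
    # Dropping a whole table is far cheaper than DELETE over millions of rows
    conn.execute(f"DROP TABLE IF EXISTS {partition_name(year, month)}")

ensure_partition(2015, 9)
conn.execute(f"INSERT INTO {partition_name(2015, 9)} VALUES (1, 1441700000, 7, 23.5)")
drop_partition(2014, 9)  # retire data that fell out of the retention window
```

The application routes each insert to the current month's table and periodically drops the oldest one, keeping total size bounded.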

Still, google for "time series database", and consider Linux for it.

As this looks like an industrial application, I would also consider using an OPC server like Kepware, with its historian. It is highly optimized, especially for this kind of use.

These could also be something to take a look at: TeaFiles, Deedle.
Member 11922776 8-Sep-15 10:42am    
Thanks for the reply. Honestly speaking, it's really tough for me to understand, because this is completely new to me. I have learned and worked with SQL Server and MySQL, but only creating a database and accessing it; I never thought about the problems that come when the data grows very large. My general thought was that applications facing such a problem simply increase the size of the physical storage device where the database is stored, or make the database smaller (simply put, ZIP it).

So sir, please understand my level and try to make your explanation easy for me to understand.

Is the MS SQL Server size limit 10 GB, or does it depend on the OS and system storage capacity? I think it's the hard-disk storage capacity of the system.

A lot of silly questions arise in my mind, but the answers are always confusing.

Once again, thank you for the reply.
Zoltán Zörgő 8-Sep-15 10:53am    
10 GB is the limit for SQL Server Express only. On a 64-bit system, you can have databases as large as your filesystem supports. Still, on SQL Server, 10 years of 1000 sensors at a 1/s frequency will cost roughly 25 TiB. But you can do the calculations yourself.
- Do you have that much space?
- How long do you think it will take to calculate a MIN/MAX even on a small portion of this? I can tell you: hours, even with indexes... Why? Because an index on a continuously increasing timestamp field does not help much...
- You can't ZIP a portion of a 25 TiB database.
- How will you do data backups? If your extra-large storage fails, you will lose everything. Can you afford that?
- And you have to think about the queries you will need to answer: what timespans will those cover?
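One common mitigation for the slow MIN/MAX problem (not specific to any product named above) is to maintain pre-aggregated rollups at insert time, so long-span queries scan a small summary table instead of billions of raw rows. A hypothetical sketch with sqlite3 and invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (device_id INTEGER, ts INTEGER, value REAL);
-- Hourly rollup kept up to date at insert time
CREATE TABLE rollup_hourly (
    device_id INTEGER, hour INTEGER,
    min_v REAL, max_v REAL,
    PRIMARY KEY (device_id, hour)
);
""")

def insert_reading(device_id: int, ts: int, value: float) -> None:
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)", (device_id, ts, value))
    hour = ts // 3600
    row = conn.execute(
        "SELECT 1 FROM rollup_hourly WHERE device_id = ? AND hour = ?",
        (device_id, hour)).fetchone()
    if row is None:
        conn.execute("INSERT INTO rollup_hourly VALUES (?, ?, ?, ?)",
                     (device_id, hour, value, value))
    else:
        # MIN/MAX here are SQLite's two-argument scalar functions
        conn.execute("UPDATE rollup_hourly SET min_v = MIN(min_v, ?), "
                     "max_v = MAX(max_v, ?) WHERE device_id = ? AND hour = ?",
                     (value, value, device_id, hour))

insert_reading(1, 7200, 10.0)
insert_reading(1, 7260, 3.0)
print(conn.execute("SELECT min_v, max_v FROM rollup_hourly").fetchone())  # (3.0, 10.0)
```

Dedicated time-series databases do this kind of downsampling for you; with a plain RDBMS you pay a small cost per insert to make long-range aggregates cheap.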

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)



CodeProject, 20 Bay Street, 11th Floor Toronto, Ontario, Canada M5J 2N8 +1 (416) 849-8900