|
Disclaimer: this approach is rooted in Domain-Driven Design (DDD), which is a common approach to NoSQL structuring. This is a super-rough look at entities vs. value types in DDD.
I'm generally in agreement with @lw@zi, but it depends on your use of states. If a data structure needs to be a domain-level entity, or is going to be directly referenced by multiple domain models, then it should have its own store; otherwise it should be nested in a parent. It really has nothing to do with how much data an individual data structure tracks.
If you will only ever present states in conjunction with countries, such as in form fields and address resolution, without additional selection vectors, then there is no reason to make them a reference and give them their own table.
If states are important on their own in the domain model or if multiple vectors might be used to access the data, such as if you have references to specific state agencies (NY DMV vs NC DMV, etc.), then you should make it a reference.
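To make the embedded-vs-referenced distinction concrete, here is a rough, language-neutral sketch in Python dicts (all names and ids are hypothetical, not from the post): a state as a value type lives inside its country document, while a state as an entity gets its own collection and is referenced by id.

```python
# Value type: states embedded in the parent country document.
embedded_country = {
    "_id": "country-ca",
    "ShortName": "CA",
    "States": [
        {"ShortName": "ON", "LongName": "Ontario"},
        {"ShortName": "QC", "LongName": "Quebec"},
    ],
}

# Entity: states stored in their own "collection", referenced by id.
state_collection = {
    "state-on": {"_id": "state-on", "ShortName": "ON", "LongName": "Ontario"},
    "state-qc": {"_id": "state-qc", "ShortName": "QC", "LongName": "Quebec"},
}
referenced_country = {
    "_id": "country-ca",
    "ShortName": "CA",
    "StateIds": ["state-on", "state-qc"],
}

def states_of(country):
    """Resolve states regardless of which modeling was chosen."""
    if "States" in country:
        return country["States"]  # already embedded in the document
    # referenced: the application does the "join" by id lookup
    return [state_collection[i] for i in country["StateIds"]]
```

The embedded form reads in one fetch; the referenced form costs a second lookup but lets multiple domain models point at the same state.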
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
For now, it's just a country and state dropdown. Change the country, and its states/provinces load in the states dropdown.
But I get what you're saying. I'll have to study Domain-Driven Design and domain-level entities.
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
I'm learning here; I had to change the model again because I haven't written the states into the country document yet, and I'm scratching my head over how to write the states. This is really different.
public class WEBSITE_COUNTRIES
{
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    public ObjectId Id { get; set; }
    public string DisplayId { get; set; }
    public string LongName { get; set; }
    public string ShortName { get; set; }
    public bool Enabled { get; set; }
    // Embedded array of states; omitted from the document when null.
    [BsonIgnoreIfNull]
    public IEnumerable<WEBSITE_STATES> States { get; set; }
}
{
"_id" : ObjectId("5b80576d4989bc2bfcf8ffc9"),
"DisplayId" : "5b80576d4989bc2bfcf8ffc9",
"LongName" : "Canada",
"ShortName" : "CA",
"Enabled" : true
}
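Writing the states then amounts to appending to the embedded array in the country document. A rough sketch of what an array-push update boils down to, using a plain Python dict as a stand-in for the collection (the id and country come from the sample above; the helper name is mine):

```python
# A toy "collection": documents keyed by _id, as the server would store them.
collection = {
    "5b80576d4989bc2bfcf8ffc9": {
        "_id": "5b80576d4989bc2bfcf8ffc9",
        "LongName": "Canada",
        "ShortName": "CA",
        "Enabled": True,
    }
}

def push_state(country_id, state):
    """Append a state to the country's embedded States array
    (conceptually what a $push update does)."""
    doc = collection[country_id]
    doc.setdefault("States", []).append(state)

push_state("5b80576d4989bc2bfcf8ffc9",
           {"ShortName": "ON", "LongName": "Ontario"})
push_state("5b80576d4989bc2bfcf8ffc9",
           {"ShortName": "BC", "LongName": "British Columbia"})
```

The key idea: there is no second table to write to; the states ride along inside the country document.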
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
Don't get wrapped around the axle; just treat it as you would a normal object with a collection property.
In that vein, the "BsonIgnoreIfNull" attribute isn't really necessary; it'll just serialize as "States": [] if the list is empty, and won't introduce any unexpected behaviors. Given your use case, and that you're just starting out with this, KISS will make life much easier.
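As a rough analogy (a toy serializer, not the actual BSON driver), ignore-if-null only controls whether a null property appears in the stored document at all:

```python
def to_doc(obj, ignore_if_null=()):
    """Toy serializer: drop fields listed in ignore_if_null when they are None."""
    return {k: v for k, v in obj.items()
            if not (k in ignore_if_null and v is None)}

country = {"ShortName": "CA", "States": None}

with_attr = to_doc(country, ignore_if_null=("States",))  # field omitted
without_attr = to_doc(country)                           # field kept as null
```

With the attribute the null field is simply absent from the document; without it, the field is written with a null value.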
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
OK
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
What is the "Enabled" for, then?
Also, if I were designing that per your requirements, I wouldn't put it in the database at all.
I would have a file-system document that provides the lists, then a class to load it. That way, if this is a real business and this is just the first pass, I already have an API (the class that loads it) to replace with a real external third-party source of address data.
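The file-backed approach described above can be sketched as a small loader class (the file shape and class name are hypothetical); swapping in a real address-data provider later only means replacing this one class:

```python
import json
import tempfile

class CountryListProvider:
    """Loads the country/state lists from a JSON file on disk.

    This class is the API seam: a real third-party address service
    could replace it later without touching any callers.
    """

    def __init__(self, path):
        self.path = path

    def countries(self):
        with open(self.path, "r", encoding="utf-8") as f:
            return json.load(f)

# Demo: write a tiny countries file and load it back.
data = [{"ShortName": "CA", "LongName": "Canada",
         "States": [{"ShortName": "ON", "LongName": "Ontario"}]}]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(data, f)
    path = f.name

provider = CountryListProvider(path)
loaded = provider.countries()
```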
|
|
|
|
|
That's a good idea; that data will rarely change.
I'll do that instead.
The Enabled flag is just legacy thinking from the past.
I've had customers tell me to remove, e.g., Hawaii from the list because of the high theft rate.
And maybe I should consider just putting it all in a console app like Richard mentioned.
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
jkirkerx wrote: I've had customers tell me to remove eg. Hawa
That would be a customer specific configuration then. So something that overrides the base.
|
|
|
|
|
Hi all... I have data in a transaction table. The table structure is:
trans_Id, Order_Id, Item_No, Order_Qty, Supply_Qty, Trans_Type, amnt_Climed_Type, amnt_Climed_Prcntage
Generally we get orders from clients, and based on the orders we supply material; at the time of supply itself we raise an invoice. The invoice claiming conditions are based on the order terms and conditions. The conditions are like this:
1) total quantity with total amount
2) partial quantity with total amount
3) total quantity with partial amount (like 10% on supply)
4) partial quantity with partial amount (like 30% of supply with 50% of amount on supply)
The transaction table contains data like this:
-------------------------------------------------------------------------------------------------------------
trans_Id | Order_Id | Item_No | Order_Qty | Supply_Qty | Trans_Type | amnt_Climed_Type | amnt_Climed_Prcntage
-------------------------------------------------------------------------------------------------------------
1 ordxyz 1 500 NULL O NULL NULL
2 ordxyz 2 1000 NULL O NULL NULL
3 ordxyz 3 100 NULL O NULL NULL
4 ordxyz 4 700 NULL O NULL NULL
5 ordxyz 5 600 NULL O NULL NULL
6 ordxyz 1 NULL 500 I F 100
7 ordxyz 2 NULL 300 I F 100
8 ordxyz 2 NULL 700 I P 30
9 ordxyz 4 NULL 500 I F 100
10 ordxyz 5 NULL 200 I P 70
11 ordxyz 5 NULL 150 I P 40
12 ordxyz 5 NULL 200 I P 30
13 ordxyz 5 NULL 120 I F 100
---------------------------------------------------------------------------------------------------------------
Trans_Type --- order or invoice (O for order, I for invoice)
amnt_Climed_Type --- full amount claimed or partially claimed (F = full, 100% claimed; P = partial amount claimed)
In the above data we have 5 order items. Of those:
FOR Item_No 1 (row 1): full quantity supplied (in row 6) and invoice raised at 100%, so Item_No 1 is completed.
FOR Item_No 2 (row 2): 300 quantity supplied (in row 7) and invoice raised at 100%, so 300 quantity of Item_No 2 is completed. The balance is 700 quantity.
FOR Item_No 2 (row 2): 700 quantity supplied (in row 8) and invoice raised at 30%, so the 700 quantity is supplied, but we still have to claim the same quantity for the remaining 70% (pending).
FOR Item_No 3 (row 3): 100 quantity not supplied, so Item_No 3 is pending.
FOR Item_No 4 (row 4): 500 quantity supplied (in row 9) and invoice raised at 100%, so 200 quantity is pending.
FOR Item_No 5 (row 5): we claimed 200 quantity for 70% (in row 10) and again claimed the same 200 quantity for 30% (in row 12), so this is completed.
FOR Item_No 5 (row 5): we claimed 150 quantity for 40% (in row 11), and the remaining 60% is not claimed (pending).
FOR Item_No 5 (row 5): we claimed 120 quantity for 100%, and this is completed.
FOR Item_No 5 (row 5): in total, 130 quantity still has to be claimed at 100% and 150 quantity at 60% (pending).
Now I want the output to show only the pending orders and pending amounts to claim, i.e.:
---------------------------------------------------------------------------------------------------
Order_Id | Item_No | Order_Qty | pending_Supply_Qty | pending_Prcntage
---------------------------------------------------------------------------------------------------
ordxyz 2 1000 700 70
ordxyz 3 100 100 100
ordxyz 4 700 200 100
ordxyz 5 600 150 60
ordxyz 5 600 130 100
------------------------------------------------------------------------------------------------------
Please help me regarding this... Thanks in advance.
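One way to derive the pending rows above can be sketched in Python (a sketch of the logic only, not tested against the real schema; it assumes that partial claims on the same quantity for an item belong to the same physical supply, as rows 10 and 12 do):

```python
from collections import defaultdict

# (trans_Id, Item_No, Order_Qty, Supply_Qty, Trans_Type, amnt_Climed_Prcntage)
# for order "ordxyz", transcribed from the table above.
rows = [
    (1, 1, 500, None, "O", None), (2, 2, 1000, None, "O", None),
    (3, 3, 100, None, "O", None), (4, 4, 700, None, "O", None),
    (5, 5, 600, None, "O", None),
    (6, 1, None, 500, "I", 100), (7, 2, None, 300, "I", 100),
    (8, 2, None, 700, "I", 30),  (9, 4, None, 500, "I", 100),
    (10, 5, None, 200, "I", 70), (11, 5, None, 150, "I", 40),
    (12, 5, None, 200, "I", 30), (13, 5, None, 120, "I", 100),
]

order_qty = {}                                   # Item_No -> ordered quantity
supplied = defaultdict(int)                      # Item_No -> units physically supplied
partial = defaultdict(lambda: defaultdict(int))  # Item_No -> qty -> % claimed so far

for _, item, oq, sq, ttype, pct in rows:
    if ttype == "O":
        order_qty[item] = oq
    elif pct == 100:
        supplied[item] += sq                     # fully claimed supply
    else:
        if partial[item][sq] == 0:
            supplied[item] += sq                 # count the physical supply once
        partial[item][sq] += pct                 # accumulate partial claims on that qty

pending = []                                     # (Item_No, Order_Qty, pending_qty, pending_%)
for item, oq in sorted(order_qty.items()):
    for qty, p in partial[item].items():
        if p < 100:                              # amount still to claim on supplied qty
            pending.append((item, oq, qty, 100 - p))
    rest = oq - supplied[item]
    if rest > 0:                                 # quantity not yet supplied at all
        pending.append((item, oq, rest, 100))
```

This reproduces the expected output rows for items 2 through 5; in real SQL the same grouping could be done with aggregation, but the pairing rule for partial claims would need a firmer key than "same quantity".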
|
|
|
|
|
There is a lot of talk about it on my radar. I just implemented it in a .NET Core 2 Angular app.
I'm not sure what to think of it yet; it's still too early, as I've only written 15 records (or documents) to it.
Very different indeed.
I like the part where I didn't have to create the table and was able to just write to it.
I'll have to write the Angular backend for it first so I can read the data.
Just curious.
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
I like it; I'd say it is the document-db version of SQLite.
But I would not use it for records or relational data. Just documents and other blobs.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
I used SQLite on the current version of my website and it works great!
Glad to hear the comparison between the two.
So for small stuff, like say:
Countries/States
Messages
Notes
Portfolios/images
it's fine.
But for large relations, use SQL:
Products/Categories/Vendors/Filters
Store Orders
I'm just trying to get an idea of how far I can push MongoDB without reinventing the wheel.
MongoDB would take the load off SQL Server.
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
jkirkerx wrote: So for small stuff like say:
Also for large documents. A MongoDB cache can span several computers, making it ideal for static blobs that are to be served.
SQL is usually optimized for relational data that changes frequently. MongoDB is optimized to serve cached blobs.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
oh!
Like image data?
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
Yes, images are a form of a blob
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Q:
Would I have to use GridFS to store images, given the 16 MB limit?
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
That's what the manual[^] says:
In MongoDB, use GridFS for storing files larger than 16 MB.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Thanks Eddy!
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
You're welcome
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
jkirkerx wrote: Would I have to use gridFS to store images, for the 16mb limit
Yes, that is basically the point.
However, that isn't really the differentiator. If you are storing binary data of any sort, then you only have two options: encode it (like Base64) or store it in GridFS.
If I needed to store streaming video, for example from a security camera where I need to keep the data for long periods and provide a way to sequence through it with time searches, then I would at least consider MongoDB a strong contender just for storing the video. I would test it, but I presume it is adequate. Excluding other business needs, I would also look into cloud file storage (security concerns might preclude the cloud).
|
|
|
|
|
Eddy Vluggen wrote: Also for large documents. A MongoDB cache can span several computers, making it ideal for static blobs that are to be served
Where "large" means what exactly? I was working with a MongoDB instance that served documents up to 100 MB, although normal ones probably ranged from 100 KB to 2 MB.
I didn't do much to specifically manage that.
At one point I was managing documents on a SQL database (Oracle, I believe) where the documents were stored in the file system and the database only kept a URL. That, of course, is exactly what I am doing now with AWS cloud and S3 storage.
|
|
|
|
|
jschell wrote: Where "large" means what exactly?
No specific threshold; it means that it is optimized for blobs, nothing else.
jschell wrote: At least at one point I was managing documents on a SQL database (Oracle I believe) where the documents were stored in the file system and the database only kept a URL.
MongoDB can also be spread over several computers. If it fits or works, use it.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
If you do not know SQL and do not want to know SQL, then it works.
In large business application domains it will not be adequate. But those often use disparate, non-complementary technologies anyway.
NoSQL **DOES NOT** eliminate the need to manage the database. I am never sure where people come up with this claim; probably because they only use it for toy applications. ANY persisted data store will, over time, require maintenance. Things change over time, and that means the data store(s) must be modified to fit those changes. Nothing fixes that.
Ignoring the need for future modifications only makes it more difficult when the need arises.
Getting the data model right is a problem unless you have experience in the business domain. Correcting the data model in NoSQL runs into the problem I discussed above, and it certainly seems harder to me in NoSQL. But then I have been using SQL for a very long time, so creating such solutions via SQL is going to be easier for me.
NoSQL has no advantage for normal business entities: things like users, accounts, etc. Again, that is based more on longer-term business needs, and on the fact that many, many tools exist to use SQL and thus those entities. For example, accounting might want to query the customers in the database via SQL. Since NoSQL has no standard API, tool support for NoSQL is not as complete. MongoDB has the advantage that it is the leader, so such tools might support it.
Constraint violation is a severe problem in NoSQL. It is a severe problem in SQL as well, unless you add the constraints in the first place, and doing that right can be complicated. There are no transactions either, which means you need to deal with rollbacks if there is an end-to-end problem. That's not insurmountable unless you don't plan for it in the first place. But incorrect usage of transactions can lead to weird and very hard-to-deal-with problems in SQL too.
In one MongoDB application, I concluded that the only way to fix the constraint and transaction problems was to write a server that wrapped it and presented a standard API. It would manage transactions, rollbacks, and constraint checking. Of course, at that point what I would have had was in fact a relational database (so it would have been pointless to use NoSQL).
I like SQL stored procedures. I can't speak to any NoSQL solutions except MongoDB, but I do not consider their solution a drop-in replacement for stored procs. It is more just a way to move normal business processing to the MongoDB server.
|
|
|
|
|
I've been using SQL for over a decade, and NoSQL is sort of a head-scratcher to me.
I'm just using it for training on my personal website to see what it's all about, and to see why it's on so many job postings. I currently use SQLite on my personal website on .NET Core 2.1 and it works great.
But MongoDB seems lightweight and can be used by any project that has access to it, which means I can use it from Docker containers. I'm not sure how to use SQLite from Docker containers.
Currently, I'm just storing contact messages from "contact us". I was going to try reviews and portfolios, which would be more complex, and you're right, I would have to get the class model right in order for portfolios to work. I may give up on it and go back to SQLite.
Overall, this Angular 6 project wrapped in .NET Core using MongoDB has been a pain in the butt.
Now I see why there are so many job postings for the .NET Core Angular 6 person with MongoDB as a plus. It may not be achievable on an industrial scale.
I can see storing Countries, States, Notes, reasons, selections on it to take the load off a SQL server.
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
|
|
|
|
I love NoSQL approaches, but you need to understand the nature and character of the data store that you're using.
I don't know about you, but when I plan the persistence models for domain objects that will be stored in a SQL store, I make a point of trying to generate relationships wherever practical. I know that component parts of a data structure may or may not be independently modified and I just generally strive for de-duplication. The end result, though, is a persisted model that does not remotely resemble my in-memory data structure.
What Mongo does, specifically, is to store a BSON (or Binary JavaScript Object Notation) representation of your data structure in a format called a document. It comes much closer to representing the domain model in the database as your application does in memory, albeit as dynamic types until de-serialized. By and large, if you're familiar with JSON and how your language of choice formats to it, you'll understand instantly how to work with data in a Mongo database.
Moreover, the stored data works pretty much just like standard OOP objects: they can contain value types or reference types. In this case, though, the reference types point to a Mongo store for a different data type with a UUID rather than a memory location. This means that you need a solid plan for which objects need to be persisted to maintain a cohesive database.
Now I'm not going to really disagree with Eddy, Mongo IS intended to store BLOBs, but the BLOBs in question are these BSON objects, generated from your classes/structures and squirreled away; not necessarily just images or videos.
As a programmer (and not a DBA), this approach makes more sense to me and is infinitely easier to work with, but it is absolutely not appropriate for all applications. For instance, this approach makes sharing data between applications considerably more difficult, and sharing data between languages next to impossible if the serialization strategies differ at all. If you need granular control over edits to specific fields or sets of fields, that must be relegated strictly to code, rather than assigning permissions at the database level. And while data can be related, it is not relational in nature; it's not fantastic for fine control over update cascades, and it's not what I would pick for a strictly CRUD app.
Anyway, have fun with it and see how it fits you. IMO it's a great tool for the box.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|