|
In all the languages I've coded in, there seems to be a general rule of thumb when looking at linked lists versus arrays and dynamic arrays.
Doubly linked lists are better when you add and remove items a lot, but searching and navigation are slower than with an array; an array is much faster to navigate, hands down, which makes searching, sorting, etc. just faster.
Where dynamic arrays get slow is when adding an item exceeds the "chunk" and the array needs to make a full copy of itself in a new memory location with a new empty chunk. By "chunk" I mean the number of array elements that are allocated but not yet used. As you fill the array, you eventually hit this limit, and adding one more item triggers the full reallocation-and-copy I'm talking about.
How .NET implements its dynamic arrays under the hood I'm not certain, but I'd imagine it isn't much different, as these are pretty standard data structures in any system I've used thus far. The fact that .NET has both lists and arrays suggests this is probably spot on.
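In .NET specifically, you can watch this "chunk" behavior through List&lt;T&gt;.Capacity. A rough sketch (the exact growth factor is an implementation detail, so treat the doubling as typical rather than guaranteed):

```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = -1;
        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // Count exceeded the old capacity: the list allocated a new,
                // larger array and copied all existing elements into it.
                Console.WriteLine("Count = {0}, Capacity = {1}", list.Count, list.Capacity);
                lastCapacity = list.Capacity;
            }
        }
    }
}
```

Each line printed marks one of those "thunks": a fresh allocation plus a full copy of everything added so far.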
It seems like everything in programming is a choice of the right tool for the job. Deciding can be tricky, especially when both approaches apply.
Good Luck!
Know way too many languages... master of none!
|
|
|
|
|
I created a data structure, SlimList, and wrote an article about it. Read that.
A SlimList is like a .NET List, except it avoids copying any elements on a resize. A .NET List will double the size of the underlying array, then copy the items over from the old array to the new array that is twice the size. A SlimList instead just creates an additional array that is the same size as the existing arrays combined, but does not copy any elements over. It does this by keeping a "major" array that holds all the references to the "minor" arrays. All operations are on par with List with respect to big-O notation (e.g., an index operation takes O(1) time). However, as it is currently implemented, SlimList is a lot slower than List (about 600x slower in some cases). With a little assembly code and some other modifications (that I've outlined in the article), it could be made to be about 2x slower than List. Probably not worth it, which is why I didn't implement the optimizations myself, but it's an interesting data structure that does overcome the array copy issue that exists with current implementations of List.
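To make the idea concrete, here is a minimal sketch of my own of the major/minor layout (an illustration only, not the actual SlimList source):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the "major array of minor arrays" idea.
// Minor array k holds 2^k elements, so each new minor array doubles the
// total capacity and no existing element is ever copied on a "resize".
class SlimListSketch<T>
{
    private readonly List<T[]> minors = new List<T[]>(); // the "major" list
    private int count;

    public int Count { get { return count; } }

    public void Add(T item)
    {
        int major, offset;
        Locate(count, out major, out offset);
        if (major == minors.Count)
            minors.Add(new T[1 << major]); // allocate only; copy nothing
        minors[major][offset] = item;
        count++;
    }

    public T this[int index]
    {
        get
        {
            int major, offset;
            Locate(index, out major, out offset);
            return minors[major][offset];
        }
    }

    // The minor-array index is the position of the highest set bit of
    // (index + 1); the offset is the remaining low bits. The loop here is
    // O(log n), but a single bit-scan instruction would make it O(1),
    // which is the kind of assembly optimization mentioned above.
    private static void Locate(int index, out int major, out int offset)
    {
        int j = index + 1;
        major = 0;
        while ((j >> (major + 1)) != 0)
            major++;
        offset = j - (1 << major);
    }
}
```

Indexing element i then costs one Locate plus two array lookups, which is why it stays O(1) in principle but carries a bigger constant factor than a plain List.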
So I'd say look at SlimList to see what route you could take, but then use List because that's going to be the most efficient route if speed is important.
Also, if you are going to create a very large list (as you indicated in one of your replies), then I recommend creating lists of lists, rather than a single list. The reason is that memory fragmentation and other limits may prevent you from creating a single contiguous chunk of memory above a certain size (say, 256MB, a limit I've run into before due to fragmentation). That also helps to remedy the array copy problem. The sublists can always be created with a known capacity (e.g., 10,000,000 elements). Then it would only be the "major" list that would grow, and it would not grow very often. You would then only require two index operations to access a BigInteger within your list of lists and you wouldn't hit the memory issues.
|
|
|
|
|
All I care about is speed. I've already implemented the list and it is several orders of magnitude faster.
|
|
|
|
|
aspdotnetdev wrote: Also, if you are going to create a very large list (as you indicated in one of your replies), then I recommend creating lists of lists, rather than a single list. The reason is that memory fragmentation and other limits may prevent you from creating a single contiguous chunk of memory above a certain size (say, 256MB, a limit I've run into before due to fragmentation). That also helps to remedy the array copy problem. The sublists can always be created with a known capacity(e.g., 10,000,000 elements). Then it would only be the "major" list that would grow, and it would not grow very often. You would then only require two index operations to access a BigInteger within your list of lists and you wouldn't hit the memory issues.
I'm already over 4 million items in my list and it increases by an order of three at each loop. I'll have to think about this one since I expect it to grow much, much higher than this.
Thanks!
|
|
|
|
|
aspdotnetdev wrote: The reason is that memory fragmentation and other limits may prevent you from creating a single contiguous chunk of memory above a certain size (say, 256MB, a limit I've run into before due to fragmentation).
What would happen if you don't have a chunk large enough? Does it fail or wait?
aspdotnetdev wrote: The sublists can always be created with a known capacity(e.g., 10,000,000 elements). Then it would only be the "major" list that would grow, and it would not grow very often.
Sadly, I need something much, much larger than this size for my array at its largest point in the loop. I think I'm reaching my limit much earlier than that.
|
|
|
|
|
Bassam Abdul-Baki wrote: What would happen if you don't have a chunk large enough? Does it fail or wait?
You get an OutOfMemoryException.
Bassam Abdul-Baki wrote: I need something much, much larger than this size
Then I think you misunderstood. If you have a "major" list that holds 10,000,000 lists that EACH hold 10,000,000 BigIntegers, that would be a total of 100,000,000,000,000 BigIntegers. If each BigInteger is at least 10 bytes, that's about 1 petabyte (read: 1,000,000 gigabytes) of data. The largest commercial hard drives sold today are about 1,000th of that size. Even some versions of 64-bit operating systems don't go that high in virtual memory.
Also, are you understanding that you would treat this list of lists of BigIntegers as a single collection of BigIntegers? The advantage is that it is physically spread into small chunks across the RAM.
Also, if you need more data (for some strange reason), you could create a third level of lists. So, you'd have a list of 10,000,000 lists of 10,000,000 lists of BigIntegers. There is probably not enough storage on the planet to store all that data.
In case you still aren't getting it, here is basically how the petabyte list would look:
List<List<BigInteger>> ints = new List<List<BigInteger>>();
for (int i = 0; i < 10000000; i++)
{
    List<BigInteger> minorInts = new List<BigInteger>(10000000);
    ints.Add(minorInts);
}
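To then treat it as one flat collection, you index twice. A hypothetical helper (assuming the fixed 10,000,000 sublist capacity above):

```csharp
const int SublistSize = 10000000;

static BigInteger GetItem(List<List<BigInteger>> ints, long index)
{
    // The first index picks the minor list, the second picks the
    // element within it: two O(1) lookups total.
    return ints[(int)(index / SublistSize)][(int)(index % SublistSize)];
}
```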
|
|
|
|
|
Thanks! I'm not getting an OutOfMemoryException yet, so I must be good. With a few changes, I might be able to make my program use an array of plain integers or strings (of a maximum length) instead of BigIntegers. I'm hoping that after a number of iterations the rate of removing numbers exceeds the rate of adding them, so the list eventually shrinks to zero. We'll see. Even if it works as-is, I may change it for the learning experience.
|
|
|
|
|
Normally, from Control Panel I select Regional and Language Options, then go to the Advanced tab to see the language listed under "Language used for non-Unicode programs".
My question is: how can I programmatically retrieve, in C#, the language name shown under "Language used for non-Unicode programs", AND change it to a different choice?
Thanks in advance.
|
|
|
|
|
Unless you're setting up computers to distribute, this is something your app should never change since it changes this setting system-wide, for every app running on the machine.
|
|
|
|
|
You can change it, but I don't know how!
You can get the info with:
[DllImport("kernel32.dll")]
static extern uint GetSystemDefaultLCID();

uint a = GetSystemDefaultLCID();
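To turn that LCID into a readable name, you can feed it to CultureInfo. A sketch (GetSystemDefaultLCID returns the system locale, which is what that Control Panel setting controls; changing it programmatically isn't shown here, and per the reply above it's usually a bad idea anyway):

```csharp
using System;
using System.Globalization;
using System.Runtime.InteropServices;

class SystemLocaleDemo
{
    [DllImport("kernel32.dll")]
    static extern uint GetSystemDefaultLCID();

    static void Main()
    {
        uint lcid = GetSystemDefaultLCID();
        // CultureInfo can translate the LCID into a display name,
        // e.g. "English (United States)" for LCID 1033 (0x0409).
        CultureInfo culture = new CultureInfo((int)lcid);
        Console.WriteLine("{0} - {1}", lcid, culture.DisplayName);
    }
}
```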
|
|
|
|
|
I didn't say you couldn't change it. I said it was a very bad idea TO change it.
|
|
|
|
|
When we create a block inside a text editor, pressing any key makes that block disappear,
and while a block exists, if you press the DEL key, the contents of that block get destroyed.
I don't want this; I want the block to remain persistent. Note that I am talking only
about a text editor, not data manipulation or SQL Server or anything like that; just inside a text editor.
|
|
|
|
|
Sorry, but I really don't understand your question...
You don't want the selected text to be destroyed by pressing Del or any other key? Is that right? What do you mean by a "block" inside a text editor?
But I really don't get what this has to do with C#, or with SQL Server... really strange.
|
|
|
|
|
OK, let me explain more.
When you are typing text, if you press Down Arrow while holding the Shift key, a block will be created.
A BLOCK in text means a selected portion of text, OK? After creating a block, if you press, for example,
Up Arrow or Home, that portion of text (the BLOCK) gets unselected. I don't want this; I want
the block to stay selected.
|
|
|
|
|
Please don't repost questions so quickly. The original question is only a couple of threads below, and it's not very considerate of you reposting just to bump your post up.
|
|
|
|
|
Did you have a question?
Edit:
Wait, wait, what...? You want a read-only section of a text file? Because you're a clumsy typist? Spend more time doing it, slow down, pay attention to what you're doing. Haste makes waste.
P.S. And you know about undo, right?
modified on Monday, August 30, 2010 11:03 PM
|
|
|
|
|
faraz34 wrote: when we create a block inside text editor, by pressing any key, that block disappears, while existing block, if suddenly u press DEL key, content of that block get destroyed. i dont want this
If it's a RichTextBox, then this can be prevented with the SelectionProtected[^] property. You'd use the SelectionStart and SelectionLength properties to define the part that you want to protect, and then you set the SelectionProtected property to true.
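In code that might look something like this (a sketch; richTextBox1 and the selection range are made-up names and values):

```csharp
// Protect part of a RichTextBox so it can't be typed over or deleted.
// Assumes a form with a RichTextBox named richTextBox1 (hypothetical).
richTextBox1.SelectionStart = 10;   // start of the region to protect
richTextBox1.SelectionLength = 25;  // length of the region
richTextBox1.SelectionProtected = true;

// Attempts to edit the protected range now raise the control's
// Protected event instead of modifying the text.
```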
Hope this helps, and my apologies for the rude answers
I are Troll
|
|
|
|
|
I have a collection of objects.
Each object needs to perform some mission.
Is it OK to run all the missions in this way (using Parallel and Thread together)?
Will this be efficient?
Parallel.ForEach(ElementsCollection, element => element.Mission());

public void Mission()
{
    new Thread(new ParameterizedThreadStart(Mission_)).Start(this);
}
|
|
|
|
|
Message Closed
modified 23-Nov-14 5:59am.
|
|
|
|
|
I know, but using Parallel in this way will create fester code ?
|
|
|
|
|
Yanshof wrote: will create fester code ?
It may 'fester'[^] but it won't be faster.
Dave
If this helped, please vote & accept answer!
Binging is like googling, it just feels dirtier.
Please take your VB.NET out of our nice case sensitive forum.(Pete O'Hanlon)
BTW, in software, hope and pray is not a viable strategy. (Luc Pattyn)
|
|
|
|
|
Can you please explain why this will not be faster?
Thanks.
|
|
|
|
|
Creating a thread is an expensive process, and unless the task is very long-running, creating and destroying the thread can often take longer than the task itself.
Once all your cores are busy with threads, any others have to be switched around by the OS. It does this quite efficiently, but it means that some threads aren't actually doing anything until they are allotted some processor time. This switching can actually make an application run slower if too many threads are created.
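In other words, let Parallel.ForEach spread the work over pooled threads and drop the inner new Thread entirely. A sketch using the poster's own names:

```csharp
// Parallel.ForEach already partitions the work across the available
// cores using pooled threads, so Mission can do its work directly.
Parallel.ForEach(ElementsCollection, element => element.Mission());

public void Mission()
{
    // Do the actual work here instead of spawning a new Thread per call;
    // a thread per element would mostly pay creation and context-switch
    // costs rather than doing useful work.
    Mission_(this);
}
```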
|
|
|
|
|
But will using Parallel 'cost' less than using Thread?
Will using Parallel cost less than creating a new thread and using it?
(Assume the machine the application will run on is dual core.)
Does using Parallel not require some 'context switch'?
|
|
|
|
|
Hello,
My C# application checks an online database when it starts up to see whether it is the current version. If the most current version is not being used, it forces a download of the current version (or shuts down), places the new .exe file in the System32 folder, and then closes the application. Now, when the application is run again, it "sees" that the update has been downloaded but is not installed (so far so good). Question is this: how do I shut the program down once it is running (or do I have to?), and also force the new executable to run to update the application? I like the way Adobe PDF Reader does this. Can I duplicate this process with my application? Thank you... Pat
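One common pattern (a sketch, not necessarily how Adobe does it; updaterPath is a made-up parameter) is to start the downloaded updater as a separate process and then exit, so the running .exe is no longer locked and can be replaced:

```csharp
using System;
using System.Diagnostics;
using System.Windows.Forms;

// Hypothetical: launch the downloaded updater, then exit this process so
// it releases the lock on its own .exe and the updater can overwrite it.
static void LaunchUpdateAndExit(string updaterPath)
{
    Process.Start(updaterPath);  // e.g. the downloaded installer .exe
    Application.Exit();          // or Environment.Exit(0) for non-WinForms apps
}
```

The updater itself then copies the new executable into place and optionally relaunches the application when it finishes.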
|
|
|
|
|