|
Hi,
Of course I agree with the true/false syntax preference over 1/0.
I want to add one big caveat: in C, integer values other than 0 and 1 will fail the
if (x == TRUE) test but pass the if (x) test, yielding some very nasty bugs.
So the recommendation might be:
1. to use != FALSE rather than == TRUE
2. to use IS_TRUE(x) and IS_FALSE(x) macros instead
3. to use FALSE and !FALSE rather than FALSE and TRUE (but never to compare with == !FALSE)
None of these can be enforced by the compiler, however.
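To make the pitfall concrete, here is a small illustration of recommendations 1 and 2 (just a sketch, compiles as C or C++; the macro names are the ones proposed above):

#include <stdio.h>

#define FALSE 0
#define TRUE  1
#define IS_TRUE(x)  ((x) != FALSE)
#define IS_FALSE(x) ((x) == FALSE)

int main(void)
{
    int x = 2;  /* "true" in the if (x) sense, but not equal to TRUE */
    if (x == TRUE)  printf("x == TRUE\n");   /* NOT printed: 2 != 1 */
    if (x != FALSE) printf("x != FALSE\n");  /* printed */
    if (IS_TRUE(x)) printf("IS_TRUE(x)\n");  /* printed */
    return 0;
}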
|
Thanks for the input...good to know that I am not the only one who thinks that way about Boolean logic.
[ Don't do today what can be done tomorrow ]
|
Hi All,
I am looking for a very efficient solution to this problem. Here is my problem description: I have several databases
from which I have to pull data (the databases all differ in physical design), and insert it into a standardized
common database. Let me give you an example:
Database - 1
Table - 1
Col_1 - int
Col_2 - text
Col_3 - float
Database - 2
Table - 2
Col_1 - int
Col_2 - int
Col_3 - float
Col_4 - text
Col_5 - text
Common database
col_1 (id) - int
col_2 - int
col_3 - float
col_4 - text
So the mapping between Database 1 and the common database would be
col_1 --> col_2
col_2 --> col_4
col_3 --> col_3
and between Database 2 and the common database would be
col_1 --> col_2
col_2 --> col_2
col_3 --> col_3
col_4 --> col_4
col_5 --> col_4
How can I achieve this? One more thing: the mapping can be changed at runtime. Something like the sketch below is what I have in mind.
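To illustrate, here is roughly the kind of runtime-changeable lookup I mean (a hypothetical C++ sketch, not tied to any particular database API; all names are made up):

#include <map>
#include <string>
#include <utility>

// (source database, source column) -> target column in the common schema.
using ColumnMap = std::map<std::pair<std::string, std::string>, std::string>;

std::string resolve(const ColumnMap& m, const std::string& db, const std::string& col)
{
    auto it = m.find({db, col});
    return it == m.end() ? std::string() : it->second;  // empty string = unmapped
}

int main()
{
    // This table could live in a config file or a metadata table, and be
    // reloaded whenever the mapping changes at runtime.
    ColumnMap m;
    m[{"Database1", "Col_1"}] = "col_2";
    m[{"Database1", "Col_2"}] = "col_4";
    m[{"Database1", "Col_3"}] = "col_3";
    return resolve(m, "Database1", "Col_2") == "col_4" ? 0 : 1;
}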
Thanks,
Ashish Patel
|
Well - for a start, you might want to try posting in only one forum. Go ahead, pick one.
|
Hi.
I wish to create a 'circular buffer' that one thread can write to and another read from. The data is going into the buffer very fast, and there is a lot of it. I will probably use malloc() to create the buffer, and I want it to be large; but given the speed at which data goes in and out, anything that slows the whole process down will result in an incomplete stream being read out of the buffer.
I am very worried about using too large a buffer and the memory being swapped out to a page file. Could someone please tell me how likely this is, when it will happen, how I can prevent it, the largest memory allocation I can create, etc.?
Thank you all!!
|
For the record, I am using VC++ (native). Also, it looks as though I may have posted in the wrong category; sorry.
|
Hi,
please elaborate on the context: what is large? what is fast? And what kind
of data is it (byte-oriented, record-oriented, fixed record size...)?
|
The actual reading & writing is easy. The data is of fixed length, so there are no problems coding the buffer itself; I just really can't afford any lost time due to paging or hard-drive accesses.
The data will be stored in 13-byte blocks, with data being added at about 53 MB per second. This data will simultaneously be read out and written to hard disk; the aim is to go as long as possible before the buffer fills up and data/stream integrity is lost.
Thank you.
|
Hi,
there seems to be no need for just one (large) buffer.
This is how I would go about it:
- pre-allocate a number N of memory blocks of 52KB each (yes, exactly 52*1024 = 53,248 bytes: that is both 4,096 records of 13 bytes and 13 pages of 4KB);
- have two queues, one with empty buffers, one with full buffers;
- initialize with all the available buffers in the "empty queue";
- let the producer take an empty buffer, and fill it up completely, before
putting it in the full queue;
- let the consumer take a full buffer, empty it, and return it to the empty queue;
Take care of performance by:
- not using function calls in producer/consumer other than the ones absolutely
necessary, including one queue_get and one queue_put;
- using a single disk I/O function that transfers an entire buffer at a time
(that's why a multiple of 4KB is best), and using the lowest-level I/O function
available; in C that might be fread/fwrite or even read/write, I am not sure;
- trying to keep the buffers in cache (although they probably would have to
be copied to/from memory to perform the disk's DMA); that's why I do not
insist on having either large buffers or many buffers; I would try not to exceed 1 MB
in total;
- possibly: not using a locking queue; a queue that serves one producer and
one consumer can be devised to lock only in extreme conditions (that is,
full and empty), but not while putting/getting an item (use a circular buffer,
a get and a put index; only one party is allowed to write to the get index,
the other to the put index);
- possibly and somewhat tricky: play around with producer/consumer priorities:
if there are more empty buffers, raise the producer; if there are more full
buffers, lower the producer. Or something similar.
- experimenting with N: this is your one degree of freedom; you can add memory
to keep it alive longer (maybe with less cache efficiency), that is, to overcome
longer disturbances due to external causes.
If you do this right, I would not be surprised if you could make it run "forever",
since 53 MB/s is well below today's disk bandwidth. The thing that probably
matters most is what else the PC is doing (networks, optical disks, etc.; turn
them off as much as possible!).
I would not be concerned about the paging mechanism inside the PC; it knows
how to handle bigger jobs than this. You would only be using around 1 MB of
memory, and all your use is rather static and in multiples of 4KB, so
don't worry.
One final remark: I would first try a subset of the above to get things
working, so I could observe how well it already does, and then decide on further
improvements; a minimal sketch of the queue scheme follows below.
Good luck, and don't hesitate to post more questions, or results...
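Here is that sketch of the two-queue scheme (C++11 threads for brevity; in native VC++ you would use _beginthreadex or CreateThread instead). The buffer count, file name and acquisition call are placeholders, error handling is omitted, and the busy-wait loops would deserve a short sleep or back-off in real code:

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

const size_t BUF_SIZE = 52 * 1024;  // 4,096 records of 13 bytes = 13 pages of 4KB
const size_t N = 16;                // number of buffers: the one tuning knob

// Single-producer/single-consumer ring of buffer pointers: one side only
// writes 'tail', the other side only writes 'head', so no lock is needed.
struct Ring {
    std::vector<char*> slots;
    std::atomic<size_t> head, tail;
    explicit Ring(size_t n) : slots(n + 1), head(0), tail(0) {}
    bool put(char* b) {
        size_t t = tail.load(std::memory_order_relaxed);
        size_t nt = (t + 1) % slots.size();
        if (nt == head.load(std::memory_order_acquire)) return false;  // full
        slots[t] = b;
        tail.store(nt, std::memory_order_release);
        return true;
    }
    char* get() {
        size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire)) return 0;       // empty
        char* b = slots[h];
        head.store((h + 1) % slots.size(), std::memory_order_release);
        return b;
    }
};

int main() {
    Ring emptyQ(N), fullQ(N);
    std::vector<std::vector<char> > pool(N, std::vector<char>(BUF_SIZE));
    for (size_t i = 0; i < N; ++i) emptyQ.put(pool[i].data());

    std::thread producer([&] {
        for (;;) {
            char* b = emptyQ.get();
            if (!b) continue;              // no empty buffer: this is where data would be lost
            // acquire_block(b, BUF_SIZE); // hypothetical call that fills the buffer from the device
            fullQ.put(b);                  // cannot fail: buffer count equals ring capacity
        }
    });
    std::thread consumer([&] {
        std::FILE* f = std::fopen("capture.dat", "wb");
        for (;;) {
            char* b = fullQ.get();
            if (!b) continue;                // nothing to write yet
            std::fwrite(b, 1, BUF_SIZE, f);  // one whole buffer per disk call
            emptyQ.put(b);
        }
    });
    producer.join();  // runs until killed; a real program would add a stop flag
    consumer.join();
}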
|
Thank you, this gave me quite a few ideas to think about. A previous attempt by someone else two years ago managed a run time of about 20 seconds with half as much data; I need to increase the run time by at least 200% and double the data rate. The system will be the same to start with, but I am going to try to get it upgraded.
Thank you once again.
|
I hope you're running it on a system with high-speed hard drives or a RAID array. A 7200 RPM drive only has a sustained transfer rate of about 66 MB/s. You'd either need a RAID array striped over 2+ disks (with a real hardware controller) or a 15k RPM drive to hit 106 MB/s sustained. If you're doing reads and writes concurrently, seek time will lower the maximum throughput further.
--
You have to explain to them [VB coders] what you mean by "typed". their first response is likely to be something like, "Of course my code is typed. Do you think i magically project it onto the screen with the power of my mind?" --- John Simmons / outlaw programmer
|
Hi.
The system is currently using RAID striping across two IDE disks. The idea is for a batch system, so data is only going to be written to start with; reading will come after the job has finished (initially at least, depending on the performance obtained).
I will be trying to get faster hard drives for the project; I'm not sure how funds are at the moment.
I am having a hard time obtaining sustained drive data rates; where should I be looking?
Thank you.
|
Mr Simple wrote: I am having a hard time obtaining sustained drive data rates; where should I be looking?
Dunno; IIRC the 66 MB/s number is a benchmarking average. I'd assume the performance of single drives would scale linearly when striped.
--
You have to explain to them [VB coders] what you mean by "typed". their first response is likely to be something like, "Of course my code is typed. Do you think i magically project it onto the screen with the power of my mind?" --- John Simmons / outlaw programmer
|
Have you considered using shared memory? There are implementations of circular buffers using shared memory, and they are really fast. If speed is what you need, I think you should consider this solution.
|
Hi,
I agree shared memory is fast, but I fail to see how a two-process approach
with shared memory would be preferable to a two-thread, single-process approach
using local memory.
With two processes, you need some interprocess communication (you might use
part of the shared memory for that), but you also have to take care of proper
startup sequencing, and of the possibility of one process exiting for whatever reason
without the other process knowing it...
|
Yes, I agree with you on the interprocess communication, but shared memory was only a suggestion.
Maybe the solution requires the existence of two separate processes for some reason; then shared memory may be a good choice. I am not arguing it will be the best, though.
|
I am working with a device that pulls vital information (BP, pulse, etc.) into a webpage, to be saved into a database. I placed the application code and the SDK for the device's software on one computer, and the SDK alone on a second computer. If I plug the device into the machine with the code, the code successfully identifies the device and pulls the information into the webpage on localhost. If I plug the device into the other computer and navigate to the localhost of the machine running the code, it cannot locate the device.
What I need to be able to do is load the .xml file (the file in which the device information is stored) from the client machine and pull the device data into the
webpage.
I have a connect property in the code that locates the XML file. I think I need to get it to read the file from the client machine and process the data via the client machine.
Is this possible, and could you point me down a path that will help me achieve this: books, articles, etc.?
|
Please help! I need to write Java code that uses a hashtable to create a CD catalogue. I've used the artist name as the key and the CD as the value; the CD is a separate class with instance variables artistname, title and price. The problem I'm having is storing each new entry in alphabetical order. My current addCD method is
public void addCD(CD cd) {
    put(cd.getArtistname(), cd);  // assumes this class extends java.util.Hashtable
}
However, this does not store entries in alphabetical order in my hashtable.
Another problem I'm having is collisions: one artist has many CDs, so how can I get at the other CDs stored by the same artist? I've read that hashtables deal with collisions automatically by storing entries in a linked list, but my hashtable only returns the last CD entry that I've added.
Any suggestions would be greatly appreciated.
|
Hi,
I am not a Java specialist, but I assume its collections are quite similar to
the ones in .NET (they would be identical if you were referring to J# instead
of Java).
In .NET, a Hashtable holds a lot of key/value pairs; it applies a lot of tricks
to handle large collections very fast (that is: given a key, find the value).
It does not store things in alphabetical order. It does include all the necessary
logic to handle hash collisions (that is, different keys producing the same hash);
trying to enter two key/value pairs with the same key results in an exception.
If you want to traverse a collection alphabetically, you must have an ordered
collection: one that either sorts by itself (as SortedList does) or supports
an explicit Sort operation (as ArrayList and List do).
So one often ends up with two parallel collections, say a Hashtable for
fast key->value translation and a SortedList for listing the keys alphabetically.
Hope this helps.
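Since I can't vouch for the exact Java API, here is the idea sketched in C++ terms: an ordered map whose values are lists solves both problems at once (the keys stay sorted, and one artist can hold many CDs). In Java, I believe TreeMap plus ArrayList play the same roles, but do double-check.

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct CD { std::string artist, title; double price; };

int main() {
    // std::map keeps its keys sorted; the vector holds all CDs of one
    // artist, so several CDs per artist are handled explicitly.
    std::map<std::string, std::vector<CD> > catalogue;

    catalogue["Beatles"].push_back(CD{"Beatles", "Abbey Road", 9.99});
    catalogue["Beatles"].push_back(CD{"Beatles", "Revolver", 8.99});
    catalogue["ABBA"].push_back(CD{"ABBA", "Arrival", 7.49});

    // Traversal comes out in alphabetical order of artist automatically.
    for (const auto& entry : catalogue)
        for (const CD& cd : entry.second)
            std::cout << entry.first << ": " << cd.title << "\n";
}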
|
Hey all,
I'm looking to store sensitive data in an Oracle database. Oracle has some neat encryption features that allow you to send a key for decryption purposes. The application ties into Active Record for authentication and permission management.
I'm trying to come up with a way to store individuals' personal records in such a way that they are accessible to others (using groups/permissions) but are not kept in plain text. How do you encrypt something that is capable of being decrypted (3DES or AES) while not storing the decryption key somewhere accessible to a developer or DBA? Leaving a certificate in the development tree isn't an option either, because of the nature of the company (no real passwords in source control).
Any thoughts?
Best,
Jon Lebensold
|
Have you come up with any solution to this?
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
Well, the project is on hold, but here's what I've come up with so far:
You store a plaintext string (e.g. "helloworld") in the web.config.
You then set up IIS so that the application doesn't recycle its memory.
You store a general decryption key in the AppState of the application (essentially keeping the key only in memory).
You use that key to encrypt your plaintext string (so "helloworld" becomes whatever your AES encryption turns it into, with or without a salt) and then place THAT in the web.config as well. This way, the actual key isn't stored on the server's disk, in the config files, or in the database.
I do know, however, that Microsoft has a tool for encrypting parts of your web.config, but I haven't looked into it (I only discovered it after proposing the architecture above).
When the application first loads, it checks whether the key in AppState can encrypt "helloworld" to match the encrypted string in the web.config; a sketch of that check follows below.
In terms of assigning permissions to different users, you could use this same key to encrypt all the strings in the database, including a one-to-many mapping of passwords to users with permission to see them.
Let me know if you come up with anything better!
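The startup check itself is language-neutral: encrypt the known plaintext with the candidate key and compare against the stored ciphertext. A sketch in C++ using OpenSSL's EVP API (in ASP.NET you would use System.Security.Cryptography instead; error handling is omitted, and the fixed-size key/IV parameters are placeholders):

#include <openssl/evp.h>
#include <string>
#include <vector>

// AES-128-CBC encryption of 'plain' under 'key'/'iv'.
std::vector<unsigned char> encrypt(const std::string& plain,
                                   const unsigned char key[16],
                                   const unsigned char iv[16])
{
    std::vector<unsigned char> out(plain.size() + 16);  // room for CBC padding
    int len1 = 0, len2 = 0;
    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, &out[0], &len1,
                      (const unsigned char*)plain.data(), (int)plain.size());
    EVP_EncryptFinal_ex(ctx, &out[0] + len1, &len2);
    EVP_CIPHER_CTX_free(ctx);
    out.resize(len1 + len2);
    return out;
}

// The startup check: accept the in-memory key only if it reproduces the
// ciphertext stored in the config.
bool keyIsValid(const unsigned char key[16], const unsigned char iv[16],
                const std::vector<unsigned char>& storedCiphertext)
{
    return encrypt("helloworld", key, iv) == storedCiphertext;
}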
|
How would you sort an obscene amount of data? Say 500 gigabytes or more? Any brilliant ideas?
|
Using a highly optimised sort algorithm suited to the data and disk-based operations :P unless you have 1 TB of RAM lying around. The usual disk-based approach is an external merge sort: sort RAM-sized chunks into temporary "run" files, then merge the runs (see the sketch below).
I suppose a database could do it too.
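Concretely, a minimal external merge sort for a flat file of 64-bit integers might look like this (the chunk size, file names and record type are placeholders, and real code would buffer the merge-phase I/O and check for errors):

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <queue>
#include <string>
#include <vector>

int main() {
    const size_t CHUNK = 64 * 1024 * 1024;  // values per run: about 512 MB of RAM

    // Phase 1: read RAM-sized chunks, sort each, write it out as a sorted run.
    // 500 GB in 512 MB runs gives about a thousand run files.
    std::FILE* in = std::fopen("input.bin", "rb");
    std::vector<std::string> runs;
    std::vector<int64_t> buf(CHUNK);
    size_t n;
    while ((n = std::fread(&buf[0], sizeof(int64_t), CHUNK, in)) > 0) {
        std::sort(buf.begin(), buf.begin() + n);
        std::string name = "run" + std::to_string(runs.size()) + ".bin";
        std::FILE* out = std::fopen(name.c_str(), "wb");
        std::fwrite(&buf[0], sizeof(int64_t), n, out);
        std::fclose(out);
        runs.push_back(name);
    }
    std::fclose(in);

    // Phase 2: k-way merge with a min-heap of (value, run index) pairs.
    typedef std::pair<int64_t, size_t> Item;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item> > heap;
    std::vector<std::FILE*> fps(runs.size());
    for (size_t i = 0; i < runs.size(); ++i) {
        fps[i] = std::fopen(runs[i].c_str(), "rb");
        int64_t v;
        if (std::fread(&v, sizeof v, 1, fps[i]) == 1) heap.push(Item(v, i));
    }
    std::FILE* out = std::fopen("sorted.bin", "wb");
    while (!heap.empty()) {
        Item top = heap.top();
        heap.pop();
        std::fwrite(&top.first, sizeof top.first, 1, out);  // emit the smallest value
        int64_t v;                                          // then refill from its run
        if (std::fread(&v, sizeof v, 1, fps[top.second]) == 1)
            heap.push(Item(v, top.second));
    }
    std::fclose(out);
    for (size_t i = 0; i < fps.size(); ++i) std::fclose(fps[i]);
}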
|