|
Hi,
I'm developing a DLL in C#.
The DLL opens a port, transmits and receives information. It works over RS-232, USB and Ethernet.
At the moment I'm trying USB communication and I have a problem.
C# shows me an error:
The name 'InvokeRequired' does not exist in the current context.
The name 'Invoke' does not exist in the current context.
I read that DLLs don't recognize Invoke and InvokeRequired, and I have this code:
private void usb_OnSpecifiedDeviceRemoved(object sender, EventArgs e)
{
    if (InvokeRequired)
    {
        Invoke(new EventHandler(usb_OnSpecifiedDeviceRemoved), new object[] { sender, e });
    }
    else
    {
        // handle the device removal here
    }
}
Is there another code to replace the above?
Thanks.
|
|
|
|
|
No offense, but your code looks... strange.
It looks like you're trying to re-fire the event on the UI thread if "required".
You could just invoke "whatever is in the else clause", but that boils down to the same thing.
Except that Invoke doesn't exist, apparently. Does IntelliSense show Invoke and/or InvokeRequired? Is this code inside a form? (If not, why are you using it?)
If you're trying to re-fire the event on "a thread other than the UI thread", you could give that thread a queue of delegates that it periodically checks (and calls if there are any). I know of no other way to "inject" a call into a thread nicely, but if there is one, someone else will post it here.
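For what it's worth, Invoke and InvokeRequired are members of System.Windows.Forms.Control, which is why they don't exist in a plain class library. One commonly used alternative (this is a sketch only; the class and member names here are invented, not the OP's actual API) is to capture the SynchronizationContext of the thread that created the library object and marshal the event through it:

```csharp
using System;
using System.Threading;

// Sketch of a library class that raises its event back on the creating
// thread. UsbWatcher and SimulateRemoval are made-up names for illustration.
public class UsbWatcher
{
    private readonly SynchronizationContext _context;

    public event EventHandler SpecifiedDeviceRemoved;

    public UsbWatcher()
    {
        // On a WinForms UI thread this captures the WindowsFormsSynchronizationContext;
        // on a plain worker thread we fall back to the default context (Send runs inline).
        _context = SynchronizationContext.Current ?? new SynchronizationContext();
    }

    // Called from whatever worker thread detects the removal.
    public void SimulateRemoval()
    {
        _context.Send(delegate
        {
            EventHandler handler = SpecifiedDeviceRemoved;
            if (handler != null) handler(this, EventArgs.Empty);
        }, null);
    }
}
```

Send blocks like Control.Invoke does; use Post instead if you want fire-and-forget semantics like BeginInvoke.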
|
|
|
|
|
|
How can I make a software lock (or activation code) for web applications?
Your prompt reply will be appreciated.
regards
|
|
|
|
|
I assume you would need a database of users and an encryption algorithm for generating valid activation codes. Then track activations per user.
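One possible way to do the "algorithm for generating valid codes" part (this is an assumed design, not a spec: the HMAC approach, the 8-byte truncation, and the names below are all my own choices) is to derive the activation code from the user's identity with an HMAC keyed by a server-side secret, and verify by recomputing:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: activation codes derived from the user id via a keyed hash.
// Anyone without the server secret cannot forge a valid code.
public static class ActivationCodes
{
    public static string Generate(string userId, byte[] serverSecret)
    {
        using (HMACSHA256 hmac = new HMACSHA256(serverSecret))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(userId));
            // Truncate to the first 8 bytes and format as a readable hex code.
            return BitConverter.ToString(hash, 0, 8).Replace("-", "");
        }
    }

    public static bool Verify(string userId, string code, byte[] serverSecret)
    {
        // Recompute and compare; the server never needs to store the codes.
        return Generate(userId, serverSecret) == code;
    }
}
```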
Regards,
Thomas Stockwell
Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
Visit my Blog
|
|
|
|
|
Please do not cross-post.
Christian Graus
Driven to the arms of OSX by Vista.
|
|
|
|
|
|
|
PIEBALDconsult wrote: .IsAlive?
this.Thread.IsDead();
|
|
|
|
|
Alas, I hardly got to know it.
|
|
|
|
|
When I am using switch statements in C#, what is the most efficient type for the switch variable?
Normally I have fewer than 10 cases, which makes an Int32 seem like overkill. But am I correct in assuming that, since the machine runs 32 bits, an Int32 may be more efficient than trimming the switch variable down to 16 or even 8 bits, which might cause extra code steps?
Cheers, Bruce
|
|
|
|
|
Probably, test it and let us know.
|
|
|
|
|
Hi Bruce,
in general, integer operations are fastest at the native word size, meaning 8-bit or 16-bit operations are not faster than 32-bit operations on modern CPUs.
This tells us byte and short mainly exist to support compatibility with existing data structures, files, etc.; and of course to economize on memory when using large numbers of them, as in arrays.
BTW: this may not be very easy to test, since (1) programming languages use the native size for literal values anyway, and (2) the compiler will often use int operations even though byte or short were coded, when those ints are equivalent to what you actually coded.
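You can see point (2) directly in the C# language rules: arithmetic on byte and short operands is defined in terms of int, so the operands are promoted and the result needs a cast to get back to the small type. A small sketch:

```csharp
using System;

class Promotion
{
    static void Main()
    {
        byte a = 200, b = 100;
        // a + b is evaluated as int arithmetic; the sum 300 fits fine in an int,
        // and assigning it back to a byte requires an explicit cast, which is a
        // hint that the actual work happens at int width anyway.
        int sum = a + b;
        byte wrapped = (byte)(a + b); // 300 truncated to 8 bits: 300 & 0xFF == 44
        Console.WriteLine(sum);       // 300
        Console.WriteLine(wrapped);   // 44
    }
}
```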
Luc Pattyn [Forum Guidelines] [My Articles]
- before you ask a question here, search CodeProject, then Google
- the quality and detail of your question reflects on the effectiveness of the help you are likely to get
- use the code block button (PRE tags) to preserve formatting when showing multi-line code snippets
modified on Sunday, June 12, 2011 8:35 AM
|
|
|
|
|
Luc Pattyn wrote: the compiler will often use int operations even though byte or short were coded
That's what I would expect; a bunch of up-casting to int.
|
|
|
|
|
Luc Pattyn wrote: economize on memory
Interesting point - I've never delved that deep into the CPU architecture. Are things always word aligned or byte aligned, or is it variable? If it's word aligned then there would be no advantage at all to using anything less than the native size.
Dave
BTW, in software, hope and pray is not a viable strategy. (Luc Pattyn)
Visual Basic is not used by normal people so we're not covering it here. (Uncyclopedia)
|
|
|
|
|
Structs can be laid out exactly as required. Each element of an array of bytes comes immediately after the previous. Etc.
|
|
|
|
|
The CPU (assuming x86) doesn't require much; SSE requires 16-byte alignment unless you don't care about roughly 100% overhead.
But the Windows (and Linux, too) ABI requires alignment of all things to at least their size, and of the stack to "twice the pointer size" (depends on bitness, obviously).
The individual elements in an array of bytes can be byte-aligned, but the starting address will usually be dword (or more) aligned.
And Luc is correct of course, in general.
Some instructions such as div and idiv have a latency and throughput depending on the value of the result, and small types can lead to smaller values (thus faster computation), obviously that is not guaranteed since it depends on the actual values.
For floats, lower precision makes operations such as fdiv faster.
|
|
|
|
|
DaveyM69 wrote: Are things always word aligned
There is the notion of "natural alignment", which states each item should be aligned to its size,
so 2B shorts have even addresses, 4B ints have addresses that are multiples of 4, etc. (although items larger than the int size (long and double) don't need to be aligned, SIMD data does).
A struct by default would use padding to achieve that when necessary, i.e. it would insert dummy bytes where required. To reduce the size bloat, the suggestion is to order members from largest to smallest.
The linker and the run-time will allocate objects at a multiple of 8 or even 16B, so a struct that would only need 6B will effectively be laid out 8B apart. Warning: some Win32 APIs expect an array of structs with odd sizes, such as 6. If you don't want any padding, use Marshal attributes with explicit offsets.
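The padding behaviour is easy to observe from C# with Marshal.SizeOf (a small sketch; the struct names are made up): with the default sequential layout the int is aligned to 4 bytes, so a byte followed by an int occupies 8 bytes, while Pack = 1 removes the padding.

```csharp
using System;
using System.Runtime.InteropServices;

// Default layout: 3 padding bytes are inserted after B so I is 4-aligned.
struct Padded
{
    public byte B;
    public int I;
}

// Pack = 1: no padding, the int follows the byte immediately (5 bytes total).
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct Packed
{
    public byte B;
    public int I;
}

class AlignmentDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(Padded))); // 8
        Console.WriteLine(Marshal.SizeOf(typeof(Packed))); // 5
    }
}
```

LayoutKind.Explicit with FieldOffset attributes gives full control when a Win32 API expects one of those odd-sized structs.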
Luc Pattyn [Forum Guidelines] [My Articles]
- before you ask a question here, search CodeProject, then Google
- the quality and detail of your question reflects on the effectiveness of the help you are likely to get
- use the code block button (PRE tags) to preserve formatting when showing multi-line code snippets
modified on Sunday, June 12, 2011 8:36 AM
|
|
|
|
|
Totally correct!
But these days, with an average CPU power of 2 GHz, it is not really necessary to 'bitfuck'. The problem comes when you program for external devices, though today most PDAs and cell phones are mainly short of memory rather than CPU power.
It gets tough when you program on microcontrollers, but then you shouldn't be using C#, but assembler!
|
|
|
|
|
Some forms of "bitfucking" are becoming more important though: with the widening CPU-RAM speed gap it becomes increasingly important not to touch more memory in inner loops than the size of the L2 cache (needing less is of course even better). If "bitfucks" are needed to accomplish that, then so be it.
And since the conditions for store-forwarding are very restrictive, extracting a smaller type from within a larger type at a non-aligned offset should always be done with a "bitfuck"; performance will suffer if you write the value to memory and read a smaller, unaligned part of it back. The reverse, inserting a small type into a large type, is even worse: it is never store-forwarded, so "bitfucking" is always needed unless the code is not in a performance-critical section.
|
|
|
|
|
You are right, and you should always bitfuck wherever you can, I agree with that. Maybe it's because I like it, maybe because I started with microcontrollers.
But if we do the arithmetic, we will see that it doesn't really matter which types you use.
An average to small cache size is 1 Gb. Let's say your program will use half of it, which leaves you 500 M of bytes. If you're an efficient programmer you can never make your program use the whole 500 M. Even if you store numbers that would fit in a byte in an Int128, it would take an array of (500/16 =~) 30 million of them to fill it.
The only way to fill your RAM is when you're busy with graphics. And even then it's not necessary to bitfuck, because Microsoft made some good libraries which will take care of the memory problem.
In conclusion, I think we can say that bitfucking is fun, but not really necessary if you're just writing efficient code.
|
|
|
|
|
Deresen wrote: average to small cache size is 1 Gb
Where did you get that information? The biggest cache I've seen so far is 12MB (as 2x6MB) of L2. That is not so much, and every so often another program will come along and trash it (if you don't do it yourself).
|
|
|
|
|
My mistake, I was thinking about RAM memory. <shame on me>
|
|
|
|
|
Well, then 1GB is indeed small.
But RAM is slow (compared to the CPU); a cache miss can easily cost 100 cycles, long enough to justify doing complex calculations just to avoid the miss, and plenty of time for 150 to 250 instructions.
That makes me wonder what the theoretical maximum number of instructions in 100 cycles is (on Core2).
500 if looking only at the predecoder specs: macro-fusion can fuse 2 instructions but can only happen once per cycle, so 5 instructions (3 of which must be 1 µop), and the size of such a "block" should satisfy N*size = 16 (to never cross a boundary), and no 66H or 67H prefixes should occur anywhere.
But then looking at the rest: the sequence must not have any dependency chains, not all instructions are perfectly pipelined, there are only 3 "normal" ports (0, 1 and 5), and even register reading can be a bottleneck.
Only 6 µops per cycle are allowed, but that includes memory reads/writes (bringing us down to 400, except for µop fusion).
The predecoder throughput would matter less (only for the first iteration) if we were executing a small (less than 4 times 16 bytes) loop.
And I'm not even going to mention the rest.
The best throughput of any one instruction is 3/cycle (a stream of NOPs, for example), so it should be possible to do (slightly) better than that, right?
This is too complex; I'll leave it to the pros.
|
|
|
|
|
I don't know about faster, but constants would improve maintainability.
Even if there is no speed decrease, it is poor practice to use strings as a switch variable.
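One way to follow that advice (a sketch with invented names, just to illustrate the idea) is to switch on an enum rather than on raw strings, which gives you compile-time checking of the case labels as a bonus:

```csharp
using System;

enum Command { Start, Stop, Pause }

class SwitchDemo
{
    // Switching on an enum: the compiler verifies every case label,
    // unlike string cases where a typo silently falls through to default.
    static string Describe(Command c)
    {
        switch (c)
        {
            case Command.Start: return "starting";
            case Command.Stop:  return "stopping";
            default:            return "paused";
        }
    }

    static void Main()
    {
        Console.WriteLine(Describe(Command.Stop)); // stopping
    }
}
```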
|
|
|
|