|
Thanks. With SS_NOTIFY in place I can handle ::OnCtlColor, use CWnd::GetDlgItem (to get the proper control), and call SetBkColor, SetTextColor,
and SetWindowText on the controls before the dialog that is the parent of the controls is displayed.
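Roughly what I have in mind, as a sketch (CMyDlg and IDC_STATUS are just placeholder names, and ON_WM_CTLCOLOR() needs to be in the message map):

HBRUSH CMyDlg::OnCtlColor(CDC* pDC, CWnd* pWnd, UINT nCtlColor)
{
    HBRUSH hbr = CDialog::OnCtlColor(pDC, pWnd, nCtlColor);
    if (pWnd == GetDlgItem(IDC_STATUS))          // only the control I care about
    {
        pDC->SetBkColor(RGB(0, 0, 128));         // text background
        pDC->SetTextColor(RGB(255, 255, 0));     // text colour
    }
    return hbr;
}

BOOL CMyDlg::OnInitDialog()
{
    CDialog::OnInitDialog();
    GetDlgItem(IDC_STATUS)->SetWindowText(_T("Ready"));  // set before the dialog is shown
    return TRUE;
}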
Thanks
|
|
|
|
|
I have been using the Windows Event Log to catch events in a C++ project on VS2008 for a couple of years.
However, recently (as in the last couple of days) I cannot catch them on Windows 8.1. I have no problem with Windows 7 or Vista.
The subscribe function "EvtSubscribeFn" returns success, but the callback is never called.
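Roughly how I set up the subscription (simplified sketch; the "Application" channel and "*" query are just examples here):

#include <windows.h>
#include <winevt.h>
#pragma comment(lib, "wevtapi.lib")

// Callback invoked by the subscription for each delivered event.
DWORD WINAPI SubscriptionCallback(EVT_SUBSCRIBE_NOTIFY_ACTION action,
                                  PVOID /*context*/, EVT_HANDLE hEvent)
{
    if (action == EvtSubscribeActionDeliver)
    {
        // render / process hEvent here
    }
    return ERROR_SUCCESS;
}

bool StartSubscription()
{
    // Subscribe to future events on the channel.
    EVT_HANDLE hSub = EvtSubscribe(NULL, NULL, L"Application", L"*",
                                   NULL, NULL, SubscriptionCallback,
                                   EvtSubscribeToFutureEvents);
    return hSub != NULL;    // on failure, GetLastError() has the reason
}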
|
|
|
|
|
Could it be a "permissions" issue? Could the event log be full?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
|
|
|
|
|
Hi David, it is not a permissions issue. The code works on both Windows Vista and Windows 7. Also, the same code had been working on Windows 8.1 for about a year, which is why this makes no sense. We have checked the latest Windows updates, but found nothing.
|
|
|
|
|
Hi all,
this is my first question in this forum. I'd like to begin it by saying hello to everyone.
I am writing a Win32 application that uses multiple sensors, such as the Kinect V2, and a couple of other sensors that complement Kinect's capabilities. These other sensors are interfaced with the PC via the Serial Port. Both the Kinect and the other sensors produce data samples periodically at similar time intervals.
The challenge I face is how to best pull all of these data samples together into my application. What I mean by that is how to best structure my program in order to get the data in efficiently. I think it's a Producer-Consumer type of problem.
The idea I have is to read the data samples from each of the external sources in a separate Reader Thread. Each of these threads would fire an event when a new sample has been received. The Main Thread would pick up these events using the WaitForMultipleObjects() function, as described [here]. This should provide enough synchronization.
The Main Thread, which I call the Producer Thread, would copy all of the arriving data into a custom frame class. This frame would then be pushed onto a FIFO queue. A Consumer Thread would pop and process these custom frames from the FIFO at full speed.
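As a rough illustration of what I have in mind (Frame, PushToFifo and the event handles are just placeholders for my own code):

#include <windows.h>

// Placeholder frame type: in the real program it holds the Kinect multi-source
// frame plus the most recent serial-sensor packets.
struct Frame { /* ... */ };

// Stand-in for the real FIFO push (not shown here).
void PushToFifo(const Frame& f) { (void)f; }

const int NUM_READERS = 2;
HANDLE g_events[NUM_READERS];   // one auto-reset event per reader thread, created at
                                // startup with CreateEvent(NULL, FALSE, FALSE, NULL);
                                // each reader calls SetEvent() on its own event when
                                // a new sample has been stored

void ProducerLoop()
{
    for (;;)
    {
        // Block until every reader has signalled a fresh sample (bWaitAll = TRUE).
        DWORD r = WaitForMultipleObjects(NUM_READERS, g_events, TRUE, INFINITE);
        if (r != WAIT_OBJECT_0)
            break;              // failure or abandoned handle

        Frame f;                // copy the latest samples into a custom frame...
        PushToFifo(f);          // ...and push it onto the FIFO for the consumer
    }
}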
Yes, I realise that this application is resource-hungry. The Consumer Thread pops frames at a lower rate than the frames are pushed onto the FIFO. The PC I am developing it on should be able to handle it, with 16GB of RAM and the latest i7 CPU. Real-time operation is not required. Also, the program should complete before RAM fills up.
I wonder if my approach, and more importantly my thought process, is correct. Would there be a better approach for this type of application? I don't seem to see one.
Thanks,
MW
|
|
|
|
|
Quote: The Consumer Thread pops frames at a lower rate than the frames are pushed onto the FIFO
Why?
Quote: Also, the program should complete before RAM fills up
It all depends on what the program should do with the collected data. You know that if (as you stated) the consumer is slower than the producer, then memory will eventually fill up.
|
|
|
|
|
The reason I say the consumer pops frames at a lower rate than the producer pushes them onto the FIFO is that the consumer must process each frame, and that processing is time-consuming. In fact, the producer pushes approximately 3 frames onto the FIFO in the time it takes the consumer to process 1 popped frame.
I've written a couple of test programs to measure the most time-consuming tasks, as well as how many frames would actually be needed to complete the task. That is why I made the statements above.
That's not the main concern, though. I am more curious about the general approach to designing the application. I've got the individual bits and pieces working; next, they need to be put together and integrated. I wonder how best to achieve that.
|
|
|
|
|
It looks like a fairly good design to me.
If you cannot speed up the frame processing, then you might consider dropping some of them (if it is a viable option).
|
|
|
|
|
If he produces three for every one he processes, I'd imagine he's going to have to figure out a way to process them faster, or drops will have to occur.
|
|
|
|
|
Unless memory is enough to buffer all the needed data.
|
|
|
|
|
Assuming it's not a process that goes on forever. Usually with sensors it's an ongoing process; they're always producing data, so if you're not using all of it you have to do some sort of smart data reduction (i.e. drop if it makes sense, decimate if it makes sense).
|
|
|
|
|
The only thing that isn't clear is whether you need a single "reader thread", implying you'll read all the sensors in series, one after the other. This can be optimized if the read operations can be done in parallel. It's really application specific, so I couldn't tell you whether you can or can't do that.
On the processing side, if each frame is identical and is processed similarly, the frames can be handled using a thread pool scheme, where you have a set of worker threads that process data when it is available. That works really well in cases where the processing required on the data is identical (i.e. the work function is the same, but you can have multiple independent threads working in parallel on independent data). Again, parallel processing here is application dependent.
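Just to illustrate the idea, a bare-bones version of that scheme using C++11 standard threads (Frame and ProcessFrame are placeholders):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { /* whatever your frame holds */ };

std::queue<Frame> g_fifo;
std::mutex g_mtx;
std::condition_variable g_cv;
bool g_done = false;

void ProcessFrame(const Frame&) { /* identical work function for every frame */ }

void Worker()
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(g_mtx);
        g_cv.wait(lock, [] { return !g_fifo.empty() || g_done; });
        if (g_fifo.empty())
            return;                     // done flag set and nothing left to do
        Frame f = g_fifo.front();
        g_fifo.pop();
        lock.unlock();
        ProcessFrame(f);                // heavy work happens outside the lock
    }
}

int main()
{
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i)
        pool.emplace_back(Worker);

    // producer: push a frame under the lock, then call g_cv.notify_one() ...

    { std::lock_guard<std::mutex> lock(g_mtx); g_done = true; }
    g_cv.notify_all();
    for (auto& t : pool)
        t.join();
    return 0;
}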
|
|
|
|
|
The idea with thread pooling sounds very interesting. I'm not sure how applicable it is though.
I should have provided a little bit more specific information. Apologies.
The other sensors communicate with the PC via Bluetooth asynchronously. Each of them sends a data packet a couple of bytes long. All work at roughly the same speed. The packets arrive in random order. That's not a problem as long as all the most recent packets are received.
In terms of the Kinect, I use almost all streams except for sound and color.
The idea is that as the samples arrive, including the multi-source frame from the Kinect, their respective readers fire an event. Once the WaitForMultipleObjects() function sees that all the expected events have fired, it unblocks and the data is copied into a custom frame class before being pushed onto the FIFO.
On the consumer side, things look a little more interesting. I can't afford to drop any frames from the Kinect. One of the heaviest tasks that needs to be carried out is running the Kinect Fusion algorithm. It runs best on the GPU. I am not sure if this task can be parallelized on a standard PC. Fusion runs much more slowly on the CPU. Maybe it would be possible to run two instances of Fusion, one on the GPU and the other on the CPU, but I don't know how much sense that would make.
Obviously, one of the bottlenecks is the throughput of the given GPU.
I'm trying to develop this program in such a way that its performance would vary depending on the PC's specifications, in particular the GPU and RAM. Poorer machines would process slowly, whereas better ones would approach real-time performance. Some of the top gaming PCs can run Fusion at the Kinect's full frame rate.
From what I can see, the consumer side seems to work out best as a straight serial operation. Basically, it would be something like this (a rough sketch follows the list):
1. Pop frame from FIFO
2. Preprocess it (including other non-time-consuming processing)
3. Pass the frame to Fusion.
4. Loop back to 1. if not complete.
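A rough sketch of that loop (PopFromFifo, Preprocess and PassToFusion are stand-ins for my own code):

struct Frame { /* depth frame plus the sensor packets */ };

// Stand-ins for the real implementations:
bool PopFromFifo(Frame& out)      { (void)out; return false; }  // blocks until a frame is ready
void Preprocess(Frame& f)         { (void)f; }                  // cheap per-frame work
bool PassToFusion(const Frame& f) { (void)f; return false; }    // feeds Kinect Fusion

void ConsumerLoop()
{
    Frame f;
    while (PopFromFifo(f))          // 1. pop a frame from the FIFO
    {
        Preprocess(f);              // 2. preprocess it
        if (!PassToFusion(f))       // 3. pass the frame to Fusion
            break;                  // 4. stop when complete, otherwise loop back to 1
    }
}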
I hope it's a bit clearer now what kind of application this is and what sort of requirements it has.
I am not an experienced professional coder; I just use common sense. The best structure for this program that I could come up with is the one described in the previous posts, and I don't see a better way of structuring it. I greatly appreciate your input, guys, and look forward to more opinions and suggestions.
Thanks,
MW
|
|
|
|
|
Member 11703498 wrote: I can't afford to drop any frames from the Kinect
I don't know how you can say this and also say that you're producing data faster than you can process it. You HAVE to decimate if you're not keeping up. You'll be dropping data even if you don't want to once your queue is full. Best to deal with that some way or another so the results are predictable.
Member 11703498 wrote: It runs best on the GPU. I am not sure if this task can be parallelized on a standard PC. Fusion runs much more slowly on the CPU. Maybe it would be possible to run two instances of Fusion, one on the GPU and the other on the CPU, but I don't know how much sense that would make.
GPUs are powerful because they can run in real time and use parallelism well; make sure you're taking advantage of that.
|
|
|
|
|
Albert Holguin wrote: Member 11703498 wrote: I can't afford to drop any frames from the Kinect
I don't know how you can say this and also say that you're producing data faster than you can process it. You HAVE to decimate if you're not keeping up. You'll be dropping data even if you don't want to once your queue is full. Best to deal with that some way or another so the results are predictable.
Not necessarily.
Dropping frames is not a good idea when they are used by the Kinect Fusion algorithm. The algorithm simply fails if consecutive frames supply data that differs too much from frame to frame. This typically happens when frames are dropped, or when the Color stream causes the Kinect's frame rate to drop to roughly half of 30 fps (it's just a feature, or downside one should say, of this particular sensor).
The queue won't fill up. The system is required to have enough RAM available to the program, and the task should complete with a certain number of frames stored in the queue. The nature of the application is to do a one-off job, not continuous heavy lifting. If the task fails for whatever reason, say the number of acquired frames was insufficient, or something along those lines, then it can be repeated.
I am not that proficient at parallelizing things on GPUs. From what I've seen, the Fusion algorithm utilizes the GPU's resources to the max. A good GPU would actually make the queue and the high RAM requirement redundant. For the time being, though, the best approach I can see is to stick with the queue and a large amount of RAM.
|
|
|
|
|
Hi community,
I use this example from CodeProject:
Creating your first Windows application[^]
to learn more about SDI programming, and later about MDI.
Now I am trying to add a CListCtrl to show some data in a list, but where do I initialize the CListCtrl as a member?
In my class derived from CView, or somewhere else?
In a dialog-based application I know how to do this, but not here.
Is there an example to learn more about SDI and MDI programming?
Best regards, and thanks for any help,
Arrin
|
|
|
|
|
Using the MFC Application Wizard to create the project, you opted for Single Document, correct? On the Generated Classes screen, you could have CListView as the base class for the view.
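If you go that route, a minimal sketch looks something like this (CMyView and the column names are just examples):

void CMyView::OnInitialUpdate()
{
    CListView::OnInitialUpdate();

    CListCtrl& list = GetListCtrl();                  // the list control embedded in the view
    list.ModifyStyle(0, LVS_REPORT);                  // report style so the columns are shown
    list.InsertColumn(0, _T("Name"), LVCFMT_LEFT, 120);
    list.InsertColumn(1, _T("Value"), LVCFMT_LEFT, 80);

    int item = list.InsertItem(0, _T("First"));       // add a row
    list.SetItemText(item, 1, _T("42"));
}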
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
|
|
|
|
|
Hi,
thank you very much, it works, my first list view!
Now I can extend my app and learn more.
Best regards
Arrin.
|
|
|
|
|
Academic question.
Which do you prefer: using the preprocessor directive #define to replace case values (for example, "case 0:" becomes "case NORTH:" with #define NORTH 0), or using an enum?
Here is some sample code using an enum.
Enumerated variables have a natural partner in the switch statement, as in the following code example.
#include <stdio.h>

enum compass_direction
{
    north,
    east,
    south,
    west
};

enum compass_direction get_direction()
{
    return south;
}

/* To shorten example, not using argp */
int main()
{
    enum compass_direction my_direction;
    puts("Which way are you going?");
    my_direction = get_direction();
    switch (my_direction)
    {
        case north:
            puts("North? Say hello to the polar bears!");
            break;
        case south:
            puts("South? Say hello to Tux the penguin!");
            break;
        case east:
            puts("If you go far enough east, you'll be west!");
            break;
        case west:
            puts("If you go far enough west, you'll be east!");
            break;
    }
    return 0;
}
|
|
|
|
|
As already answered, enum is better.
One advantage is that modern compilers can emit a warning when not all enum values are handled by a switch statement, so you will be notified if you forget to handle a newly added value.
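For example, gcc and clang with -Wswitch will warn about a missing case (illustrative snippet):

enum compass_direction { north, east, south, west };

const char* describe(enum compass_direction d)
{
    switch (d)      /* warning: enumeration value 'west' not handled in switch */
    {
    case north: return "north";
    case east:  return "east";
    case south: return "south";
    }
    return "unknown";
}

With #define'd constants the compiler has no idea what the valid set of values is, so it cannot give you that kind of warning.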
|
|
|
|
|
enum......
Preprocessor directives can make troubleshooting and reading your code tricky; it's not worth going down that route unless you have no alternative, and in this case you have a perfectly viable solution with an enum.
|
|
|
|
|
<pre lang="Thanks for all replies, I have implemented basic code in SAM3x8E processor - Arduino Due and it works as expected
Thanks
// test input assigment array
#define MAX_INPUT 32
int Input[MAX_INPUT] = {42, 2, 3, 4, 6, 7, 8}; // TODO assign real
volatile byte bInputStateArray[MAX_INPUT]; // state machine array
// test using enum
enum State
{
LED_on,
LED_off,
TEST
};
volatile enum State State_Array[MAX_INPUT];
do
{
switch (State_Array[iInputPin])
{
case LED_on:
{
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("case LED_on ");
#endif
#endif
State_Array[iInputPin] = LED_off;
break;
}
case LED_off:
{
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("case LED_off ");
#endif
#endif
State_Array[iInputPin] = TEST; // illegal case test
break;
}
case NOT_IMPLEMENTED:
{
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("case LED_off ");
#endif
#endif
State_Array[iInputPin] = TEST; // illegal case test
break;
}
default:
#ifdef DEBUG
#ifdef LCD_OUTPUT
Utility_LCD_Process("Invalid case ");
#endif
#endif
}
} while (true);
"></pre>
One more question: since the state is changed by an interrupt, it needs to be declared volatile. It compiles with "volatile enum ...", so it should be correct syntax, but I cannot test it yet as the hardware is not ready; I will try to emulate the interrupt.
Am I on the right track?
|
|
|
|
|
No, you don't have to declare the enum itself volatile: it is just an enumerated series of constants, and constants can't change.
What you must declare volatile is your input variable, because that is the one that changes under interrupt. This tells the compiler to take care when optimizing and to make no assumptions about the variable's value; if you don't, there will be cases where the compiler assumes the variable still holds a specific value and omits some code or uses a stale value.
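A tiny illustration of the difference (the names here are made up):

// The flag is written by the ISR and read in the main loop, so it must be volatile.
volatile byte inputReady = 0;

void myISR()
{
    inputReady = 1;              // the interrupt only sets the flag
}

void setup()
{
    // attachInterrupt(...) for the relevant pin goes here
}

void loop()
{
    // Without volatile the compiler may read inputReady once, see 0,
    // and optimise the test into an endless wait.
    while (inputReady == 0) { /* wait */ }
    inputReady = 0;
    // ... handle the input ...
}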
|
|
|
|
|
Thanks,
the "problem" is that Arduino IDE "compiler" is so "automated" AKA stuff is hidden from user.
It did not complain when I tested this base code.
At present the input "pins" array is not volatile and I'll change that.
I have a small additional challenge implementing the ISR (interrupt service routine).
The interrupt code involves a callback and is defined by the Arduino API.
The ISR does not accept any parameters; consequently, each interrupt process implementation has to have its own ISR. So, in theory, the input (variable) is processed in only one place.
It is a convoluted mess and I am working on it.
But I needed the basic state machine to work first.
So far, so good.
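For reference, this is roughly the per-pin shape I'm aiming for (the pin numbers and names are just examples):

#define MAX_INPUT 32
volatile byte bInputStateArray[MAX_INPUT];     // each ISR only flags its own slot

void isrPin42() { bInputStateArray[0] = 1; }   // parameterless ISRs, one per input pin
void isrPin43() { bInputStateArray[1] = 1; }

void setup()
{
    pinMode(42, INPUT_PULLUP);
    pinMode(43, INPUT_PULLUP);
    attachInterrupt(42, isrPin42, FALLING);    // on the Due, attachInterrupt takes the pin number
    attachInterrupt(43, isrPin43, FALLING);
}

void loop()
{
    // the main state machine reads bInputStateArray[] and clears the flags here
}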
|
|
|
|