|
Lamefif wrote: notifyData.uFlags = NIF_ICON | NIF_TIP | NIF_MESSAGE | NIF_STATE;
What if you remove NIF_STATE?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
Lamefif wrote: something wrong with this line?
::strcpy((char*)notifyData.szTip, "hello");
If it's a Unicode build then yes, there's something wrong with that line.
I would recommend NOT using casts unless you need to. Write the code without the casts; if the
compiler complains, then investigate why.
For a Unicode build, that line should be something like:
// Unicode only
::wcscpy(notifyData.szTip, L"hello");
or better yet
// generic
_tcscpy(notifyData.szTip, _T("hello"));
If this is indeed the problem, had you omitted the casts to char*, the compiler would have let
you know.
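If it helps, here's a portable sketch (standard C++, no Windows headers; the function names are just for illustration) of why the cast hides a real bug: strcpy packs narrow bytes into the wide buffer, so the buffer never holds the wide string you wanted.

```cpp
#include <cstring>
#include <cwchar>

// Returns true if buf holds the wide string L"hello".
bool isWideHello(const wchar_t* buf) {
    return std::wcscmp(buf, L"hello") == 0;
}

// The wrong way: the cast silences the compiler, but strcpy writes
// narrow bytes into a wide buffer, so the wide string is garbage.
void wrongCopy(wchar_t* buf) {
    std::strcpy(reinterpret_cast<char*>(buf), "hello");
}

// The right way: a wide copy of a wide literal.
void rightCopy(wchar_t* buf) {
    std::wcscpy(buf, L"hello");
}
```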
Mark
-- modified at 14:23 Monday 16th July, 2007
Mark Salsbery
Microsoft MVP - Visual C++
"Great job team! Head back to base for debriefing and cocktails."
|
thank you mark
|
You're welcome
Also check out DavidCrow's reply...you are using a flag that indicates members of the struct are
valid but you didn't show those members being initialized.
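To sketch that point portably (the struct and flag names below are made up for illustration, not the real Shell_NotifyIcon types): zero the whole struct first, then set only the members whose flags you actually pass, so every flag is a promise you've kept.

```cpp
#include <cstring>

// Stand-in for a flags-plus-members API. A bit in uFlags is a promise
// that the matching member holds a valid value.
const unsigned MYF_TIP   = 0x1;
const unsigned MYF_STATE = 0x2;  // analogous to NIF_STATE: don't set it
                                 // unless you filled in the state member

struct FakeNotifyData {
    unsigned uFlags;
    char     szTip[64];
    unsigned dwState;
};

FakeNotifyData makeData(const char* tip) {
    FakeNotifyData d;
    std::memset(&d, 0, sizeof(d));   // start from all-zero
    d.uFlags = MYF_TIP;              // only claim what we filled in
    std::strncpy(d.szTip, tip, sizeof(d.szTip) - 1);
    return d;
}
```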
Cheers,
Mark
Mark Salsbery
Microsoft MVP - Visual C++
"Great job team! Head back to base for debriefing and cocktails."
|
How do I make sure the LPCRITICAL_SECTION I have is a valid one? Suppose some other thread called DeleteCriticalSection() on the same pointer; the pointer then becomes invalid and the behaviour is undefined. How can I make sure this has not happened?
|
xcavin wrote: So how can I make sure this has not happened?
Use good coding practices!
Thread sync objects should be managed by one object/class/thread.
Of course, you're free to code any way you want, but if this is an issue then something is wrong.
Mark
Mark Salsbery
Microsoft MVP - Visual C++
"Great job team! Head back to base for debriefing and cocktails."
|
How do I make sure the LPCRITICAL_SECTION I have is valid?
|
It's valid from the time InitializeCriticalSection() is called until DeleteCriticalSection()
is called.
It's only a structure - there's no handle or function you can use to check its validity.
It's up to you to manage its scope.
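If it's useful, here's that discipline sketched in portable C++, with std::mutex standing in as an assumption for CRITICAL_SECTION: the lock is constructed with the owning object and destroyed with it, so it is valid for exactly the object's lifetime and no caller can ever see a deleted lock.

```cpp
#include <mutex>

// One owner manages the lock's whole lifetime. The member mutex plays
// the role of an initialized CRITICAL_SECTION scoped to the object.
class Counter {
public:
    void increment() {
        std::lock_guard<std::mutex> guard(lock_);  // like EnterCriticalSection
        ++value_;
    }                                              // like LeaveCriticalSection
    int value() const { return value_; }
private:
    std::mutex lock_;  // lives exactly as long as the Counter
    int value_ = 0;
};
```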
Mark
Mark Salsbery
Microsoft MVP - Visual C++
"Great job team! Head back to base for debriefing and cocktails."
|
Mark Salsbery wrote: It's only a structure - there's no handle or function you can use to check its validity.
It's up to you to manage its scope.
From the dump using CDB I can see if it's uninitialized. So I was wondering if this can be done from my program itself, so that there is no need to create a dump!
|
xcavin wrote: ...see if its uninitialized.
There really shouldn't be any reason for it not to be. This falls under the realm of good programming techniques.
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
DavidCrow wrote: There really shouldn't be any reason for it not to be. This falls under the realm of good programming techniques.
Sorry to be impolite.
Leave the programming techniques aside.
My question is simple: if the debugger can find it, how can it be done within my program?
|
How does the debugger find it?
If I put the line
CRITICAL_SECTION cs;
in my code, it's uninitialized. In a debug build at runtime, the entire struct is filled with
0xCC bytes. The only way I see to check for validity is to initialize the struct to some values that
can never occur when the CS is initialized.
Maybe:
CRITICAL_SECTION cs;
memset(&cs, 0xFF, sizeof(CRITICAL_SECTION));
It's hard to "Leave the programming techniques" when this shouldn't be an issue if it's used
properly.
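For what it's worth, the sentinel idea can be sketched portably with a made-up stand-in struct (this is an illustration, NOT the real CRITICAL_SECTION layout): fill the struct with a pattern a live CS should never hold, and test for that pattern.

```cpp
#include <cstddef>
#include <cstring>

// FakeCS is a hypothetical stand-in used only to demonstrate the trick.
struct FakeCS {
    long lockCount;
    long recursionCount;
};

// Fill with 0xFF -- a pattern assumed never to occur in a live struct.
void markUninitialized(FakeCS* cs) {
    std::memset(cs, 0xFF, sizeof *cs);
}

// "Uninitialized" here just means every byte still holds the sentinel.
bool looksUninitialized(const FakeCS* cs) {
    const unsigned char* p = reinterpret_cast<const unsigned char*>(cs);
    for (std::size_t i = 0; i < sizeof *cs; ++i)
        if (p[i] != 0xFF) return false;
    return true;
}
```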
Mark
Mark Salsbery
Microsoft MVP - Visual C++
"Great job team! Head back to base for debriefing and cocktails."
|
Imagine a multiprocessor machine. The object which has a critical section member is deleted just before another thread tries to lock it, and the object's destructor calls DeleteCriticalSection(). I know I could use a global synchronisation object to resolve this issue, but that sync object would cost me a lot of time, and it would be my last choice.
|
Right. I'm following you.
No thread should be deleting the critical section, except for a thread that's in charge of the
lifetime of the critical section.
The problem here is that an object shouldn't be accessible by a thread while it's being destructed or
after it's destructed.
If an object has its own CS for access that's fine.
In your scenario you also need synchronized access external from the object to control access to
the object's scope/lifetime.
Make sense?
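That lifetime point can be sketched portably, using std::shared_ptr as an assumed stand-in for the external lifetime control (and std::mutex for the CS): a thread that still holds a reference keeps the object, and the lock inside it, alive, so the lock can never be destroyed out from under that thread.

```cpp
#include <memory>
#include <mutex>

struct Shared {
    std::mutex lock;  // protects data; valid for the object's whole life
    int data = 0;
};

// Passing the shared_ptr by value means the worker holds a reference
// for as long as it runs -- the destructor cannot fire underneath it.
void worker(std::shared_ptr<Shared> s) {
    std::lock_guard<std::mutex> guard(s->lock);  // safe: s keeps *s alive
    ++s->data;
}
```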
Mark
Mark Salsbery
Microsoft MVP - Visual C++
"Great job team! Head back to base for debriefing and cocktails."
|
IMHO you are trying to solve the wrong problem.
xcavin wrote: The object which has a critical section member is deleted just before another thread tries to use it.
Your problem appears to be in your management "use model" of that object. The fact that it has a critical section member is irrelevant, or perhaps even a bad design on its own.
|
led mike wrote: IMHO you are trying to solve the wrong problem.
True, I agree. But I can't help it; I need to fix it this way. I cannot afford to reduce the speed any more.
|
A critical section should be regarded as an opaque data structure. See here[^] for a description of what this means. Even if you did find a way to validate it you could only do so by ignoring the opaqueness: such techniques could break in a future OS or even after applying a service pack. David and Mark are giving you sound advice and my advice is to follow it.
Steve
|
Does this[^] article help at all?
|
Quite frankly the only reason to call DeleteCriticalSection is if you're containing a critical section within some dynamically allocated object. In which case, the code that deletes that object should delete the critical section.
At process exit time, why waste the time to free the critical section? Windows is only going to throw away the whole address space anyway. Indeed, why free the contents of the heap?
All DeleteCriticalSection really does is close the handle to the event object that was allocated if a thread ever had to block on entering the critical section. Windows closes handles that were still open when a process exited.
|
Mike Dimmick wrote: At process exit time, why waste the time to free the critical section?
The process should never exit. But unfortunately another thread is hung after trying to use this deleted, hence uninitialized, critical section!
|
This need not be the case. Here's what critical sections currently look like:
typedef struct _RTL_CRITICAL_SECTION_DEBUG {
    WORD Type;
    WORD CreatorBackTraceIndex;
    struct _RTL_CRITICAL_SECTION *CriticalSection;
    LIST_ENTRY ProcessLocksList;
    DWORD EntryCount;
    DWORD ContentionCount;
    DWORD Spare[ 2 ];
} RTL_CRITICAL_SECTION_DEBUG, *PRTL_CRITICAL_SECTION_DEBUG, RTL_RESOURCE_DEBUG, *PRTL_RESOURCE_DEBUG;

typedef struct _RTL_CRITICAL_SECTION {
    PRTL_CRITICAL_SECTION_DEBUG DebugInfo;
    LONG LockCount;
    LONG RecursionCount;
    HANDLE OwningThread;
    HANDLE LockSemaphore;
    ULONG_PTR SpinCount;
} RTL_CRITICAL_SECTION, *PRTL_CRITICAL_SECTION;

typedef RTL_CRITICAL_SECTION CRITICAL_SECTION;
See here[^] for a description. In short, all CRITICAL_SECTIONs are linked together linked-list style, and doing what you're suggesting may compromise the list and is thus likely to end in tears.
As much as possible you should just follow the rules and try not to make any assumptions.
Steve
|
I have to draw some real-world coordinates on a device context (screen only).
I have a "view" with a size of 500x500 pixels. To specify that these 500x500 pixels actually represent a 20m x 20m view, do I have to use CDC::SetMapMode(MM_HIMETRIC) and CDC::SetViewportOrg(aPoint) to switch the mapping mode and the viewport origin?
I have created a simple sample to test this out:
void CModelView::OnPaint()
{
    CPaintDC dc(this);
    CRect rect;
    GetClientRect(rect);
    CPoint origin;
    origin.x = rect.left;
    origin.y = rect.bottom;
    dc.SetMapMode( MM_HIMETRIC );
    dc.DPtoLP(&origin);
    dc.SetViewportOrg(0, rect.Height());
    dc.DPtoLP(&rect);
    dc.FillRect( rect, &m_Brush);
    dc.Rectangle(CRect( 100, 100, 200, 200 ));
}
Some questions:
I create my CModelView with a fixed size in pixels, but if I set the size to be square, my CModelView does not look like a square. I assume I have to compensate for the resolution of the screen?
Is there a way to make the viewport a fixed size? MSDN says that SetViewportExt and SetWindowExt are ignored for MM_HIMETRIC. How do I make my "view" behave like it's 20m x 20m? (This is the part I have not yet figured out.)
Thanks in advance if you have links to articles (here or elsewhere), or hints and tips.
Max.
|
MM_ISOTROPIC
Ensure you're calculating your square in world coordinates so that a square is a square and a circle is a circle. Also, MM_ISOTROPIC will force your extents to preserve the aspect ratio in case you deviate off course a bit. MM_ANISOTROPIC would allow distorting the output, but I've never found it of much value.
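The aspect-preserving scale that an isotropic mapping enforces can also be computed by hand. Here's a hypothetical helper (plain arithmetic, not a GDI call) showing the idea: take the smaller of the two per-axis ratios so the whole world extent fits and squares stay square.

```cpp
#include <algorithm>

// One scale for both axes: the smaller of the per-axis ratios, so the
// full worldW x worldH extent fits inside deviceW x deviceH undistorted.
double isotropicScale(double worldW, double worldH,
                      double deviceW, double deviceH) {
    return std::min(deviceW / worldW, deviceH / worldH);
}
```

For the 500x500-pixel, 20m x 20m view from the question this gives 25 pixels per metre on both axes.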
|
I am using midiOutLongMsg() to send messages to hardware through the MIDI port, and the hardware sends acknowledgements back through the same port.
I suspect that the WindowProc() function I am using for getting acknowledgements from the hardware is missing some messages. Is there a chance of WindowProc() missing MIDI messages? If so, how can I make it more reliable?
Best Regards,
Suman
|