|
|
Hi, I am confused by screen/window/client coordinates in Windows programming.
As I understand it, the three coordinate origins are:
screen top-left point: spoint(Xs,Ys);
window top-left point: wpoint(Xw,Yw);
client top-left point: cpoint(Xc,Yc);
so the relative positions of the three points are:
spoint ---------------------------- (screen)
|
|   wpoint ----------------------- (window)
|   |
|   |   cpoint ------------------- (client)
|   |   |
GetWindowRect(&rect) gets wpoint;
ScreenToClient(&rect) gets which point? If it gets cpoint, then is
cpoint(x,y) based on wpoint, or on spoint?
And in a dialog template:
CONTROL "",IDC_STATIC_TST,"Static",SS_BLACKFRAME,80,83,118,35
Is 80,83,118,35 based on the client top-left point?
|
|
|
|
|
Hi,
It's all relative, my dear Watson.
econy wrote: ScreenToClient(&rect) gets which point?
ScreenToClient will convert a RECT in screen coordinates (your 'spoint' system) to be relative to the client area (your 'cpoint'). In your example the top and left will be negative coordinates relative to the client top/left.
Window Coordinate System[^]
Best Wishes,
-David Delaune
|
|
|
|
|
Hi
I am new to C++ and I have an assignment to write a program that computes the average daily solar energy collectable from January 1st to December 31st per square meter (kWh/m^2/day) using a 2-axis tracking collector in Albuquerque. Can anyone help me with this?
|
|
|
|
|
Well... get started and ask specific questions. I think you'll get more fruitful responses that way. Good luck!
|
|
|
|
|
Thanks Albert
I will attach more specific information.
|
|
|
|
|
It's very simple, from an IT perspective. Your hardware, whatever it is that measures radiation, plugs into your serial/USB port; your app opens that port, takes readings on a periodic basis, collates them, and averages them.
The problem is how you distinguish solar radiation at the surface from back radiation from greenhouse gases (assuming you will be measuring at the surface and not at the top of the atmosphere).
|
|
|
|
|
I have a library which I cannot modify. This library is called from a console app, and as part of its processing it dumps a large and variable amount of debug/trace info to stdout. I'd like to be able to read what the library sends to stdout from the calling code, in order to extract certain pieces of info for analysis. Basically, I want to capture some of what the library prints during a call, save that info until the call completes, then re-print it in a better format. That would save me from having to search hundreds of lines of output to find the particular bits I need.
(This is Windows, C++.)
Is such a thing possible?
|
|
|
|
|
|
I need to do this within the calling app, not from the shell.
|
|
|
|
|
Do you need this for troubleshooting or all the time? It sounded like you only need it for troubleshooting, in which case why not use the shell's redirection options?
Otherwise, you can always try reassigning the stream using freopen[^].
Here's an actual example...
http://support.microsoft.com/kb/58667[^]
|
|
|
|
|
Albert Holguin wrote: Do you need this for trouble-shooting or all the time
all the time.
|
|
|
|
|
Ok, try out the freopen option... hopefully that gets the job done.
|
|
|
|
|
|
Hey out there,
I've heard that the use of timers in large projects is bad for performance.
Is this true?
I'm wondering because I have written a new control using VC++/MFC, and I use one timer in it.
The timer is created once, and the first time it fires I kill it.
Now someone told me I should use ::GetTickCount() instead: save the first value and compare against it again and again.
That looks more performance-hungry than my timer solution.
So what's your opinion?
Please don't answer that I should just run a test; I don't have the time to test such things.
And please answer with real facts, not just "I like timers more because I like them more" or "I like the TickCount solution more".
Hoping for a good discussion
|
|
|
|
|
I think it largely depends on the specific circumstances... for example, I've used timers to update fields that change at extremely high rates. Under those circumstances, timers were actually a HUGE improvement over an old system that updated UI fields as often as new data was available.
So my opinion is: use what works for you.
|
|
|
|
|
Using anything in an inappropriate way may be bad for performance. Using it correctly shouldn't.
Your question isn't so much about timers being bad in general, but about the proper use of one. The size of the application has nothing to do with it at all.
As for our opinion: it's impossible to tell if you're not more specific. It's ironic that you're asking for specific answers, yet fail to offer the minimal information required to answer the question properly. There are plenty of timer interfaces you might be using; you didn't specify which.
As for GetTickCount, that is a bad way to measure performance, since it only checks wall-clock time, not the CPU time your application uses: if other applications are running at the same time and slowing the whole system down, you'll get much higher values! If your purpose is to measure program performance, a more suitable function would be clock()[^]
|
|
|
|
|
Stefan_Lang wrote: Using anything in an inappropriate way may be bad for performance. Using it correctly shouldn't.
I like that...
|
|
|
|
|
I like that too.
Used correctly, a timer performs well.
Sometimes polling GetTickCount is a workable solution, but it wastes too much CPU time.
A timer, on the other hand, provides a good way to be notified when the time is up, and you can do other work in the meantime.
|
|
|
|
|
What are you talking about? Timers do not have bad performance! Timers don't waste CPU! All they do is read some hardware register. We're talking nanoseconds here!
Keep in mind that almost all timer functions can measure at a resolution of only about 20-50 ms at best, even though the unit typically used for the result value is ms. Calling them more often than 100 times a second is therefore utterly pointless and stupid. Calling them less often will not result in any performance loss that you are able to measure!
|
|
|
|
|
I am so sorry you misunderstood my words; my English is very poor!
I agree with you that timers don't waste CPU, because they provide something that can be waited on; when the time is out, the callback function is invoked.
For nanosecond-level measurement, clock() may be right, but Windows also gives us a high-precision clock API: QueryPerformanceCounter() with QueryPerformanceFrequency().
But I think C3D1 just wants to know which performs better, GetTickCount or a timer.
I think you are a good guy. Can I make friends with you?
|
|
|
|
|
I understand that english is not your first language - no need to apologize.
As for "waiting" or "callback", I don't see the OP asking about that at all. If his question were about waiting, he should use Sleep()[^] , not timer functions.
I understand the question as one about program performance, i.e. the CPU time it uses. Since the indicated functions don't measure CPU time, their properties are irrelevant. That is why I suggested clock()[^].
You are correct that it is possible to use higher precision timers. However, most of the time the standard timers are sufficiently accurate for fixing performance issues. That said, the OP suggested comparing times "again and again", implying constant polling - and that doesn't make sense at all! That is why I made my initial statement about 'inappropriate use' of timers.
There is one other possible scenario that the OP may be referring to: waiting for an operation to finish, but not for longer than a predetermined amount of time. If that is the case, I'd suggest using WaitForMultipleObjects()[^] .
|
|
|
|
|
Sorry, maybe I misunderstood the OP's question.
I understood the question as the OP wanting to delay execution.
You are right: if he wants to wait, he should use Sleep.
From this, I can tell you are a good programmer who understands the OS well. Nice to meet you!
There is another question: sometimes, when you just want to delay execution, Sleep, timers, or GetTickCount are all workable.
Sleep doesn't waste any CPU time; polling GetTickCount does. But sometimes a busy loop on GetTickCount is the fastest way to wait a predetermined amount of time, like a spinlock.
What do you think about that?
|
|
|
|
|
No! Don't use GetTickCount if all you want to do is wait. Ever.
What it amounts to is a busy loop that eats all available CPU time as it is repeated hundreds of millions of times per second! You're slowing other programs down by doing that, because one CPU core will be completely locked by your program! If your CPU has only one core, the computer will freeze! If your program has any GUI parts that the user interacts with, those will freeze too. A user sitting in front of the computer may think the program has crashed, and restart it.
Not to mention that a modern OS has many tasks running in the background that you are preventing from working!
Do not needlessly eat CPU time!
|
|
|
|
|
Yeah, I couldn't agree with you more!
Still, sometimes I need to delay for a short time very precisely, and I think a busy loop is useful there, for example when waiting for UART data in a real-time OS.
May I have your email? I'd like to stay in contact with you!
|
|
|
|
|