Apologies for the shouting but this is important.
When answering a question please:
- Read the question carefully
- Understand that English isn't everyone's first language so be lenient of bad spelling and grammar
- If a question is poorly phrased then either ask for clarification, ignore it, or mark it down. Insults are not welcome
- If the question is inappropriate then click the 'vote to remove message' button
Insults, slap-downs and sarcasm aren't welcome. Let's work to help developers, not make them feel stupid.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
|
For those new to message boards, please try to follow a few simple rules when posting your question.
- Choose the correct forum for your message. Posting a VB.NET question in the C++ forum will end in tears.
- Be specific! Don't ask "can someone send me the code to create an application that does 'X'?" Pinpoint exactly what it is you need help with.
- Keep the subject line brief, but descriptive. E.g. "File Serialization problem"
- Keep the question as brief as possible. If you have to include code, include the smallest snippet of code you can.
- Be careful when including code that you haven't made a typo. Typing mistakes can become the focal point instead of the actual question you asked.
- Do not remove or empty a message if others have replied. Keep the thread intact and available for others to search and read. If your problem was answered then edit your message and add "[Solved]" to the subject line of the original post, and cast an approval vote to the one or several answers that really helped you.
- If you are posting source code with your question, place it inside <pre></pre> tags. We advise you also check the "Encode HTML tags when pasting" checkbox before pasting anything inside the PRE block, and make sure "Ignore HTML tags in this message" check box is unchecked.
- Be courteous and DON'T SHOUT. Everyone here helps because they enjoy helping others, not because it's their job.
- Please do not post links to your question in one forum from another, unrelated forum (such as the lounge). It will be deleted.
- Do not be abusive, offensive, inappropriate or harass anyone on the boards. Doing so will get you kicked off and banned. Play nice.
- If you have a school or university assignment, assume that your teacher or lecturer is also reading these forums.
- No advertising or soliciting.
- We reserve the right to move your posts to a more appropriate forum or to delete anything deemed inappropriate or illegal.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
|
How to get the title of a CPropertyPage before creating the CPropertySheet:
CPropertyPage somePage;
CPropertySheet m_sheet;
TRACE(_T("Adding page '%s'\n"), somePage.GetTitle());
m_sheet.AddPage(&somePage);
...
m_sheet.Create(....);
|
You cannot get the title because you didn't set it.
Use the overloaded CPropertyPage constructor that accepts a caption resource ID parameter.
If a string with this caption ID exists, it will be stored in the
PROPSHEETPAGE m_psp structure (in its pszTitle member), which is a member of the CPropertyPage class.
Then you could do:
CString title = somePage.GetPSP().pszTitle;
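A minimal sketch of the whole flow (IDD_SOME_PAGE and IDS_SOME_PAGE are assumed resource IDs, not from the original question):
CPropertyPage somePage(IDD_SOME_PAGE, IDS_SOME_PAGE); // overload that loads the caption
CPropertySheet m_sheet;
CString title = somePage.GetPSP().pszTitle; // set by the ctor, available before Create()
TRACE(_T("Adding page '%s'\n"), (LPCTSTR)title);
m_sheet.AddPage(&somePage);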
|
Hi,
How can I build my x86 project for ARM on my x86 PC with VS2022 installed?
|
I meant building for ARM from an Intel PC with VS2022.
modified 25-Sep-24 23:04pm.
|
I'm exploring Win32's HID API to read/write from/to various devices.
So far, I've been able to read responsive input for two wired devices -- an XBOX 360 Compliant controller and a steering wheel.
This is good, but buffers filled by reading Bluetooth HID devices with ReadFile are not updating with user input. I've been able to test two devices: an Xbox One Wireless Controller (VID=045E, PID=02E0), and an Xbox One S Controller [Bluetooth] (VID=045E, PID=02FD).
What could be causing this? Are there any specific things that must be done before/while reading a Bluetooth HID device?
Some notes:
• RawInput does not seem to be usable without a window (i.e., in a pure console program), since it requires WM_INPUT messages to function.
• XInput seems to exclusively support Xbox controllers. May use it for Xbox controllers specifically (especially for trigger separation).
• Have not explored DirectInput for controllers yet. Unsure if it's usable for modern controllers (including Bluetooth-based ones).
Here's the current code setup.
Code notes:
• The end goal is to read/write inputs for real-time programs (e.g., games).
• Tangentially, I'm not very experienced with Win32 and its file handling, so there are probably some issues with the general usage.
Code of interest
if (isXOneX) {
    HANDLE deviceHandle = CreateFile(devicePath, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    PHIDP_PREPARSED_DATA preparsedData;
    HidD_GetPreparsedData(deviceHandle, &preparsedData);
    HIDP_CAPS caps;
    HidP_GetCaps(preparsedData, &caps);
    // std::vector instead of a variable-length array (VLAs are not standard C++)
    std::vector<HIDP_BUTTON_CAPS> hidpButtonCaps(caps.NumberInputButtonCaps);
    HidP_GetButtonCaps(HIDP_REPORT_TYPE::HidP_Input, hidpButtonCaps.data(), &caps.NumberInputButtonCaps, preparsedData);
    DWORD reportId = hidpButtonCaps[0].ReportID;
    auto inputReportBufferLength = caps.InputReportByteLength + 1;
    std::vector<BYTE> inputReportBuffer(inputReportBufferLength);
    while (true) {
        OVERLAPPED overlapped;
        memset(&overlapped, 0, sizeof(overlapped));
        if (ReadFileEx(deviceHandle, inputReportBuffer.data(), inputReportBufferLength, &overlapped, NULL)) {
            DWORD bytesTransferred;
            if (GetOverlappedResult(deviceHandle, &overlapped, &bytesTransferred, FALSE)) {
                for (auto i = 0; i < inputReportBufferLength; i++) {
                    std::cout << std::setfill('0') << std::setw(3) << (INT)inputReportBuffer[i] << " ";
                }
                std::cout << std::endl;
            }
        }
        else {
            CancelIo(deviceHandle);
        }
    }
    HidD_FreePreparsedData(preparsedData);
    CloseHandle(deviceHandle);
}
free(deviceInterfaceDetailData);
Full code (without error prints)
#include <iostream>
#include <iomanip>
#include <vector>
#include <cstring>
#include <initguid.h>
#include <windows.h>
#include <setupapi.h>
#include <hidclass.h>
#include <hidsdi.h>
int main() {
    auto hDevInfo = SetupDiGetClassDevs(&GUID_DEVINTERFACE_HID, NULL, NULL, DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
    if (hDevInfo != INVALID_HANDLE_VALUE) {
        auto success = true;
        auto i = 0;
        while (success) {
            auto deviceInterfaceData = SP_DEVICE_INTERFACE_DATA();
            deviceInterfaceData.cbSize = sizeof(SP_DEVICE_INTERFACE_DATA);
            success = SetupDiEnumDeviceInterfaces(hDevInfo, NULL, &GUID_DEVINTERFACE_HID, i, &deviceInterfaceData);
            if (success) {
                // First call gets the required buffer size for the detail data
                SP_DEVICE_INTERFACE_DETAIL_DATA* deviceInterfaceDetailData = NULL;
                DWORD requiredSize = 0;
                auto detailSizeSuccess = SetupDiGetDeviceInterfaceDetail(hDevInfo, &deviceInterfaceData, NULL, 0, &requiredSize, NULL);
                deviceInterfaceDetailData = (SP_DEVICE_INTERFACE_DETAIL_DATA*)calloc(requiredSize, sizeof(BYTE));
                deviceInterfaceDetailData->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
                auto devInfoData = SP_DEVINFO_DATA();
                devInfoData.cbSize = sizeof(SP_DEVINFO_DATA);
                auto detailSuccess = SetupDiGetDeviceInterfaceDetail(hDevInfo, &deviceInterfaceData, deviceInterfaceDetailData, requiredSize, &requiredSize, NULL);
                auto devicePath = deviceInterfaceDetailData->DevicePath;
                // Identify known devices by the VID/PID substrings in the device path
                auto isX360 = strstr(devicePath, "vid_045e&pid_028e");
                auto isWheel = strstr(devicePath, "vid_044f&pid_b655");
                auto isXOneX = strstr(devicePath, "vid_045e&pid_02e0");
                auto isXOne = strstr(devicePath, "vid_045e&pid_02d1");
                auto isXOneS = strstr(devicePath, "vid_045e&pid_02fd");
                if (isXOneX) {
                    HANDLE deviceHandle = CreateFile(devicePath, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
                    PHIDP_PREPARSED_DATA preparsedData;
                    HidD_GetPreparsedData(deviceHandle, &preparsedData);
                    HIDP_CAPS caps;
                    HidP_GetCaps(preparsedData, &caps);
                    // std::vector instead of a variable-length array (VLAs are not standard C++)
                    std::vector<HIDP_BUTTON_CAPS> hidpButtonCaps(caps.NumberInputButtonCaps);
                    HidP_GetButtonCaps(HIDP_REPORT_TYPE::HidP_Input, hidpButtonCaps.data(), &caps.NumberInputButtonCaps, preparsedData);
                    DWORD reportId = hidpButtonCaps[0].ReportID;
                    auto inputReportBufferLength = caps.InputReportByteLength + 1;
                    std::vector<BYTE> inputReportBuffer(inputReportBufferLength);
                    while (true) {
                        OVERLAPPED overlapped;
                        memset(&overlapped, 0, sizeof(overlapped));
                        if (ReadFileEx(deviceHandle, inputReportBuffer.data(), inputReportBufferLength, &overlapped, NULL)) {
                            DWORD bytesTransferred;
                            if (GetOverlappedResult(deviceHandle, &overlapped, &bytesTransferred, FALSE)) {
                                for (auto i = 0; i < inputReportBufferLength; i++) {
                                    std::cout << std::setfill('0') << std::setw(3) << (INT)inputReportBuffer[i] << " ";
                                }
                                std::cout << std::endl;
                            }
                        }
                        else {
                            CancelIo(deviceHandle);
                        }
                    }
                    HidD_FreePreparsedData(preparsedData);
                    CloseHandle(deviceHandle);
                }
                free(deviceInterfaceDetailData);
                i++;
            }
        }
        std::cout << "No more to enumerate." << std::endl;
        SetupDiDestroyDeviceInfoList(hDevInfo);
    }
    return 0;
}
Error printouts were omitted for readability and code size (since this is quite huge already), and the post preview doesn't show a scroll bar. I'll post the as-is full code with error prints if needed.
modified 22-Sep-24 10:13am.
|
I am facing an odd issue in which the GUI freezes randomly when a button is clicked.
Basically, this is the sample code:
void dialog::OnButton1Clicked()
{
    m_threadCmd = Button1; // Button1 is an enum from commandEnumList
    AfxBeginThread(ui_ThreadExecuteCmd, this);
}

UINT dialog::ui_ThreadExecuteCmd()
{
    OWaitCursor waitCursor;
    switch (this->m_threadCmd)
    {
    case BUTTON1:
        if (!Func1())
        {
            ShowError();
        }
        break;
    }
    this->m_threadCmd = 0;
    return 0;
}

bool Func1()
{
    // Notify GUI via PostMessage to disable the controls
    // Do some processing
    // Notify GUI via PostMessage to enable the controls
    // return errorcode
}
The weird thing is that sometimes it works fine and sometimes the GUI freezes.
I see the UI buttons getting disabled, the operation being performed, then the UI buttons getting enabled again. The GUI freeze happens randomly after that; when it freezes, the buttons become unclickable and I am not receiving any message in PreTranslateMessage() for mouse clicks or anything else.
modified 21-Sep-24 5:10am.
|
Is it possible that you're performing a lengthy operation on the user interface thread, and that's why it's freezing?
From the "code" that you posted, it looks like that's exactly what you're doing.
If you do anything non-trivial when the button is clicked, then you should assign the task to a worker thread so that the UI thread is free to process UI inputs.
The difficult we do right away...
...the impossible takes slightly longer.
|
Oddly, they post different 'code' in QA.
GUI freezes randomly /MFC/C++[^]
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
Hi @jeron1, thanks for pointing it out. Updated the question there as well.
|
Thanks for the reply. It's not a lengthy operation (it hardly takes 1 sec). Also, I am doing exactly that: assigning the task to a worker thread. I have updated the code flow in the question; it will give you the exact idea of how I am doing it.
|
You're lucky it works at all. The sample code you posted posts messages to disable the controls, but those messages should never get processed because the UI thread is still busy with "Do some logical processing," so it never processes those messages to disable the controls.
The messages will stay queued up until the UI thread is done with your "Do some logical processing" and returns after posting the messages to re-enable the controls. Once the UI thread is back in the "idle state", meaning you're back in the message pump code picking up messages and dispatching them, only then will the messages you posted get processed. So that's where your app is "freezing".
Like Richard said, if you've got long-running processing going on, move that processing to a task or background thread. You'll also have to rewrite your code to wait for the task to complete before posting the messages to re-enable the controls.
The goal is to keep the UI thread available and running in the message pump as much as possible.
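Here is a minimal sketch of that pattern (CMyDialog, WM_APP_WORK_DONE, DisableControls and EnableControls are illustrative names, not from the original post):
#define WM_APP_WORK_DONE (WM_APP + 1) // assumed custom notification message

// Worker thread: does the slow work off the UI thread, then notifies the dialog.
UINT WorkerProc(LPVOID pParam)
{
    CMyDialog* pDlg = static_cast<CMyDialog*>(pParam);
    // ... long-running processing happens here, never on the UI thread ...
    ::PostMessage(pDlg->GetSafeHwnd(), WM_APP_WORK_DONE, 0, 0);
    return 0;
}

void CMyDialog::OnButton1Clicked()
{
    DisableControls();                // safe: we are on the UI thread here
    AfxBeginThread(WorkerProc, this); // returns immediately, so the pump keeps running
}

// Mapped with ON_MESSAGE(WM_APP_WORK_DONE, OnWorkDone) in the message map.
LRESULT CMyDialog::OnWorkDone(WPARAM, LPARAM)
{
    EnableControls();                 // runs on the UI thread via the message pump
    return 0;
}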
|
Zishan Ud787 wrote:
case BUTTON1:
This case is never going to execute because it's all uppercase (and what's getting assigned to m_threadCmd is not). Is that intentional?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
|
Unicode is 1 byte per character, that’s the Latin characters and the other symbols found on a standard keyboard.
Multibyte is Latin, Greek, Russian and everything else that exceeds the initial 256 symbols.
Is that how it works?
|
I thought a C++ char has the size of one byte. How can something that is greater than 1 byte (Unicode, multibyte) fit into a char?
|
It cannot; it is using an “encoding”, the most popular by far being UTF-8[^].
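For example, a minimal sketch (standard C++, nothing assumed beyond a UTF-8 encoded literal) of how a character bigger than one byte fits into plain chars:
#include <cstdio>
#include <cstring>

int main()
{
    // U+00E9 ('é') does not fit in one byte, so UTF-8 spreads it across two
    // char values (0xC3 0xA9); each char still holds exactly one byte.
    const char utf8[] = "\xC3\xA9";
    printf("code units: %zu\n", strlen(utf8)); // prints 2: two bytes, one character
    for (const unsigned char* p = (const unsigned char*)utf8; *p; ++p)
        printf("0x%02X ", *p); // prints 0xC3 0xA9
    printf("\n");
    return 0;
}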
Mircea
|
Mircea Neacsu wrote:
the most popular by far being UTF-8[^].
I'd say that the most popular file storage format is UTF-8.
As a working format, in RAM, UTF-16 is very common. E.g. it is the format used by all Windows APIs, which is more or less to say all Windows programs. Java uses UTF-16 in RAM, as do a lot of other modern languages.
It must be said that not all software that claims to use UTF-16 fully handles UTF-16 - only the BMP ("Basic Multilingual Plane"), so that all supported characters fit in one 16-bit code unit. The BMP didn't have space for latecomer alphabets, like those for a number of African or Asian languages. Most developers said: "But my program isn't aimed at such markets, so I'll ignore the UTF-16 surrogates for handling such characters as two 16-bit code units. I can treat text as if all characters are of equal width, 16 bits."
But a new situation has arisen: emojis have proliferated to a number far exceeding the number of Wingdings. They do not all fit in the BMP, so a number of them have been allocated in other planes. Don't expect the end user to know which emojis are defined in which planes and to refrain from using non-BMP emojis! If you are not prepared for them, your code may mess up the text badly.
Writing your own complete UTF-16 interpreter is not recommended. Use library functions! There is more to UTF-16 than just alternative planes: Some character codes are nonspacing, or combining (typically an accent and a character). So you cannot deduce the number of print positions from the length of the UTF-16 string - not even after considering control characters.
For "trivial" strings limited to Western alphabets, there usually is a fairly close correspondence between the number of UTF-16 code units and the number of positions. You can pretend that it is exact, but look out for cases that need to be treated as exceptions. I suspect that is what a lot of programmers do. 99,9% of Western text is free of exceptional cases, so the fixed-code-width assumption holds. Until, of course, emojis become common e.g. in file names. Note that UTF-32 does not provide an ultimate solution to all problems: You still may have to relate to nonspacing or combining characters!
Religious freedom is the freedom to say that two plus two make five.
|
I have to confess that I am a convert to the UTF-8 religion as preached in the UTF-8 Everywhere[^] manifesto. So much so that I've written a series of articles[^] on CP about using UTF-8 in Windows (you can find the whole series here[^]).
Some of your assertions are open to interpretation: Quote:
So you cannot deduce the number of print positions from the length of the UTF-16 string - not even after considering control characters.
Why would that be interesting from a programming point of view? From a typographical point of view, sure, but as programmers we don't usually concern ourselves with such minutiae.
The subject of emojis is another pet peeve of mine, so allow me a bit of a roundabout. Some evolutionary solutions have been reinvented many times: flight has been reinvented by insects, birds, mammals, you name it. However, there are some crucial points in evolution that happened only once. Photosynthesis and eukaryotic cells are prime examples, but so is alphabetic writing. Moving from pictographic writing, where a symbol represented a whole word, to one where a symbol represented a sound, was a magnificent achievement of the human spirit that opened the path to what we now call Western civilization. Now, if you buy at least some of my arguments, you can see how disappointed I am when this whole evolutionary path is turned back by the spread of emojis. No longer do we need the magic words of a Shakespearean sonnet when we can just put a heart and a smiley face. Bleah!
Mircea
|
Mircea Neacsu wrote:
Quote: So you cannot deduce the number of print positions from the length of the UTF-16 string - not even after considering control characters.
Why would that be interesting from a programming point of view?
If you code anything that is to be presented to a user, you will frequently have to relate to the physical space available, whether a 16-character single-line display on an embedded device or a field in a form on a desktop PC.
If you just send it the entire string, leaving it to the display unit to discard what won't fit, for one: You may upset the display device. Second: Maybe it is obvious to you that the first 'n' characters are displayed, but don't trust it: Many small-display devices display a rolling text, so the last 'n' characters are displayed. In either case, your customer may be less than satisfied with your solution. If you present floating point values with the number of decimal positions less than the internal precision (which is almost always the case), you may want to consider rounding the last displayed digit - don't expect a pure UI module to have any concept of floating point rounding! (Besides, it may want the values as separate digits, not as an FP value.)
Even if a value is not presented to a human user, it may be exchanged with another software module in textual format. The receiver may provide a limited size text buffer, or may require a minimum number of (valid) characters (possibly converted to 7-bit ASCII with zero parity, if it is an old *nix application!)
If your software has nothing at all to do with a user interface, you may still be handling data that you hand over to some software doing the UI. This software may put restrictions on the lengths of both prompt strings and data values. You may have to make decisions about what to display, either by some form of abbreviation (initial only, ellipsis, ...), leaving (semi-)optional parts out, etc.
I certainly can think of specific programming tasks that are completely unrelated to character string length. But to me, those are special cases. The main rule is that the printable length, both the number of positions and the typographical length (when using variable-width fonts), can be essential, and you should be prepared to handle it. You ask a Unicode handling library function for the number of positions when you need it. You ask a UI typography library function for the typographical length if that is what you need, e.g. to shorten the string to fit into a field.
Religious freedom is the freedom to say that two plus two make five.
|
My remark was made mostly tongue-in-cheek (hence the smiley after it). Of course the length of the rendered text is of interest in many/most applications. It's just that, luckily, I don't have to worry about it because people who write the nitty-gritty of UI have taken care of it. For instance, in Windows, I can just call GetTextExtentPoint32[^] function to have the text measured.
However, this has nothing to do with UTF-8 vs UTF-16. I remain of the opinion that UTF-16 has no particular advantage compared with UTF-8. (If there are other readers of this conversation, please don't start a flame war now - this is just a personal opinion.) I see UTF-16 as a stepping stone from when the computing world needed to move away from ASCII, but in this day and age it has served its purpose and we can move on to something better.
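For instance, a minimal sketch of that call (assuming hdc is a valid device context with the target font already selected):
// Measure how much space the rendered string will take.
SIZE sz;
if (GetTextExtentPoint32W(hdc, L"Hello", 5, &sz))
{
    // sz.cx and sz.cy are the rendered width and height in logical units.
}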
Mircea
|
Mircea Neacsu wrote:
However, this has nothing to do with UTF-8 vs UTF-16.
That is certainly true.
Mircea Neacsu wrote:
I remain of the opinion that UTF-16 has no particular advantage when compared with UTF-8.
I am leaning towards agreeing with you.
Mostly, I am observing - and have been observing for 40+ years - that people strive for non-Einstein solutions: "Make it as simple as possible, but no simpler". People want to do it simpler! For years, I heard lots of people say that 32 bits is overkill, Unicode will never grow beyond the first plane, the BMP - there aren't anywhere close to 65,536 different characters! And for a number of years, they were right: Unicode did manage with the basic plane only.
That is when people started using 16-bit characters, although I am not sure that the name UTF-16 was known that early. With the BMP only, most simple(r than possible) developers thought it quite simple; a string of 16-bit characters was just like a string of 8-bit characters, only with more characters. (Look at the History section of Wikipedia: Unicode[^] - even the initial developers of Unicode argued the same!)
If it had ended up that way, it would have been significantly simpler: You can count the number of characters as easily in 16 bit as in 8 bit character code. You can index character 23 by string8[23] or string16[23]. In other words: I can fully understand why Windows NT (1993) and Java (1995) went for 16 bit characters. (At the time of Windows NT release, UTF-8 had been proposed, but was not yet accepted as a standard - anyway, you don't change the system character encoding from 16 bits fixed to n*8 bits a few weeks before the release of a new OS!)
As we all know now, the solution was simpler than possible. Several of my coworkers were highly surprised when BMP overflowed, but didn't worry: We are never going to encounter those characters in the entire lifetime of our software! I think that they for at least ten more years continued to access character 23 by string16[23]. I can understand them. Until we got emojis in other planes, they were essentially right.
But it was too simple a solution. When you were forced to handle multiple planes, and maybe at the same time discovered combining and non-spacing codes, the simplicity disappeared. You have all the same issues with UTF-8; it is not any worse with UTF-16, and in Western text the special cases occur rarely. Most of the time, UTF-16 is more straightforward, but you have to be prepared for the exceptions. With UTF-8, you can never relax; you handle variable-length characters all the time! (At least if you regularly write non-English text, which is the common case in most European countries.)
If UTF-8 didn't exist, I would be happy with sending UTF-16 memory strings straight to file. Having UTF-8 as an alternative in-memory format creates trouble; I want one single unambiguous string format. Now that Windows, Java and C# all use UTF-16, I am not going to start using UTF-8 in memory.
But I also want to have one single unambiguous file format. UTF-8 is established, UTF-16 is not. So UTF-8 wins. I am stressing: don't waste your time trying to process UTF-xxx yourself; use library functions. So when I read text from or write text to a file, I let library functions process the strings. Each format has its use.
After all, I guess I really disagree with you: If we start with a tabula rasa, but we are to select The One And Only Encoding, UTF-16 and UTF-8 are equally good. But that isn't the situation in memory: Windows and numerous other essential tools/subsystems have based themselves on UTF-16. Given that, using UTF-8 in my application strings would introduce a lot of complexities. So accepting the realities of life, my programs will continue to use UTF-16 strings.
Until, of course, I start working with an OS having UTF-8 as its system string encoding and languages/tools that use UTF-8 as their in-memory string encoding.
Religious freedom is the freedom to say that two plus two make five.
|
trønderen wrote:
After all, I guess I really disagree with you
The world would be too boring if we didn't have different opinions.
trønderen wrote:
I want one single unambiguous string format.
You aren't going to get it, or at least not in this lifetime. If you go to the Linux or Mac worlds, everything is UTF-8. In the Windows world it's UTF-16 with a sprinkle of UTF-8.
trønderen wrote:
But I also want to have one single unambiguous file format. UTF-8 is established, UTF-16 is not. So UTF-8 wins.
If I understand you correctly, you suggest having UTF-8 files converted to UTF-16 on entry, processed as UTF-16 inside the application, and converted back to UTF-8 on output. That would complicate things very much if you target different OSes. It would also be inefficient if your app doesn't require the UTF-16 parts of the OS (the ReadFile and WriteFile functions in Windows work with any encoding).
My strategy is almost a mirror image of that: everything is UTF-8 until it needs to call certain OS functions, when a thin wrapper converts all inputs to UTF-16 and all results back to UTF-8.
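A minimal sketch of such a wrapper (widen and OpenForReadU8 are illustrative names, not from any library):
#include <windows.h>
#include <string>

// Convert a UTF-8 string to UTF-16 at the OS boundary.
std::wstring widen(const std::string& utf8)
{
    if (utf8.empty()) return std::wstring();
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), nullptr, 0);
    std::wstring utf16(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), &utf16[0], len);
    return utf16;
}

// The rest of the program sees only UTF-8; the wide API does the real work.
HANDLE OpenForReadU8(const std::string& utf8Path)
{
    return CreateFileW(widen(utf8Path).c_str(), GENERIC_READ, FILE_SHARE_READ,
                       nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
}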
Mircea