Use PSWIZB_DISABLEDFINISH in SetWizardButtons.
You need to Google first if you have "It's urgent, please" mentioned in your question.
_AnShUmAn_
---
Hi,
I am using one of the registry classes given on CP.
The keys are defined in the .h file like:
enum Keys
{
    classesRoot   = HKEY_CLASSES_ROOT,
    currentUser   = HKEY_CURRENT_USER,
    localMachine  = HKEY_LOCAL_MACHINE,
    currentConfig = HKEY_CURRENT_CONFIG,
    users         = HKEY_USERS,
    performanceData = HKEY_PERFORMANCE_DATA,
    dynData       = HKEY_DYN_DATA
};
When I compile this in VC6.0 it compiles with no errors.
But when I compile the same code in Visual Studio 2003, it gives an error: "constant expression is not integral".
If I cast like "classesRoot = (int)HKEY_CLASSES_ROOT", it throws a warning such as "pointer truncation from HKEY to int".
How can I avoid the error? Please suggest.
Regards,
Sunil Kumar
---
Try converting to ULONG_PTR or maybe LONG_PTR instead of int. Although making an enum out of HKEYs seems a tiny bit ugly to me, that might just be my personal taste. Good luck.
> The problem with computers is that they do what you tell them to do and not what you want them to do. <
> Life: great graphics, but the gameplay sux. <
---
Thanks mate. (LONG_PTR) works.
Regards,
Sunil Kumar
---
Hi all,
I am using the wcstombs function to convert my const wchar_t* value to char*, but it shows "??" marks instead of the Unicode characters.
I tried WideCharToMultiByte like this: WideCharToMultiByte(CP_ACP, 0, Text, 0, Chartext, nSize, NULL, NULL); where int nSize = 0; but it also does not convert the value.
Can anyone help me figure out where I am going wrong?
Thanks,
Rakesh.
---
Which character set are you trying to convert from?
A wchar_t is a Unicode character; a char is ANSI.
---
Unicode characters.
I use Chinese, Japanese, Arabic, and French.
Thanks,
Rakesh
---
Hm, just looking at your code line and reading this in the documentation: "If cchWideChar is set to 0, the function fails." I'd say use -1 as your cchWideChar (the fourth parameter).
Souldrift
---
Actually, I tried -1 first; it didn't work, hence I tried 0.
Thanks,
Rakesh
---
Could you post your actual piece of code?
---
Hi,
Here is my piece of code:
const WCHAR* pText = "hello";
char * pCharText;
WideCharToMultiByte(CP_ACP, 0, pText, -1,(LPSTR) pCharText, nSize, NULL, NULL);
Thanks,
Rakesh.
---
That cannot be all your code. What's nSize? And pCharText wasn't initialized, so this should be a runtime error.
Anyway, try this:
int erg=WideCharToMultiByte(CP_ACP, 0, pText, -1, NULL, 0, 0, 0);
char* result = new char[erg];
erg=WideCharToMultiByte(CP_ACP, 0, pText, -1, result, erg, 0, 0);
Souldrift
---
Hi Souldrift,
I tried your code; it is still showing the same "??" marks rather than Japanese text.
Please tell me where I am going wrong.
I am giving the code once again for your perusal:
const WCHAR* = L"sss";
int erg=WideCharToMultiByte(CP_ACP, 0, pText, -1, NULL, 0, 0, 0); // first we ask for the memory needed
char* charText = new char[erg];
erg = WideCharToMultiByte(CP_ACP, 0, pText, -1, charText, erg, 0, 0); // then we convert
Thanks,
Rakesh
---
Your const WCHAR* doesn't have a variable name. Doesn't your compiler complain about that?
I tried your code
const WCHAR* pText = L"sss";
int erg=WideCharToMultiByte(CP_ACP, 0, pText, -1, NULL, 0, 0, 0);
char* charText = new char[erg];
erg = WideCharToMultiByte(CP_ACP, 0, pText, -1, charText, erg, 0, 0);
It works very nicely.
Souldrift
---
If the wide character cannot be represented in the chosen code page (in your case CP_ACP, i.e. the system default Windows ANSI code page), then it is replaced by a default character; see the WideCharToMultiByte documentation.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
---
I have to agree. If you're developing on an American/English-installed OS, then CP_ACP will be Windows-1252; you want to change that to the Japanese or other country-specific code page.
'?' is 0x3F, if I remember right.
I wrote a little app to convert Unicode to multibyte supporting the code pages we need, and if the wrong code page was chosen, the character would be displayed as "??".
Basically I replaced CP_ACP with 1250, 1251, 1252, etc.
Seemed to do the trick.
---
Rakesh5 wrote: ...but its showing ?? marks instead of unicode characters..
What is?
"Old age is like a bank account. You withdraw later in life what you have deposited along the way." - Unknown
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
---
First of all, using CP_ACP means you don't really have control of the target code page; it depends on the user's settings. Sometimes that is exactly what you want, sometimes it is not.
Anyway, if we assume that your system locale is Windows CP 1252 and you have some Greek characters in Text, your code will try to convert the Greek characters to CP1252, and because there is no mapping between the two scripts, you are going to get replacement characters (?) instead.
To convert Greek text from a const wchar_t* value to a char* value, you'll need to use code page 1253 (Greek) instead of CP_ACP.
I hope it makes sense.
---
This question is related to my previous question. As I mentioned before, my project contains many DLLs, and I am facing some ambiguity due to two DLLs which contain the same namespace names. So I want to change the paths of these two DLLs, but I don't know how. Is there any option in Visual Studio C++ so that I can easily solve my problem?
http://nnhamane.googlepages.com/
---
LoadLibrary allows you to specify the full path to the dll.
There is sufficient light for those who desire to see, and there is sufficient darkness for those of a contrary disposition.
Blaise Pascal
---
I searched the net for more about the LoadLibrary function but haven't found anything. Can you please tell me how to use it in a Windows Forms application?
http://nnhamane.googlepages.com/
---
Dear All,
I am having a problem converting ANSI to Unicode; my code is not giving the desired result. Please have a look at the code below and advise me. When I compare the two strings, they are not equal after the conversion. Please help me. Many thanks in advance.
char szData[2] ;
szData[0] = '♠';
szData[1] = 0;
wchar_t wszData1[] = L"♠";
wchar_t wszData2[2];
MultiByteToWideChar(CP_ACP, 0, szData, -1, wszData2, sizeof(wszData2) / sizeof(wchar_t));
if (wcscmp(wszData1, wszData2) == 0)
MessageBox("Strings are equal!");
else
MessageBox("Strings are not equal.");
---
bhanu_reddy09 wrote: szData[0] = '♠';
How could it work?
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
---
szData[0] = '♠';
wchar_t wszData1[] = L"♠"; These probably aren't doing what you're expecting. You should only have ASCII characters in source code, and use hex escapes for any characters outside the ASCII range. Depending on what that character is and what your system's code page is, it may not even be possible to store the character in a non-Unicode string.
--Mike--
Dunder-Mifflin, this is Pam.