|
Thanks for the response. The tool bar is declared/defined in the class of the main window of the
application. Here is how it is defined:
CToolBar toolBar;
I am wondering if the OnPaint routine of the main window needs to do something special to get
the tool bar drawn.
Bob
|
|
|
|
|
You should not have to do anything special in the OnPaint.
Give this a try:
if( !toolBar.Create(
        NULL,
        this,
        IDR_TOOLBAR,
        WS_CHILD | WS_VISIBLE | CBRS_TOP | CBRS_FLYBY | CBRS_SIZE_DYNAMIC
            | CBRS_TOOLTIPS | CBRS_HIDE_INPLACE | CBRS_GRIPPER ) ||
    !toolBar.LoadToolBar( IDR_TOOLBAR ) )
{
    TRACE0("Failed to create toolbar\n");
    return -1;
}
toolBar.SetButtonStyle( 0, TBBS_CHECKBOX );
toolBar.SetButtonStyle( 1, TBBS_CHECKBOX );
toolBar.SetButtonStyle( 2, TBBS_CHECKBOX );
toolBar.SetButtonStyle( 3, TBBS_CHECKBOX );
toolBar.SetButtonStyle( 4, TBBS_CHECKBOX );
toolBar.SetButtonStyle( 5, TBBS_CHECKBOX );
const UINT idArray[] = {
IDM_LINES, IDM_RECTANGLES, IDM_ELLIPSES,
IDM_ENLARGE, IDM_ORG, IDM_RESET
};
BOOL status3 = toolBar.SetButtons( idArray, 6 );
toolBar.UpdateWindow();
BOOL status4 = toolBar.ShowWindow( SW_SHOW );
toolBar.Invalidate();
this->Invalidate();
|
|
|
|
|
Thank you for the response. I tried your code and found that it did not compile, due to the fact that it calls Create with four arguments and Create takes at most three. I do not understand what the purpose of the first argument (NULL) is, so I took the NULL argument out of the call to Create and left the other three (this, IDR_TOOLBAR and the flags) in. After doing so, the code compiled, but when run it did not produce a toolbar.
I am thinking that the problem might be related to my resource file. Below is the relevant part of my resource file:
IDR_TOOLBAR BITMAP "TOOLBAR.BMP"
IDR_TOOLBAR TOOLBAR 16, 15
BEGIN
BUTTON IDM_LINES
BUTTON IDM_RECTANGLES
BUTTON IDM_ELLIPSES
SEPARATOR
BUTTON IDM_SHOWTB
BUTTON IDM_EXIT
END
Observe that the name of the bitmap and the name of the toolbar are the same. I believe it is supposed to be that way, right?
Any other ideas?
Bob
modified on Monday, June 22, 2009 5:30 PM
|
|
|
|
|
I am attempting to create a template that has a constructor that takes an array of items of type T, declared as follows:
template <typename T>
class foo {
public:
    foo(T* items) {
    }
};
Unfortunately, this code will not compile if T is a reference type, because you cannot have pointers to references. Therefore, I would like to create a template specialization that only declares this method if the template parameter T is NOT a reference type. If T is a reference type, I want the same method signature, but taking an array of the underlying non-reference type. As an example, if I knew that the template parameter's base type was 'int', I could create the following specializations:
template <>
class foo<int> {
public:
    foo(int* items);
};
template <>
class foo<int&> {
public:
    foo(int* items);
};
template <>
class foo<int*> {
public:
    foo(int** items);
};
template <>
class foo<int*&> {
public:
    foo(int** items);
};
However, I don't know what base type the template parameter is, so I would like to do the above specialization for any base type (for example, I used int, but I could have used string, unsigned, char, or any other class). Is there any way to do this? Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
Does this meet your needs? It compiles (and runs OK) with gcc 4.0.1, should compile with Visual C++ 7.1 and above, I believe. The Boost type traits[^] do the template specialisations for you, in the derivation of the constructor parameter type.
#include <boost/type_traits.hpp>
#include <iostream>
template <typename T>
class foo {
public:
foo(typename boost::add_pointer<typename boost::remove_reference<T>::type >::type items)
{
}
};
int main()
{
int aa;
foo<int> a(&aa);
foo<int&> b(&aa);
}
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
This is great advice... I think I am going to hold off on it for now and just delete those constructors so I don't have to install a third-party package, but I will keep this in mind for future work as this appears to be very useful information.
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
That part of Boost wouldn't need much installation - the type-traits are a header-only library, so don't need anything to be built.
Alternatively, with a decent standards-compliant compiler (like VC++ 7.1 and later), those type traits are easy enough to write, especially for the case you have:
template<class T>
struct add_pointer
{
    typedef T* type;
};
template<class T>
struct add_pointer<T&>
{
    typedef T* type;
};
I've just built and run that with VS2008 (admittedly only with simple types like int) and it seems fine.
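For what it's worth, the trait can also be checked at compile time. Here's a quick self-contained check (note: the static_assert and <type_traits> used here are C++11 features, included purely for illustration, even though the thread targets VC++ 7.1 and later):

```cpp
#include <cassert>
#include <type_traits>

// The same hand-rolled trait as above, repeated so this snippet is
// self-contained: the partial specialization strips a reference,
// then the typedef adds a pointer.
template<class T>
struct add_pointer
{
    typedef T* type;
};

template<class T>
struct add_pointer<T&>
{
    typedef T* type;
};

// C++11 compile-time checks, for illustration only:
static_assert(std::is_same<add_pointer<int>::type,  int*>::value,  "int  -> int*");
static_assert(std::is_same<add_pointer<int&>::type, int*>::value,  "int& -> int*");
static_assert(std::is_same<add_pointer<int*>::type, int**>::value, "int* -> int**");
```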
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Have you looked at partial template specialization? You could make a partial specialization for reference types and then a general template for the rest of them.
|
|
|
|
|
Do you know the syntax to do such a thing? That is exactly what I am attempting to do, but have no idea as to the syntax. Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
Off the top of my head (untested), something like:
template <typename T>
class foo<T&>
{
...
};
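For completeness, here's a self-contained sketch combining the primary template with that partial specialization (the member bodies and the data() accessor are placeholders added for illustration, not part of the original question):

```cpp
#include <cassert>

// Primary template: T is a non-reference type, so T* is legal.
template <typename T>
class foo {
public:
    explicit foo(T* items) : items_(items) {}
    T* data() const { return items_; }
private:
    T* items_;
};

// Partial specialization for reference types: store pointers to the
// underlying (non-reference) type instead.
template <typename T>
class foo<T&> {
public:
    explicit foo(T* items) : items_(items) {}
    T* data() const { return items_; }
private:
    T* items_;
};
```

With this in place, both foo<int> and foo<int&> accept an int*.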
|
|
|
|
|
Hi All
I'm developing my own CString class, with the idea to make it "much" faster than the standard CString, easily upgradeable, and to include some non-standard parsing functions I often use in my work; those are much better as part of the CFString object itself than as separate functions.
And indeed I have some noticeable achievements for the basic functions my CFString uses:
My copy() is 35% faster than strcpy()!
My length() is 30% faster than strlen().
BUT note that the optimizer often pre-calculates strlen() calls and replaces them with the respective numbers!
For example, int i = strlen("abc") will eventually generate mov EAX, 3 instead of generating the asm code of strlen() itself; that happens for hardcoded strings. So in the cases when strlen() isn't replaced by a number, my function is 30% faster.
My compare() is 20% faster than strcmp().
I used the inline assembly option of the Visual C++ compiler, the "__asm" keyword, to achieve this.
I also developed an assignment mechanism, so when I write str1 = str2 the "operator =" actually doesn't copy the data from str2 to str1; the copy occurs ONLY in specific cases, when it's needed.
And that mechanism provides about 35 - 40% faster assignment between CFString objects in comparison to CString assignment!
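The deferred-copy mechanism described above is essentially reference-counted copy-on-write. A minimal sketch of the idea (illustrative only - this is not the actual CFString code, and it ignores thread safety):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Minimal copy-on-write string sketch (illustrative only; not thread-safe).
class CowString {
public:
    explicit CowString(const char* s) : rep_(new Rep(s)) {}
    CowString(const CowString& other) : rep_(other.rep_) { ++rep_->refs; }
    CowString& operator=(const CowString& other) {
        if (rep_ != other.rep_) {          // share the buffer, no copy yet
            release();
            rep_ = other.rep_;
            ++rep_->refs;
        }
        return *this;
    }
    ~CowString() { release(); }

    const char* c_str() const { return rep_->data; }

    // Any mutating operation copies first, but only if the buffer is shared.
    void setAt(std::size_t i, char c) {
        if (rep_->refs > 1) {              // the copy happens only when needed
            Rep* own = new Rep(rep_->data);
            --rep_->refs;
            rep_ = own;
        }
        rep_->data[i] = c;
    }

private:
    struct Rep {
        explicit Rep(const char* s)
            : refs(1), data(new char[std::strlen(s) + 1]) {
            std::strcpy(data, s);
        }
        ~Rep() { delete[] data; }
        int refs;
        char* data;
    };
    void release() {
        if (--rep_->refs == 0) delete rep_;
    }
    Rep* rep_;
};
```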
So far so good! But in the case of FindOneOf, the standard CString beat me badly! CString::FindOneOf is almost twice as fast as my CFString::FindOneOf, and my FindOneOf IS written in assembly too. I did everything I could to tweak it, but I only reduced the time from 33000ms to 28000ms, while CString::FindOneOf does the job in 18000ms. My function is supposed to be faster; now I'll be happy just to equal the score of CString::FindOneOf!
This is the test code I use:
CString/CFString str1("abcdefghijklmnoprs0987654321");
int index = 0;
int start = GetTickCount();
for(int pos = 0; pos < 100000000; pos++)
{
    index = str1.FindOneOf("1234567890");
}
int time = GetTickCount() - start;
CString strTime;
strTime.Format("%d, %d", time, index);
m_editResult.SetWindowText(strTime);
I looked at the disassembly of CString::FindOneOf but couldn't find a single loop; I didn't understand anything! There are a lot of stack operations (PUSH, POP) and a lot of CALLs, which are supposed to be slow. I have no idea how this function can beat mine!!!
This is the algorithm I use, written in C, although the actual function is written in assembly and, I believe, optimised:
const char* pstrTBuffer = m_pstrBuffer;
const char* pstrTSeek = pstrSeek;
do
{
    while(*pstrTSeek)
    {
        if(*pstrTBuffer == *pstrTSeek++)
        {
            return pstrTBuffer - m_pstrBuffer;
        }
    }
    pstrTSeek = pstrSeek;
}
while(*++pstrTBuffer);
return -1;
In case the code isn't clear enough: the idea is as simple as possible - compare each character in the main string m_pstrBuffer with each character in the charset pstrSeek; if there is a match, return its index, if not, return -1.
I assumed that the problem isn't in my coding technique; it must be in the algorithm I use!!!
Does somebody know a better algorithm to implement FindOneOf???
I will appreciate any help - thank you!!!
Sorry for the prolonged question, but I wanted to be as clear as I could in my description.
modified on Monday, June 22, 2009 10:20 AM
|
|
|
|
|
Member 4399131 wrote: I assumed that the problem isn't in my coding technique, it must be generally in the algorithm I use!!!
Yep.
Member 4399131 wrote: Does somebody know a better algorithm to implement FindOneOf???
Looking at their code (strpbrk - actually implemented using strspn), Microsoft do
They build a table (it's only 256 bits == 32 bytes) from the set of characters you're looking for and do a table lookup for each character of the string you're searching in. That means that their algorithm is O(n) (n=search string length) rather than O(n*m) (n=search string length, m=number of chars you're looking for).
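The table-based approach is straightforward to sketch in portable C/C++ (this is an illustration of the idea, not Microsoft's actual implementation, which is in assembly):

```cpp
#include <cassert>

// Sketch of the strpbrk-style algorithm: build a 256-bit membership table
// (32 bytes) from the charset, then scan the string with one table lookup
// per character - O(n + m) instead of O(n * m).
int FindOneOfTable(const char* str, const char* charset)
{
    unsigned char table[32] = {0};          // 256 bits == 32 bytes

    // Set one bit per character present in the charset.
    for (const char* p = charset; *p; ++p) {
        unsigned char c = (unsigned char)*p;
        table[c >> 3] |= (unsigned char)(1u << (c & 7));
    }

    // Scan the string; one lookup per character.
    for (const char* p = str; *p; ++p) {
        unsigned char c = (unsigned char)*p;
        if (table[c >> 3] & (1u << (c & 7)))
            return (int)(p - str);          // index of the first match
    }
    return -1;                              // no character of the set found
}
```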
Oh, and they implement that in assembly language (different implementations for x86 and x64). If I were you, I'd do myself a favour and use the MS C run-time function in this case...
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Check this out.
It seems that the test code wasn't appropriate:
CString str2[] = { "1234567890", "qwtyupzxghiocervbasdjklfnm", "qwo123ertyui45", "qwzxy1uldfghjk2bnm3ertcviopas" };
int index = 0;
CFString str1("abcdefghijklmnoprstuzxwvqyqwertyuio0987654321");
int start = GetTickCount();
for(int pos = 0; pos < 100000000; pos++)
{
    index = str1.FindOneOf(str2[pos % 4]);
}
int time = GetTickCount() - start;
CString strTime;
strTime.Format("%d, %d", time, index);
m_editResult.SetWindowText(strTime);
In this example, which includes a number of different charsets, my function finishes in 19000ms and CString::FindOneOf in 25450ms, which is 6450ms slower; in other words, my function is 25% faster. Do you think that this test code is more accurate?
And about the MS lookup table - I looked at it too, and I wonder if that's quite legal. According to my assembly book, the BTS instruction sets a bit in the target at the position given by its operand, BUT THE MAX POSITION IS 31!!! In this case they set bits in a larger memory block by using positions up to 255. I wonder how this works - it obviously does?! Maybe my assembly book is obsolete.
|
|
|
|
|
Member 4399131 wrote: It seems the the test code wasn't appropriate
Member 4399131 wrote: Do you think that this test code is more accurate?
The only really accurate test set would exercise all possible inputs... which is obviously infeasible.
What's more important is to understand why your code is quicker with some test sets and not with others and to determine what the optimal approach actually is (for example, if you can work out roughly where your code becomes more efficient than Microsoft's, you could switch between the two implementations dynamically).
Member 4399131 wrote: And about the MS Lookup table I look at it too and I wonder if that's quite legal, I mean that according to my assembly book the instruction BTS sets a bit in the target pointed by its position BUT MAX POSITION IS 31!!! in this case they set bits in a larger memory block by setting positions up to 255 - I wonder how this works, it does obviously?! Maybe my assembly book is obsolete
My x86 reference (pukka Intel reference manual) says that if you use an immediate operand, you're limited to 0-31. If you use a register operand, the limits disappear (well, they're -2^31 to 2^31-1, which is near as damn it unlimited)
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
|
Member 4399131 wrote: I seeeee! this detail wasn't written in my source - that's a reason to find some more complete one.
I got mine from Intel[^] - downloadable PDFs these days. When I got them (close to 10 years ago!) you could order free paper manuals and they'd dispatch them to you, again free. I guess the cost of that was an easy target when reducing costs, as paper copies of the manuals now cost money (and lots of it!).
I see (by downloading the PDFs) that BTS actually gives you offsets of up to +/- 2^63 bits on the x64 platform. Should be enough, even for Unicode lookup tables
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Wow, thank you man! For some time I was wondering where I could get such a detailed description of the architecture and each instruction (I didn't even know how to search for it), and now, thanks to you, I finally have it!!! - MY REGARDS
By the way, I didn't bother ordering the paper copies - I downloaded the PDFs directly.
|
|
|
|
|
I did it!
I used the lookup table algorithm, implemented with my own coding technique (which is nothing special, just a few tweaks). And now I have a FindOneOf that is 15 - 25% faster than CString::FindOneOf. I did many tests and it doesn't drop below 15%.
I couldn't have made it without your help! THANK YOU!!
It would be even faster, but the BT and BTS instructions, as well as SHR, SHL, ROR, ROL (and others), are generally slower than MOV, ADD and so on!
I noticed that in my not very long assembly practice. I think that is because they are all bitwise operations.
|
|
|
|
|
Without knowing the logic behind the standard implementation, it is impossible to know why your algorithm is being beaten. Some possibilities:
- Perhaps the standard string stores data about specific searches so that, in your loop, it loads known results instead of recalculating them (i.e., you are searching the same string for the same charset, so why not cache the result?).
- Perhaps it realized that your set of input characters is consecutive, and instead of iterating through them it tests each character against a range.
- Perhaps the standard implementation starts from the end of the string and searches forward.
- Perhaps the compiler realized that the standard call to FindOneOf in your loop could be optimized away, since you are not using any values of that method call except the final one, but your inline assembly could not be optimized away.
The moral is, without doing more testing I have no idea why your algorithm is slower (or whether your test method is inappropriate). I would only recommend additional test cases, or making one or more of the modifications above that I said were possibilities of what the standard code does. You could also try reordering the loops, but I doubt you are going to see significant timing differences resulting from such a change.
[BEGIN EDIT]
The lookup table idea is a great one for ASCII strings... I was only thinking in terms of unicode.
[END EDIT]
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
Skippums wrote: The lookup table idea is a great one for ASCII strings... I was only thinking in terms of unicode.
The Microsoft Unicode version of that function (wcspbrk) doesn't use a lookup table - it uses nested loops, same as the OP, in straight C rather than assembly language - so Unicode would probably yield comparable performance.
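That nested-loop shape is essentially the following (an illustrative sketch, not Microsoft's actual wcspbrk source):

```cpp
#include <cassert>

// Nested-loop "find first of" over wide strings: for each character of the
// string, scan the whole charset. O(n * m), same shape as the OP's algorithm.
int FindOneOfWide(const wchar_t* str, const wchar_t* charset)
{
    for (const wchar_t* p = str; *p; ++p)
        for (const wchar_t* q = charset; *q; ++q)
            if (*p == *q)
                return (int)(p - str);   // index of the first match
    return -1;                           // no match found
}
```

The nested loop avoids the table entirely, which matters for Unicode: a 65536-bit membership table would be 8KB per call rather than 32 bytes.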
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
"you are searching the same string for the same charset, so why not cache the result?"
- This is a good idea, but I think it will only make a major difference in my test code, which tests the same string and the same charset, and there aren't many cases like that in the real world. But I'll try it - I don't have many options anyway!
"Perhaps it realized that your set of input characters is consecutive, and instead of iterating through them it tests each character against a range."
- I've tested it with mixed characters and letters - it's the same story, so I don't think that's the case - but I appreciate the idea!
"Perhaps the standard implementation starts from the end of the string and searches forward."
- I thought about that; it's quite possible. Why haven't I tested it with a mirrored string already?
"Perhaps the compiler realized that the standard call to FindOneOf in your loop could be optimized away since you are not using any values of that method call except the final one"
- Well, I looked at the disassembly, so I can tell that's not the case - for each iteration of the test loop it calls CString::FindOneOf, which then calls a bunch of other functions; somewhere in those functions I came across a call to "EnterCriticalSection", which blew my mind... ?!
Otherwise you are quite right that standard functions are often optimized (as I mentioned about strlen) and the self-coded ones are not, especially those with __asm in them!
I think the lookup table has potential!
|
|
|
|
|
I think Stuart's advice to use the C runtime in this case is the best alternative.
An algorithm can be optimized for different purposes and can also be better or worse in some scenarios. I guess the standard C runtime implementation is rather generic and predictable.
Try out the following algorithm and you'll see what I mean.
It is twice as fast as CString::FindOneOf() with the strings you're using for testing and measurement when it's release-built. On the other hand, adding a 'u' in the search string will make the algorithm 50% slower than CString::FindOneOf() . Surely you can optimize it further, but it will still have its weaknesses.
The key in the algorithm below is to do as little as possible with each char in the string to be searched.
int FindOneOf( const char* pString, const char* pSearch )
{
    register char cMask = -1;           // bits on which all charset chars agree
    register char cPattern = *pSearch;  // OR of all charset chars; on masked
                                        // bits this equals the common value
    register int nPos;
    int nSPos;
    int nRet = -1;

    // Derive the mask and pattern from the charset.
    for( nPos = 1; pSearch[nPos]; ++nPos )
    {
        cMask &= ~(pSearch[nPos - 1] ^ pSearch[nPos]);
        cPattern |= pSearch[nPos];
    }

    for( nPos = 0; pString[nPos]; ++nPos )
    {
        // Cheap pre-filter: only do the full charset scan when the candidate
        // character matches the pattern on all masked bits.
        if( (~(pString[nPos] ^ cPattern) & cMask) == cMask )
        {
            for( nSPos = 0; pSearch[nSPos]; ++nSPos )
            {
                if( pString[nPos] == pSearch[nSPos] )
                {
                    nRet = nPos;
                    break;
                }
            }
            if( nRet != -1 )
            {
                break;
            }
        }
    }
    return nRet;
}
Beware! The above is just an example at the top of my head!
"It's supposed to be hard, otherwise anybody could do it!" - selfquote "High speed never compensates for wrong direction!" - unknown
|
|
|
|
|
You actually bothered to write and test this code to help me - thank you so much!
I'll study it carefully!
|
|
|
|
|
Member 4399131 wrote: You actually bother yourself to write and test this code to help me
Well, yesterday must have been one of my good days and I guess the Fairy of Kindness must have hit my forehead with her wand since I got a bump...
Seriously, the algorithm is not very good in my opinion. Given certain preconditions it is actually faster than CString::FindOneOf() , but it can also be awfully slow. The whole idea of the algorithm I provided is to find at least one bit that is common to all characters being searched for but is not very common in the string being searched. In practice this means that searching for a digit (ASCII 0x30 - 0x39) in a string that only contains letters (ASCII >= 0x41) is quite fast, but searching for a 'b' in a string of small letters will be rather slow.
My point was that an algorithm can be optimized for certain preconditions and may prove faster than a generic one if those preconditions are satisfied, but may be much slower than the generic one if they are not satisfied.
That's why I think Stuart's advice is the best one, i.e. to go with the C runtime implementation. It is likely implemented to give the best overall performance with an algorithm far better than mine.
"It's supposed to be hard, otherwise anybody could do it!" - selfquote "High speed never compensates for wrong direction!" - unknown
|
|
|
|
|
Well, I think I beat the standard runtime library!!!
I used the lookup table algorithm of strspn, but I implemented it with my own tweaks. As a result I got a FindOneOf 15 - 25% faster than the standard one!
And its idea is pretty close to yours:
in a 256-bit-wide bit table (which starts all zeroed) they set a bit for each ASCII symbol found in the charset, using the character's ASCII code as the index into the bit table. Then they just test for set bits, using the target string's characters as indices into the bit table.
modified on Wednesday, June 24, 2009 7:53 AM
|
|
|
|
|