|
Hello, everyone.
I've written a function that finds the intersection of two given linked lists sorted in increasing order.
The code runs fine now, but I ran into a problem during compiling and I don't know the reason,
so I hope someone knows what's causing it. Thank you!
The problem is: if I put /* dummy.next = NULL; */ in front of /* struct node* tail = &dummy; */,
there are errors; otherwise the program runs fine. The function I wrote follows.
Thank you all.
// Create a new list with the intersection of two given lists sorted in increasing order.
struct node* SortedIntersect(struct node* head1, struct node* head2)
{
    struct node dummy;
    struct node* tail = &dummy;
    dummy.next = NULL;

    while (head1 != NULL && head2 != NULL)
    {
        if (head1->data == head2->data)
        {
            Push(&(tail->next), head1->data);
            tail = tail->next;
            head1 = head1->next;
            head2 = head2->next;
        }
        else if (head1->data > head2->data)
        {
            head2 = head2->next;
        }
        else
        {
            head1 = head1->next;
        }
    }
    return dummy.next;
}
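The Push helper isn't shown in the post. A typical implementation might look like the following; the name and signature match the Stanford linked-list exercises this code appears to be based on, so this is an assumption, not the poster's code:

```cpp
#include <cassert>
#include <cstddef>

struct node {
    int data;
    node* next;
};

// Hypothetical Push (assumed, not from the original post):
// allocates a new node holding `data` and links it in at *headRef,
// so the new node becomes the head of that sublist.
void Push(node** headRef, int data)
{
    node* newNode = new node;
    newNode->data = data;
    newNode->next = *headRef;
    *headRef = newNode;
}
```

With a Push like this, SortedIntersect appends each common value by pushing onto tail->next and then advancing tail.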
modified on Wednesday, September 23, 2009 2:35 PM
|
|
|
|
|
Is this C or C++?
In C (before the C99 standard), all variable declarations must be written at the beginning of a block, before any other statements. Putting dummy.next = NULL; above the declaration of tail places a statement before a declaration, which is an error in C89/C90.
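A minimal illustration of that rule (the function name is made up for the example): the ordering below is the one C89 accepts, declarations first, then statements; swapping the tail declaration and the dummy.next assignment would move a statement above a declaration and fail to compile as C89:

```cpp
#include <cstddef>

struct node { int data; node* next; };

node* make_empty()
{
    node dummy;               // declarations first...
    node* tail = &dummy;      // ...all of them...
    dummy.next = NULL;        // ...then statements (the C89 rule)
    (void)tail;               // silence unused-variable warnings
    return dummy.next;        // empty list: returns NULL
}
```

C99 and C++ both relax this and allow declarations to be mixed with statements, which is why the same code may compile in one mode and not another.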
|
|
|
|
|
I am attempting to write some atomicInc functions that take 16 or 32 bit signed arguments, and call the appropriate _InterlockedIncrement intrinsic function when compiled in Visual Studio 2008. However, it appears the function _InterlockedIncrement takes a 64-bit integer on my system (Windows XP SP2 64-bit Pro), even though the documentation states that _InterlockedIncrement is the 32-bit version of the function call.
My question is, what intrinsic should I use to atomically increment a 32-bit value? The following is producing an error:
#include <intrin.h>
#pragma intrinsic(_InterlockedIncrement, _InterlockedIncrement16)

__int16 atomicInc(__int16 *val) {
    return _InterlockedIncrement16(val);
}

__int32 atomicInc(__int32 *val) {
    return _InterlockedIncrement(val);
}
Any help is greatly appreciated. Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
modified on Tuesday, September 22, 2009 5:37 PM
|
|
|
|
|
A failing on MS's part, I think.
http://msdn.microsoft.com/en-us/library/29dh1w7z(VS.80).aspx[^]
__int32 == int, and int != long (as far as the type system is concerned).
They should have made __int32 == long.
You have two choices:
__int32 atomicInc(__int32 *val) {
    return (__int32)_InterlockedIncrement((long*)val);
}
or
__int32 atomicInc(long *val) {
    return (__int32)_InterlockedIncrement(val);
}
...cmk
The idea that I can be presented with a problem, set out to logically solve it with the tools at hand, and wind up with a program that could not be legally used because someone else followed the same logical steps some years ago and filed for a patent on it is horrifying.
- John Carmack
|
|
|
|
|
cmk wrote: __int32 atomicInc(__int32 *val) {
    return (__int32)_InterlockedIncrement((long*)val);
}
Do __int32 and __int64 variables have the same alignment requirements in memory? I was under the impression that an __int64 must be 8-byte aligned, whereas an __int32 only needs 4-byte alignment. Therefore, wouldn't I have a 50% chance of a runtime error when attempting to dereference the __int64*? And will converting an __int32* to an __int64* always guarantee the bits from the original value are in the least-significant position, or does that depend on hardware endianness? Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
Not sure I understand your question; how did __int64 enter the discussion?
_InterlockedIncrement is for 32-bit values only (long).
A long is 4 bytes on both 32-bit and 64-bit Windows.
...cmk
The idea that I can be presented with a problem, set out to logically solve it with the tools at hand, and wind up with a program that could not be legally used because someone else followed the same logical steps some years ago and filed for a patent on it is horrifying.
- John Carmack
|
|
|
|
|
I didn't realize that type int is logically equivalent to type long... I thought a long was 64 bits (as you can guess, I always use the __intN types for integers so I know how many bits I have to work with).
So if I understand correctly, long and int are both 32-bit signed integer types and both built into ANSI C as keywords, but the compiler can't resolve this equivalence at compile time? I'm confused... why can't the compiler figure out the equivalence without explicit type conversions? Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
modified on Tuesday, September 22, 2009 5:21 PM
|
|
|
|
|
A float is also 4 bytes. int and long are different types - period; it doesn't matter that they are both integer types.
http://msdn.microsoft.com/en-us/library/s3f49ktz(VS.80).aspx[^]
...cmk
The idea that I can be presented with a problem, set out to logically solve it with the tools at hand, and wind up with a program that could not be legally used because someone else followed the same logical steps some years ago and filed for a patent on it is horrifying.
- John Carmack
|
|
|
|
|
Jeff,
Skippums wrote: I am attempting to write some atomicInc functions that take 16 or 32 bit signed arguments, and call the appropriate _InterlockedIncrement intrinsic function when compiled in Visual Studio 2008.
This statement suggests a misunderstanding of atomicity. It is probably true that the compiler will optimize the functions above directly into calls to the _InterlockedX functions, but that means your atomic operations now depend on what the optimizer happens to do.
You need to call the Interlocked functions directly. Do not call them through a proxy function.
Best Wishes,
-David Delaune
|
|
|
|
|
I don't understand why, if not actually placed inline, the new code runs the risk of not being thread-safe. Can you explain this? As far as I can tell, the value will still be modified and returned by value atomically. Therefore, no matter what the optimizer does, I still always get the correct value returned as well as the correct value set. That seems logical to me, unless the compiler can optimize the intrinsic to return by reference, which would mean the intrinsic is incorrectly implemented. Please let me know how this could possibly not be thread-safe.
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
Jeff,
A context switch[^] can occur at the top of your atomicInc function. Let me go into detail:
For example, take this simple program:
volatile long m_Lock;

__int32 atomicInc(volatile long * val)
{
    return _InterlockedIncrement(val);
}

int _tmain(int argc, _TCHAR* argv[])
{
    atomicInc(&m_Lock);
    return 0;
}
Let's compile with /FAs and inspect the assembler output:
; 13  :     atomicInc(&m_Lock);
    push  OFFSET ?m_Lock@@3JC       ; m_Lock
    call  ?atomicInc@@YAHPCJ@Z      ; atomicInc
    add   esp, 4

?atomicInc@@YAHPCJ@Z PROC           ; atomicInc, COMDAT
; 7   : {
    push  ebp
    mov   ebp, esp
    sub   esp, 64                   ; 00000040H
    push  ebx
    push  esi
    push  edi
; 8   :     return _InterlockedIncrement(val);
    mov   eax, DWORD PTR _val$[ebp]
    mov   ecx, 1
    lock xadd DWORD PTR [eax], ecx
    inc   ecx
    mov   eax, ecx
; 9   : }
    pop   edi
    pop   esi
    pop   ebx
    mov   esp, ebp
    pop   ebp
    ret   0
The _InterlockedIncrement functions should not be wrapped. By wrapping them you are potentially losing atomicity. You should hope that your compiler optimizes your code and removes the atomicInc function call (it probably will in Release mode). But keep in mind that your code is not atomic at all. Here is what I am saying in layman's terms:
1.) The code you presented is not atomic.
2.) The compiler may optimize away the wrapper and fix the bug you have created.
Good Luck,
-David Delaune
|
|
|
|
|
If I understand correctly, you are suggesting the following sequence of events:
1) I call my wrapper method in thread1, passing &m_Lock, but somewhere prior to executing the atomic increment there is a context switch to thread2.
2) Something in thread2 modifies the value of m_Lock, then control switches back to thread1.
3) thread1 then increments the new value of m_Lock.
Unless I am missing something, the critical section of these operations is still atomic. My method doesn't know what the value of m_Lock is, so whether I increment it before or after thread2's modification is inconsequential. The only thing that could break this would be if &m_Lock is volatile, which could break the code even if calling the intrinsic directly.
Sure, my function won't necessarily be called and return without a context switch, but as far as I can tell that doesn't affect the correctness of the code. Is there something else I am still missing? Thanks for your help thus far,
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
I'm coding an MFC ActiveX control that gets raw data and appends a header. So far I've got it displaying the image, but there are elements of green in what should be a greyscale image. I'm using an RGB header and assume that the image has shades of grey beyond those allowed in a 256-colour image. So how do I get the green to display as shades of grey? I haven't been able to find a method that creates a greyscale image, which is why I'm using RGB. From what I can see there is none, so I need to add a palette to the image, which I tried to do, but I can't even get the shades of green to change to a different colour.
Here is my code if anyone can help me out it would be greatly appreciated.
CClientDC dc(this);
BITMAPINFO bmi;
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = 208;
bmi.bmiHeader.biHeight = 208;
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 8;
bmi.bmiHeader.biCompression = BI_RGB;
bmi.bmiHeader.biSizeImage = 0;
bmi.bmiHeader.biXPelsPerMeter = 0;
bmi.bmiHeader.biYPelsPerMeter = 0;
bmi.bmiHeader.biClrUsed = 0;
bmi.bmiHeader.biClrImportant = 0;

LPPALETTEENTRY pPalEntry = NULL;
RGBQUAD *pRGB = NULL;
BOOL ret = TRUE;
CPalette palette;
int npalColors = palette.GetEntryCount();
pPalEntry = new PALETTEENTRY[npalColors];
pRGB = new RGBQUAD[npalColors];
palette.GetPaletteEntries(0, npalColors, pPalEntry);
for (int i = 0; i < npalColors; i++)
{
    pRGB[i].rgbRed = pPalEntry[i].peRed;
    pRGB[i].rgbGreen = pPalEntry[i].peRed;
    pRGB[i].rgbBlue = pPalEntry[i].peRed;
    pRGB[i].rgbReserved = 2;
}

CBitmap bitmap;
bitmap.CreateCompatibleBitmap(&dc, 208, 208);
::SetDIBits(dc.m_hDC, bitmap, 0, 208, m_byImg2, &bmi, DIB_RGB_COLORS);
CDC dcMemory;
dcMemory.CreateCompatibleDC(&dc);
CBitmap * pOldBitmap = dcMemory.SelectObject(&bitmap);
SetDIBColorTable(dcMemory, 0, npalColors, pRGB);
dc.StretchBlt(0, 0, m_nW, m_nH, &dcMemory, 0, 0, 208, 208, SRCCOPY);
|
|
|
|
|
I think you just want to change the green colour to grey, is that correct?
If yes, you need to change the green entries in the colour table to grey; you do not need to touch the other data.
|
|
|
|
|
Don't have VS installed, so I can't have a poke around with your code, but the immediate thing that springs to mind is:
Why don't you just desaturate the image pixel by pixel?
I.e., convert the RGB value of each pixel to HSL, change the S channel to 0.0, and convert back to RGB - voilà! One monochrome picture.
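A sketch of that suggestion (the function name is made up for the example): in the HSL model, setting S to 0 makes R = G = B = L, and L is (max + min) / 2 of the original channels, so the full HSL round trip collapses to one line per pixel:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Desaturate one pixel. Converting RGB -> HSL, forcing S = 0, and
// converting back yields a grey whose level equals the lightness
// channel, L = (max + min) / 2 of the original R, G, B values.
std::uint8_t desaturate(std::uint8_t r, std::uint8_t g, std::uint8_t b)
{
    int hi = std::max({int(r), int(g), int(b)});
    int lo = std::min({int(r), int(g), int(b)});
    return static_cast<std::uint8_t>((hi + lo) / 2);
}
```

Applied across the whole buffer, this replaces every pixel's three channels with one grey level, which is exactly the monochrome picture described above.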
|
|
|
|
|
Firstly, thank you for your quick responses. I'm new to image processing, and although I've done .NET C++ programming, I've only been working in MFC for a couple of months, so this is proving to be a challenge for me (good thing I enjoy a good challenge).
I'll look into your suggestions now. @includeh10: I'm getting other colours coming through; it's just that the green is fairly prominent. If I can get rid of it that would be good, but first prize is getting a complete greyscale image to display. The problem is that, short of changing the pixel values, I don't know another way to do this.
@enhzflep: The raw data is a fingerprint coming over USB, and I've tried converting it to black and white in... Paint... yes, the one that comes with Windows. Anyway, a lot of the detail seemed to be lost, so I'd like to shy away from turning it into a black-and-white image if possible. I'll give it a go in code and see how it turns out, though.
I'll let you know how things turn out. Thanks again for your responses.
|
|
|
|
|
Just to clarify: I think I've used "monochrome" incorrectly. With the method I suggest, I take the RGB values that I use to draw some custom controls and adjust the H channel only, giving me the ability to tint colour schemes to any colour I want.
If you load your image into Gimp or Photoshop or whatever and then desaturate it, you'll get a perfect grey-scale image.
It's just a matter of understanding and then working in either the HSL or HSV colour space. By changing the S channel to 0 in either model, you effectively remove all colour information, leaving only darkness/lightness information.
Here, have a gander if you please: http://en.wikipedia.org/wiki/HSL_and_HSV
|
|
|
|
|
I realized what you said after a bit more fiddling; it is a good idea, and I just need to figure out how to do it ^.^
I'm starting to run out of time, though, so I had to do a bit of a hack. I'll work on this again when I find the time, because I was enjoying it. Here's what I did, just in case it helps anyone else.
I get the bytes one at a time, so this is inside that loop. It could obviously be in a loop of its own just as easily.
if (data[a] == 208)
{
    data[a] = 75;
}
What this does is turn any colour value that I want to suppress into black.
The black is somewhere between 192 and 208.
This creates the bitmap once I have the raw pixels in the array:
CClientDC dc(this);
BITMAPINFO bmi;
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = 208;
bmi.bmiHeader.biHeight = 208;
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 8;
bmi.bmiHeader.biCompression = BI_RGB;
bmi.bmiHeader.biSizeImage = 0;
bmi.bmiHeader.biXPelsPerMeter = 0;
bmi.bmiHeader.biYPelsPerMeter = 0;
bmi.bmiHeader.biClrUsed = 0;
bmi.bmiHeader.biClrImportant = 0;

CBitmap bitmap;
bitmap.CreateCompatibleBitmap(&dc, 208, 208);
::SetDIBits(dc.m_hDC, bitmap, 0, 208, rawImageArray, &bmi, DIB_RGB_COLORS);
CDC dcMemory;
dcMemory.CreateCompatibleDC(&dc);
CBitmap * pOldBitmap = dcMemory.SelectObject(&bitmap);
dc.StretchBlt(0, 0, m_nW, m_nH, &dcMemory, 0, 0, 208, 208, SRCCOPY);
dcMemory.SelectObject(pOldBitmap);
|
|
|
|
|
This is a really delayed response, but a bug caused me to revisit this code, and with a little more experience in MFC I approached it with a bit more confidence :P
I solved the problem, though; it ended up being a really easy fix, and I have "AlexFM" to thank for his post on Experts Exchange.
I needed to add a palette. The solution eluded me because of the need to allocate extra room after the bitmap info header for the colour table.
Here's my working code; I hope it will help someone.
CClientDC dc(this);
// Allocate the header plus room for a 256-entry colour table.
BITMAPINFO *bmi = (BITMAPINFO*)new char[sizeof(BITMAPINFO) + sizeof(RGBQUAD)*256];
bmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi->bmiHeader.biWidth = 176;
bmi->bmiHeader.biHeight = 176;
bmi->bmiHeader.biPlanes = 1;
bmi->bmiHeader.biBitCount = 8;
bmi->bmiHeader.biCompression = BI_RGB;
bmi->bmiHeader.biSizeImage = 0;
bmi->bmiHeader.biXPelsPerMeter = 0;
bmi->bmiHeader.biYPelsPerMeter = 0;
bmi->bmiHeader.biClrUsed = 0;
bmi->bmiHeader.biClrImportant = 0;

// Fill the colour table with a linear greyscale ramp.
for (int i = 0; i < 256; i++)
{
    bmi->bmiColors[i].rgbBlue = i;
    bmi->bmiColors[i].rgbGreen = i;
    bmi->bmiColors[i].rgbRed = i;
    bmi->bmiColors[i].rgbReserved = 0;
}

CBitmap bitmap;
bitmap.CreateCompatibleBitmap(&dc, 176, 176);
::SetDIBits(dc.m_hDC, bitmap, 0, 176, rawImageData, bmi, DIB_RGB_COLORS);
CDC dcMemory;
dcMemory.CreateCompatibleDC(&dc);
CBitmap * pOldBitmap = dcMemory.SelectObject(&bitmap);
dc.StretchBlt(0, 0, m_nW, m_nH, &dcMemory, 0, 0, 176, 176, SRCCOPY);
dcMemory.SelectObject(pOldBitmap);
delete[] (char*)bmi;   // free the header + colour table allocated above
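The key fix above is the 256-entry greyscale colour table appended after the header. A stripped-down, platform-independent sketch of that ramp, with a stand-in for RGBQUAD since the real structure lives in the Windows headers, looks like this:

```cpp
#include <cassert>

// Stand-in for the Windows RGBQUAD structure (assumed layout,
// for illustration only).
struct MyRGBQuad {
    unsigned char rgbBlue, rgbGreen, rgbRed, rgbReserved;
};

// Build a 256-entry linear greyscale ramp: entry i maps pixel
// value i to the grey (i, i, i). This is what makes an 8-bpp DIB
// display as greyscale rather than whatever the default table holds.
MyRGBQuad* make_grey_ramp()
{
    MyRGBQuad* table = new MyRGBQuad[256];
    for (int i = 0; i < 256; i++) {
        table[i].rgbBlue = (unsigned char)i;
        table[i].rgbGreen = (unsigned char)i;
        table[i].rgbRed = (unsigned char)i;
        table[i].rgbReserved = 0;
    }
    return table;
}
```

Each raw pixel byte then indexes straight into this table, so no per-pixel conversion of the image data is needed.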
|
|
|
|
|
Hi,
To create an installer (InstallShield), please let me know whether the following files are sufficient.
Project-related files:
Exe file
*.BSC
Help file
Contents file
DLLs:
ADVAPI32.DLL
COMCTL32.DLL
COMDLG32.DLL
GDI32.DLL
KERNEL32.DLL
MSVCRT.DLL
NTDLL.DLL
OLE32.DLL
OLEAUT32.DLL
OLEDLG.DLL
OLEPRO32.DLL
RPCRT4.DLL
SECUR32.DLL
SHELL32.DLL
SHLWAPI.DLL
USER32.DLL
WINMM.DLL
WINSPOOL.DRV
(I got the above DLLs from Dependency Walker.)
Do we need to include the MFC runtime and the .NET Framework when creating the installer?
Thanks
|
|
|
|
|
|
Thanks for the information. It helped me identify the DLLs used by the project.
Are the .NET Framework and the MFC runtime required to run an exe developed using Microsoft Visual C++ 6.0?
Thanks
kavitha
|
|
|
|
|
Hello,
I want to convert a char* to a char[100].
How do I go about it?
Pritha
|
|
|
|
|
char buf[100];
strncpy(buf, pChar, 99);   // copy at most 99 characters
buf[99] = 0;               // strncpy does not NUL-terminate on truncation
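As a self-contained sketch of the same idea (the function name is made up, and pChar stands in for the poster's pointer), the copy with guaranteed termination and truncation looks like:

```cpp
#include <cassert>
#include <cstring>

// Copy an arbitrary C string into a fixed char[100] buffer,
// truncating if necessary and always NUL-terminating.
void to_fixed_buffer(const char* pChar, char buf[100])
{
    std::strncpy(buf, pChar, 99);  // copies at most 99 characters
    buf[99] = '\0';                // strncpy leaves no terminator when it truncates
}
```

Short strings copy through unchanged; anything 100 characters or longer is silently cut to 99 characters plus the terminator.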
|
|
|
|
|
Why would you ever need such a conversion?! What are you trying to do, and did the compiler give you a conversion error? Give that context here and someone might be able to help you.
It is a crappy thing, but it's life -^ Carlo Pallini
|
|
|
|
|