|
Based on the Wiki page I thought that in the header all the strings have fixed sizes; I can't remember now for the title, e.g. 30 characters. If it contains a shorter string, the rest is padded with zeroes or spaces. So e.g.:
[TitleString ]
In this case, reading the string till the first space will get you the title correctly, "TitleString". As far as I understand, what you do now is read the string until you find a space or a zero. However, if the title is this:
[Title String ]
then you will get "Title" only, although "String" belongs there too, right? I think what you can do here is check the very first character: if it is zero or a space, the title is not set; if it is not, read all 30 characters and then trim away the trailing spaces, if any. Also, in the following case:
[ThisIsALongTitleWithLotsaChars]
when you are looking for the "ending space/zero", you will just skip over the string and read into whatever comes after it, since no spaces or zeroes are in the string. The specs don't say the strings are terminated with anything.
> The problem with computers is that they do what you tell them to do and not what you want them to do. <
> Leela: Fry, you're wasting your life sitting
in front of that TV. You need to get out and see the real world.
Fry: But this is HDTV. It's got better resolution than the real world <
|
|
|
|
|
Hello All,
I am always confused between static, dynamic, and reinterpret casting. I tried to find answers, but somehow they are all confusing... Could anyone clear up this concept?
Thanks.
|
|
|
|
|
|
Cool, Thanks... I'm checking
|
|
|
|
|
static_cast and reinterpret_cast work in the same way as the classic C-style cast, but they are used in two different contexts:
static_cast is used when casting between two related types, for instance from one numeric type to another. The conversion could lose data, but without the cast operator the code still compiles; the compiler may generate a warning.
char a, b;
int x = 123;
double y = 123.456;
a = static_cast<char>(x);
b = static_cast<char>(y);
reinterpret_cast is used when casting between two unrelated types, usually pointers, for instance from a pointer to a struct to a pointer to char. Without the cast operator the code doesn't compile; the compiler generates an error.
POINT pt = { 10, 21 };
unsigned char *pb = reinterpret_cast<unsigned char *>(&pt);
for(int i = 0; i < sizeof(POINT); i++)
printf("%02X ", pb[i]);
Finally, dynamic_cast is a polymorphic cast operator; it's used only with pointers (and references), and its main property is that if the conversion is possible it returns the pointer value, otherwise it returns NULL. It's useful when working with inheritance.
|
|
|
|
|
I intend to capture data from the network at high volume, and I intend to use multi-threading to do it.
1. Producer: the capture thread. It captures data in a loop and puts the incoming data into a buffer.
2. Consumer: the worker thread. It gets data from the producer and does further processing.
Here the producer shares buffers with the consumers. I want to use 2 buffers to get high performance, and the two buffers work in ping-pong mode like this...
// the two buffers for sharing data
BUFFER bufferA;
BUFFER bufferB;
One scenario is: the producer puts data into bufferA while the consumer reads data from bufferB.
The producer uses the buffers in this order...
write data...
bufferA---bufferB---bufferA---bufferB... in ping-pong mode.
The consumer uses the buffers in this order...
read data...
bufferB---bufferA---bufferB---bufferA... in ping-pong mode.
Of course, synchronization is the biggest problem here. That's my question too.
How do I implement the synchronization using CRITICAL_SECTION/EVENT or other kernel objects to get the highest performance?
I'd appreciate any input here...
Sam/BR.
The world is fine.
|
|
|
|
|
I don't see any benefit in using two buffers instead of one.
As for synchronization, see "Synchronization Objects" on MSDN (I would use a mutex).
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
CPallini wrote: I would use a mutex
Of all the things, why a mutex? It's heavier than a critical section, and we're not even operating across process boundaries here!
There are some really weird people on this planet - MIM.
|
|
|
|
|
Because, for instance,
"Event, mutex, and semaphore objects can also be used in a single-process application, but critical section objects provide a slightly faster, more efficient mechanism for mutual-exclusion synchronization"
doesn't look to me like a good counterpart to the scary
"Starting with Windows Server 2003 with Service Pack 1 (SP1), threads waiting on a critical section do not acquire the critical section on a first-come, first-serve basis. This change increases performance significantly for most code. However, some applications depend on first-in, first-out (FIFO) ordering and may perform poorly or not at all on current versions of Windows"
("Critical Section Objects" on MSDN).
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
Man! You're in a mood to argue.
CPallini wrote: slightly faster
Only "slightly" faster, but put it in the context of execution: say, thousands of times each second. That would make a difference, especially in tight situations.
CPallini wrote: However, some applications depend on first-in, first-out (FIFO) ordering and may perform poorly or not at all on current versions of Windows
But we're talking about reading data from sockets here; they aren't dependent on FIFO ordering. In fact, if an application is dependent on FIFO-ordered execution, then it shouldn't worry too much about performance, because there's always a chance of having a 'pig' thread in between which would be slow, thereby rendering the whole process slow.
There are some really weird people on this planet - MIM.
|
|
|
|
|
Normally, I prefer a CRITICAL_SECTION to do the synchronization. It's faster and easy to use. Exactly as Rajesh said, if there is no FIFO-ordering requirement, a critical section is a good choice within a process.
The world is fine.
|
|
|
|
|
I don't really get why you want to use two buffers for that. The two threads will work at different "speeds", so you won't have any control over which buffer is read or written. Why don't you simply use one buffer here? The producer puts the data in the buffer (entering a critical section) and then signals an event that data is available. The consumer waits for the event to be signaled and then reads the data from the buffer (entering the critical section).
|
|
|
|
|
Thanks. I am using WinPcap to capture raw data from the network, not Winsock.
Usually, I used one buffer with the WinPcap driver to capture data from an Ethernet switch, but the performance was not satisfactory.
With one buffer, the capture thread puts data into the buffer after it arrives and signals the consumer to read it from the buffer. Here I suspect the wait overhead decreases the performance, so I intend to introduce another buffer to increase it. The capture thread and the processing thread can then work in parallel to some extent.
With one buffer... Capture thread:
enter critical section
put data into buffer[]
signal the event to tell the processing thread
leave critical section
And the processing thread:
wait for single object and get the signal...
enter critical section
read data into its own buffer to release the shared buffer
leave critical section
I am thinking that if I use 2 buffers (bufferA + bufferB), the capture thread and the processing thread may work in parallel. In the worst case, one of them must wait for one of the buffers to be released, but in the normal busy case, each thread can access one of the buffers simultaneously.
Like this... The capture thread wants to put data into a buffer. It starts with bufferA, then uses bufferB, so it uses the buffers in the sequence A...B...A...B...A...B... When data comes in, it only needs to check the target buffer's status; if it's available, it puts the data in and signals the processing thread.
From the processing thread's side, it uses the buffers in the sequence B...A...B...A...B...A... When it gets a signal indicating incoming data, it accesses the target buffer to read the data and then clears the signal to indicate the buffer is empty again and tell the capture thread of its availability.
OK, this is just my thought on the question; no practice till now. I want to get some experienced advice and confirm the solution is OK before I start.
Anyway, thanks for any of your help here.
Sam/BR
The world is fine.
|
|
|
|
|
I did something almost exactly like this in a different context and it worked fine. In my case the buffers were rather large and took a while to fill so it was easy to manage them but your case might be different. You may need to have more than one buffer if the processing thread is not able to always keep up and you don't want the capture thread to wait so you may want to implement your code to support N buffers and then determine what the optimum value of N is later during testing. It may be 2 or it could be 200 or it could vary widely depending on many factors. I don't know nearly enough about what you are doing to predict.
Good Luck.
BTW - I read further replies in this thread and I want to clarify one thing. My implementation has multiple producers (max of about 10) and one consumer and each producer has a double buffer. I could have had one consumer for each producer but the buffers are very big and fill so slowly that this was unnecessary. My application is not a web server but its scaling requirements are very well known to us.
|
|
|
|
|
Hi Rick,
Thanks for your inputs. It's very helpful.
Yes, you are right. After further thinking about my context of a 1-1 producer-consumer, I agree that I may need more buffer slots. If the incoming data is huge, the processing thread is not always able to keep up. To avoid data loss, more buffer slots to hold the data is a good implementation in this situation. Eventually, the consumer thread will dispatch the data to different task threads to do the logical processing. So actually, in my context it is a variant of the 1-1 p/c model: the final consumers are the task threads responsible for the logical analysis and processing, and the producer thread feeds data to a front-line consumer, a dispatching thread that hands the incoming data on to those consumers. And I want to use one double buffer between the producer and each consumer at the front and back ends.
Thanks again.
Sam/Br.
Rick York wrote: I did something almost exactly like this in a different context and it worked fine. In my case the buffers were rather large and took a while to fill so it was easy to manage them but your case might be different. You may need to have more than one buffer if the processing thread is not able to always keep up and you don't want the capture thread to wait so you may want to implement your code to support N buffers and then determine what the optimum value of N is later during testing. It may be 2 or it could be 200 or it could vary widely depending on many factors. I don't know nearly enough about what you are doing to predict.
Good Luck.
BTW - I read further replies in this thread and I want to clarify one thing. My implementation has multiple producers (max of about 10) and one consumer and each producer has a double buffer. I could have had one consumer for each producer but the buffers are very big and fill so slowly that this was unnecessary. My application is not a web server but its scaling requirements are very well known to us.
The world is fine.
|
|
|
|
|
Samuel Zhao wrote: How to implement the synchronization by using CRITICAL_SECTION/EVENT or other kernal objects to get a highest performance?
You have a bigger problem than choosing between the available synchronisation mechanisms.
As I see it, you're using a synchronous socket and are trying to use threads to manipulate the data that comes in.
Please read up on sockets first, and try to understand what an asynchronous socket is and how it works. It may be of great help to you. Here's something to keep things in perspective: http://www.flounder.com/kb192570.htm[^] The link uses MFC (shut up, Carlo), but it throws some light on the subject.
Also, here's some very good stuff: http://beej.us/guide/bgnet/[^] (Thanks to that bloke named Moak).
There are some really weird people on this planet - MIM.
|
|
|
|
|
Hi Rajesh,
Thanks for your reply. I am using WinPcap to capture raw data from the network, not Winsock.
Anyway, thanks for your input.
The world is fine.
|
|
|
|
|
You're speaking of a technique called flip-flop double buffering:
while one thread writes data to one buffer, the other thread
reads from the second buffer, then you swap buffers and start again.
This is in fact a circular buffer with only two entries.
I attach here two classes, one for the double buffer, the other for the
n-entry circular ring buffer; both of them may be used to solve the
communication problem between the two threads.
Note that the circular buffer works with pointers to buffers that you
allocate and free outside the buffer itself, while the double buffer
works by copying data from outside buffers into buffers preallocated
inside it.
You may also wish to have a read of the article
"Lock-Free Single-Producer - Single Consumer Circular Queue" for the circular ring buffer.
--- double buffer include ---
#ifndef DOUBLE_BUFFER_H
#define DOUBLE_BUFFER_H
#include <afx.h>
#include <afxwin.h>
class CDoubleBuffer
{
public:
CDoubleBuffer( unsigned int unAlloc = 0x000FFFFF );
~CDoubleBuffer(void);
void Write(void* pBuf, unsigned int unBytesTo);
void Read (void* pBuf, unsigned int unBytesFrom);
private:
void** pAlloc;
unsigned int unRead;
unsigned int unWrite;
unsigned int unSize;
};
#endif // ! defined (DOUBLE_BUFFER_H)
--- double buffer include end ---
--- double buffer body ---
#include "stdafx.h"
#include "DoubleBuffer.h"
CDoubleBuffer::CDoubleBuffer( unsigned int unAlloc )
{
pAlloc = NULL;
pAlloc = (void **) ::HeapAlloc(::GetProcessHeap(), 0, 2 * sizeof(void*));
if (!pAlloc) throw;
pAlloc[0] = NULL;
pAlloc[0] = (void *) ::HeapAlloc(::GetProcessHeap(), 0, unAlloc * sizeof(BYTE));
if (!pAlloc[0]) throw;
pAlloc[1] = NULL;
pAlloc[1] = (void *) ::HeapAlloc(::GetProcessHeap(), 0, unAlloc * sizeof(BYTE));
if (!pAlloc[1]) throw;
::FillMemory((void*) pAlloc[0], unAlloc, 0);
::FillMemory((void*) pAlloc[1], unAlloc, 0);
unRead = 0;
unWrite = 0;
unSize = unAlloc;
}
CDoubleBuffer::~CDoubleBuffer(void)
{
::HeapFree(::GetProcessHeap(), 0, pAlloc[0]);
::HeapFree(::GetProcessHeap(), 0, pAlloc[1]);
::HeapFree(::GetProcessHeap(), 0, pAlloc);
}
void CDoubleBuffer::Write(void* pBuf, unsigned int unBytesTo)
{
unsigned int unTryWrite = (unWrite++)%2;
while( unRead == unTryWrite ) { ::Sleep(10); }
::MoveMemory( pAlloc[unTryWrite], pBuf, __min(unBytesTo,unSize) );
unWrite = unTryWrite;
}
void CDoubleBuffer::Read(void* pBuf, unsigned int unBytesFrom)
{
while( unRead == unWrite ) { ::Sleep(10); }
unsigned int unTryRead = (unRead++)%2;
::MoveMemory(pBuf, pAlloc[unTryRead], __min(unBytesFrom,unSize));
unRead = unTryRead;
}
--- double buffer body end ---
--- circular buffer include ---
#ifndef CBUFFER_H
#define CBUFFER_H
#if _MSC_VER > 1000
#pragma warning (disable: 4786)
#pragma warning (disable: 4748)
#pragma warning (disable: 4103)
#endif /* _MSC_VER > 1000 */
#include <afx.h>
#include <afxwin.h>
#define CBUFFER_NELEM 0x0000FFFF
#define CBUFFER_ERRCODE_OK 0
#define CBUFFER_ERRCODE_NO_MEMORY 1
#define CBUFFER_ERRCODE_READ_ERROR 2
#define CBUFFER_ERRCODE_WRITE_ERROR 3
#define CBUFFER_ERRCODE_NO_DATA 4
#define CBUFFER_NULL_PTR 0xFFFFFFFF
class CCircBuffer
{
public:
CCircBuffer(UINT32 unElements = CBUFFER_NELEM);
~CCircBuffer(void);
public:
UINT32 GetTheData(void*& lpData);
UINT32 SetTheData(void* lpData);
UINT32 IncTheReadPtr(void);
UINT32 IncTheWritePtr(void);
bool IsReadyForRead(void);
bool IsReadyForWrite(void);
UINT32 GetLastReadPtr(void);
UINT32 GetLastWritePtr(void);
private:
void** TheCircularBuffer;
SIZE_T unElems;
UINT32 unReadPtr, unWritePtr;
UINT32 unLastErrorCode;
};
#endif /* ! defined(CBUFFER_H) */
--- circular buffer include end ---
--- circular buffer body ---
#include "StdAfx.h"
#include "CircBuffer.h"
CCircBuffer::CCircBuffer(UINT32 unElements )
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
unElems = (SIZE_T) unElements;
if ( unElems >= CBUFFER_NULL_PTR )
unElems = CBUFFER_NULL_PTR - 1;
TheCircularBuffer = (void **) HeapAlloc(GetProcessHeap(), 0, unElems * sizeof(void*));
if(!TheCircularBuffer)
{
unLastErrorCode = CBUFFER_ERRCODE_NO_MEMORY;
}
else
{
SecureZeroMemory(TheCircularBuffer, unElems * sizeof(void*)); // zero the whole array, not just sizeof(pointer)
}
unReadPtr = unWritePtr = 0;
}
CCircBuffer::~CCircBuffer(void)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
BOOL bResult = HeapFree(GetProcessHeap(), 0, TheCircularBuffer);
if( bResult != TRUE )
{
unLastErrorCode = CBUFFER_ERRCODE_NO_MEMORY;
}
}
UINT32 CCircBuffer::GetTheData(void*& lpData)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
if( ! IsReadyForRead() )
{
unLastErrorCode = CBUFFER_ERRCODE_NO_DATA;
return CBUFFER_NULL_PTR;
}
lpData = (void*) TheCircularBuffer[unReadPtr];
unReadPtr++;
if( unReadPtr >= unElems )
unReadPtr = 0;
return unReadPtr;
}
UINT32 CCircBuffer::SetTheData(void* lpData)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
if( ! IsReadyForWrite() )
{
unLastErrorCode = CBUFFER_ERRCODE_NO_DATA;
return CBUFFER_NULL_PTR;
}
TheCircularBuffer[unWritePtr] = (void*) lpData;
unWritePtr++;
if( unWritePtr >= unElems )
unWritePtr = 0;
return unWritePtr;
}
UINT32 CCircBuffer::IncTheReadPtr(void)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
if ( ! IsReadyForRead() )
{
unLastErrorCode = CBUFFER_ERRCODE_NO_DATA;
return CBUFFER_NULL_PTR;
}
unReadPtr++;
if( unReadPtr >= unElems )
unReadPtr = 0;
return unReadPtr;
}
UINT32 CCircBuffer::IncTheWritePtr(void)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
if( ! IsReadyForWrite() )
{
unLastErrorCode = CBUFFER_ERRCODE_NO_DATA;
return CBUFFER_NULL_PTR;
}
unWritePtr++;
if( unWritePtr >= unElems )
unWritePtr = 0;
return unWritePtr;
}
bool CCircBuffer::IsReadyForRead(void)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
UINT32 unCheckReadPtr = unReadPtr;
if( unCheckReadPtr >= unElems )
unCheckReadPtr = 0;
if( unCheckReadPtr == unWritePtr )
{
unLastErrorCode = CBUFFER_ERRCODE_NO_DATA;
return false;
}
return true;
}
bool CCircBuffer::IsReadyForWrite(void)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
UINT32 unCheckWritePtr = unWritePtr;
unCheckWritePtr++;
if( unCheckWritePtr >= unElems )
unCheckWritePtr = 0;
if( unCheckWritePtr == unReadPtr )
{
unLastErrorCode = CBUFFER_ERRCODE_NO_DATA;
return false;
}
return true;
}
UINT32 CCircBuffer::GetLastReadPtr(void)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
return unReadPtr;
}
UINT32 CCircBuffer::GetLastWritePtr(void)
{
unLastErrorCode = CBUFFER_ERRCODE_OK;
return unWritePtr;
}
--- circular buffer body end ---
Hope that helps
Cheers
Federico
federico-strati [at] libero [dot] it
|
|
|
|
|
Ohh... great, thanks, Federico.
That's really what I meant... I am not sure whether double buffers working in a circular way are a good solution in my case.
----------------------------------------------------
>>You're speaking of a technique called flip-flop double buffering:
>>while one thread writes data to one buffer, the other thread
>>reads from the second buffer, then you swap buffers and start again.
>>This is in fact a circular buffer with only two entries.
----------------------------------------------------
Thanks, Federico. I am studying your posted codes here.
The world is fine.
|
|
|
|
|
I read the CDoubleBuffer class you wrote. It's a very good implementation of double-buffer usage. I have some questions on the buffer class; would you please give some comments? Thanks in advance.
1. Is CDoubleBuffer thread-safe? In the Read/Write functions, each of them accesses the target buffer. In the multi-threaded case, the reader thread uses Read() and the writer thread uses Write() simultaneously, and both functions access the unRead/unWrite variables. Is it necessary to declare the variables with the keyword volatile?
2. About CCircBuffer, I have the same question as for CDoubleBuffer. Besides, what's the purpose of the following two functions?
UINT32 IncTheReadPtr(void);
UINT32 IncTheWritePtr(void);
From the implementation of the two functions, I see they do nothing except increase the pointer, and the pointer increment is already done in the GetTheData()/SetTheData() functions.
Thanks for sharing; it's very useful for helping me understand the problem in my case further, and it gives me confidence to start my job on this case.
federico.strati wrote: #ifndef DOUBLE_BUFFER_H
#define DOUBLE_BUFFER_H
#include <afx.h>
#include <afxwin.h>
class CDoubleBuffer
{
public:
CDoubleBuffer( unsigned int unAlloc = 0x000FFFFF );
~CDoubleBuffer(void);
void Write(void* pBuf, unsigned int unBytesTo);
void Read (void* pBuf, unsigned int unBytesFrom);
private:
void** pAlloc;
unsigned int unRead;
unsigned int unWrite;
unsigned int unSize;
};
#endif // ! defined (DOUBLE_BUFFER_H)
|
|
|
|
|
Answers:
1. It is necessary to declare the variables volatile on some systems
to prevent using cached values; you'll be fine declaring all the
integer indexes (unRead/unWrite etc.) as volatile.
2. The classes are thread-safe only for the situation where you have a single
producer (writer) and a single consumer (reader) in different
threads. They are NOT safe for multiple consumers / multiple producers.
3. The two functions IncTheReadPtr(void) and IncTheWritePtr(void) are there
only if you want to skip some entries in the circular buffer when reading
or writing, for some particular reason of your own.
Hope that helps
Cheers
Federico
|
|
|
|
|
Two changes to the code to make the operation atomic. It also seems that the following declarations should be volatile:
volatile unsigned int unRead;
volatile unsigned int unWrite;

void CDoubleBuffer::Write(void* pBuf, unsigned int unBytesTo)
{
    unsigned int unTryWrite = (unWrite+1)%2;
    while( unRead == unTryWrite ) { ::Sleep(10); }
    ::MoveMemory( pAlloc[unTryWrite], pBuf, __min(unBytesTo,unSize) );
    unWrite = unTryWrite;
}

void CDoubleBuffer::Read(void* pBuf, unsigned int unBytesFrom)
{
    while( unRead == unWrite ) { ::Sleep(10); }
    unsigned int unTryRead = (unRead+1)%2;
    ::MoveMemory(pBuf, pAlloc[unTryRead], __min(unBytesFrom,unSize));
    unRead = unTryRead;
}
If there is any error, please let me know. Thanks; good job.
The world is fine.
|
|
|
|
|
Note:
1. It is necessary to declare the variables volatile on some systems
to prevent using cached values; you'll be fine declaring all the
integer indexes (unRead/unWrite etc.) as volatile.
2. The classes are thread-safe only for the situation where you have a single
producer (writer) and a single consumer (reader) in different
threads. They are NOT safe for multiple consumers / multiple producers.
3. The modifications you've made are good, even if I don't understand why
you changed the way you increment the pointers:
it is not important to increment the pointers in an atomic manner,
as there will be only two threads, one writer and one reader.
If you want atomicity then you should use the InterlockedIncrement
/ InterlockedDecrement functions, but it is wasted effort here.
Hope that helps
Cheers
Federico
|
|
|
|
|
Hello Federico,
Thanks for your great help. It makes me clear on the problems I met recently.
Many many thanks for your kind help.
Sam/Br.
federico.strati wrote: Note:
1. It is necessary to declare the variables volatile on some systems
to prevent using cached values; you'll be fine declaring all the
integer indexes (unRead/unWrite etc.) as volatile.
2. The classes are thread-safe only for the situation where you have a single
producer (writer) and a single consumer (reader) in different
threads. They are NOT safe for multiple consumers / multiple producers.
3. The modifications you've made are good, even if I don't understand why
you changed the way you increment the pointers:
it is not important to increment the pointers in an atomic manner,
as there will be only two threads, one writer and one reader.
If you want atomicity then you should use the InterlockedIncrement
/ InterlockedDecrement functions, but it is wasted effort here.
Hope that helps
Cheers
Federico
|
|
|
|
|
Here you'll find the revised version of the double buffer;
there were errors in my code, sorry!
-----------
void CDoubleBuffer::Write(void* pBuf, unsigned int unBytesTo)
{
unsigned int unTryWrite = (unWrite+1)%2;
while( unRead == unTryWrite ) { ::Sleep(10); }
::CopyMemory( pAlloc[unWrite], pBuf, __min(unBytesTo,unSize) );
unWrite = unTryWrite;
}
void CDoubleBuffer::Read(void* pBuf, unsigned int unBytesFrom)
{
while( unRead == unWrite ) { ::Sleep(10); }
::CopyMemory(pBuf, pAlloc[unRead], __min(unBytesFrom,unSize));
unRead = (unRead+1)%2;
}
-----------
|
|
|
|
|