|
I have a question regarding sockets. I am trying to write two programs that establish a connection with each other to send and receive messages. One program initiates the connection; the code is the following:
WSADATA WsaData;
int err = WSAStartup(0x0101, &WsaData);

s = socket(AF_INET, SOCK_STREAM, 0);
fer << "socket error code=" << WSAGetLastError() << endl;

SOCKADDR_IN anAddr;
anAddr.sin_family = AF_INET;
if (!cm.onemach)
    anAddr.sin_port = htons(tcomp->m_port);
else
    anAddr.sin_port = htons(1026);
if (!cm.onemach)
    anAddr.sin_addr.S_un.S_addr = inet_addr(tcomp->m_ipadr);
else
    anAddr.sin_addr.S_un.S_addr = inet_addr("127.0.0.1");

UINT TimeLimit = t;
BOOL connected;
if (connect(s, (struct sockaddr *)&anAddr, sizeof(struct sockaddr)) == 0)
    connected = TRUE;
else
    connected = FALSE;
fer << "connecting error code=" << WSAGetLastError() << endl;
if (!connected)
    return FALSE;
{
    err = send(s, sendMessage, strlen(sendMessage), 0);
    if (err != 0)
    {
        fer << "sending error code=" << WSAGetLastError() << endl;
    }
}
And the other accepts the connection:
WSADATA wsaData;
int wsaret = WSAStartup(0x0101, &wsaData);
if (wsaret == SOCKET_ERROR) AfxMessageBox("error");

while (flag)
{
    servr = socket(AF_INET, SOCK_STREAM, 0);
    if (servr == INVALID_SOCKET)
    {
        flag = TRUE;
        fer << "getting message / creating error code=" << WSAGetLastError() << endl;
    }
    else
        flag = FALSE;
}

anAddr.sin_family = AF_INET;
anAddr.sin_port = htons(1026);
anAddr.sin_addr.s_addr = INADDR_ANY;

err = -1;
while (err == -1)
{
    err = bind(servr, (LPSOCKADDR)&anAddr, sizeof(anAddr));
    if (err != 0)
    {
        fer << "binding error code=" << WSAGetLastError() << endl;
        Sleep(1000);
    }
}

err = -1;
while (err != 0)
{
    err = listen(servr, SOMAXCONN); // SOMAXCONN defined as 5
    if (err != 0)
    {
        fer << "listening error code=" << WSAGetLastError() << endl;
        Sleep(500);
    }
}

SOCKADDR_IN from;
int fromlen = sizeof(from);
fer << "before accepting error code=" << WSAGetLastError() << endl;
// gets the address and port of the remote computer
Recv = accept(servr, (struct sockaddr *)&from, &fromlen); // blocks for an indefinite time
fer << "accepting error code=" << WSAGetLastError() << endl;
Last time, when I posted this question, I was advised to check WSAGetLastError(). Now I am doing that, and again the program sometimes works correctly and sometimes fails with error 10061: Connection refused.
No connection could be made because the target computer actively refused it. This usually results from trying to connect to a service that is inactive on the foreign host—that is, one with no server application running.
Even though the second program is waiting in the accept function, the connection is not established. I would greatly appreciate it if anyone could tell me where the error comes from, or what could make it work sometimes and fail at other times. Could it be something with the socket options or the AF_INET address family? Thanks in advance.
|
|
|
|
|
Are you using some tutorial or sample for this? If not, you should be. There are a million of them on the web. There have been for like a decade now. Your use of htons to set port numbers is totally unnecessary and suggests that you need to study more about using sockets. One web site that has existed since like 1995 is Sockets.com. You should check it out.
|
|
|
|
|
I have read various statements in this forum saying things like "it's more efficient to x" or "x is inefficient, I'd try y", or the more helpful "x is terribly inefficient, I'd look for a better method."
How do I determine what is efficient/inefficient, or, more specifically, given two algorithms to accomplish a task, how do I determine which is more efficient?
As a relevant example, I have an object whose member vars need to be backed up by a file, and which needs to support multiple instances sharing a file (and therefore values); any time the member vars are modified, the file needs to be updated. File access has to be synchronized through IPC.
A first algorithm would be to synchronize file access within the main process, create the object after getting ownership of the sync object, read the file contents after object creation, modify the vars, write to the file, then destroy the object.
A second algorithm would be to have the main process create the object and maintain it through the main process's lifetime, and have the IPC synchronization handled within the object whenever the vars are modified.
A third would be to have the main process create and maintain the object, handle synchronization within the main process, and pass a file handle from the main process to the object methods that modify the vars so they can modify the file.
A fourth would be to have the main process create and maintain the object, handle synchronization within the main process, and have the object use a memory-mapped file to maintain the vars.
A fifth would be the same as the fourth, only handling synchronization within the object.
...and I'm sure I could come up with more...
My limited experience has me leaning towards either the second or the fifth as theoretically most efficient, but how would I go about actually determining this?
As always, any answers, or suggestions what I should read, are greatly appreciated.
MZR
P.S.
Then, of course, there's also the question of "execution efficiency" (speed) and "memory efficiency." Is the most memory-efficient always the fastest-executing?
|
|
|
|
|
Mike the Red wrote: Then, of course, there's also the question of "execution efficiency" (speed) and "memory efficiency." Is the most memory-efficient always the fastest-executing?
Depends what you mean by 'memory'. If you mean 'program size', then with modern processors, small code can often mean fast code, due to cache effects. If you mean 'data size', well, possibly not. For example, lots of algorithms can be sped up using lookup tables.
As for your specific issue, I'd probably map the backing file and, rather than have member variables in the object, have member accessors that read/write from the mapped file. Probably use a mutex for synchronising file writes. I wouldn't bother synchronising the file reads unless you either a) want to read a big thing from the map, or b) need to keep multiple variables' values in sync.
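Something along those lines, as a sketch (the struct layout, file name, and mutex name are made up here, and error handling is omitted):
    #include <windows.h>

    struct SharedValues { int counter; double ratio; };

    class SharedState
    {
        HANDLE hFile, hMapping, hMutex;
        SharedValues* view;
    public:
        SharedState()
        {
            hFile = CreateFile(TEXT("shared.dat"), GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            hMapping = CreateFileMapping(hFile, NULL, PAGE_READWRITE,
                                         0, sizeof(SharedValues), NULL);
            view = (SharedValues*)MapViewOfFile(hMapping, FILE_MAP_ALL_ACCESS,
                                                0, 0, sizeof(SharedValues));
            hMutex = CreateMutex(NULL, FALSE, TEXT("Global\\SharedStateMutex"));
        }
        int GetCounter() const { return view->counter; }   // unsynchronised read
        void SetCounter(int value)                          // synchronised write
        {
            WaitForSingleObject(hMutex, INFINITE);
            view->counter = value;
            ReleaseMutex(hMutex);
        }
        ~SharedState()
        {
            UnmapViewOfFile(view);
            CloseHandle(hMapping);
            CloseHandle(hFile);
            CloseHandle(hMutex);
        }
    };
Each instance in each process maps the same file, so a write through SetCounter is immediately visible to the others; only the writes take the mutex, as described above.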
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Thanks for your response, Stuart.
What I meant to refer to by 'memory efficiency' is efficient use of memory in terms of amount used and how memory is allocated and freed.
I don't know whether the speed effect would be large enough to measure, but, in terms of how memory is used, this:
    for (int i = 0; i < someNum; i++) {
        BOOL b = SomeFunction(i);
        if (b) { /* ... */ }
    }
..is not as efficient as this:
    BOOL b;
    for (int i = 0; i < someNum; i++) {
        b = SomeFunction(i);
        if (b) { /* ... */ }
    }
..because b is only declared/allocated once, instead of once per iteration.
|
|
|
|
|
Heh - when it's stack allocation like your example, there is no speed difference whatsoever. Heap allocation? Yes, there will be overhead for memory management, but stack allocation can be considered near enough zero cost, as stack allocation basically consists of decrementing the stack pointer by the desired byte count.
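For contrast, a sketch of where a per-iteration cost would appear (using SomeFunction from your example): the stack version costs nothing extra each time round, while a heap version pays for an allocation and a matching delete on every iteration:
    for (int i = 0; i < someNum; i++) {
        BOOL b = SomeFunction(i);              // stack: effectively free
        if (b) { /* ... */ }
    }

    for (int i = 0; i < someNum; i++) {
        BOOL* pb = new BOOL(SomeFunction(i));  // heap: allocator call each iteration
        if (*pb) { /* ... */ }
        delete pb;                             // plus a deallocation
    }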
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
|
Mike the Red wrote: How do I determine what is efficent/inefficient, or, more specifically, given two algorithms to accomplish a task, how do I determine which is more efficient?
you have to identify all the different actions each algorithm takes, how many times each action is performed for a given input, and the relative costs of those actions. then you can see which operations dominate the processing time for each algorithm. knowing that will help you get an idea as to which algorithm is going to do better for a given input. it might turn out that one algorithm is very fast for small input but terrible for large input, so you can switch between them.
Mike the Red wrote: Is the most memory-efficient always the fastest-executing?
no. and for reasonable amounts of memory, there's probably very little correlation between memory usage and performance. if using more memory lets you turn an O(N²) algorithm (execution time increases with the square of the input size) into an O(1) algorithm (execution time is constant, regardless of the size of the input), use the memory.
Big O Notation[^]
for example[^]
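as a concrete illustration of that trade-off, here's a sketch with an invented duplicate-check task: the nested-loop version compares every pair, so its work grows with N², while the hash-set version spends memory on a table to do roughly one lookup per element.
    #include <unordered_set>
    #include <vector>

    // O(N^2): compare every element against every later one
    bool HasDuplicateSlow(const std::vector<int>& v)
    {
        for (size_t i = 0; i < v.size(); ++i)
            for (size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j])
                    return true;
        return false;
    }

    // roughly O(N): trade memory for speed with a hash set
    bool HasDuplicateFast(const std::vector<int>& v)
    {
        std::unordered_set<int> seen;
        for (size_t i = 0; i < v.size(); ++i)
            if (!seen.insert(v[i]).second)   // insert fails if the value is already present
                return true;
        return false;
    }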
|
|
|
|
|
I have a question regarding sockets. I am trying to write two programs that establish a connection with each other to send and receive messages. One program initiates the connection; the code is the following:
WSADATA WsaData;
int err = WSAStartup(0x0101, &WsaData);

s = socket(AF_INET, SOCK_STREAM, 0);
fer << "socket error code=" << WSAGetLastError() << endl;

SOCKADDR_IN anAddr;
anAddr.sin_family = AF_INET;
if (!cm.onemach)
    anAddr.sin_port = htons(tcomp->m_port);
else
    anAddr.sin_port = htons(1026);
if (!cm.onemach)
    anAddr.sin_addr.S_un.S_addr = inet_addr(tcomp->m_ipadr);
else
    anAddr.sin_addr.S_un.S_addr = inet_addr("127.0.0.1");

UINT TimeLimit = t;
BOOL connected;
if (connect(s, (struct sockaddr *)&anAddr, sizeof(struct sockaddr)) == 0)
    connected = TRUE;
else
    connected = FALSE;
fer << "connecting error code=" << WSAGetLastError() << endl;
if (!connected)
    return FALSE;
{
    err = send(s, sendMessage, strlen(sendMessage), 0);
    if (err != 0)
    {
        fer << "sending error code=" << WSAGetLastError() << endl;
    }
}
And the other accepts the connection:
WSADATA wsaData;
int wsaret = WSAStartup(0x0101, &wsaData);
if (wsaret == SOCKET_ERROR) AfxMessageBox("error");

while (flag)
{
    servr = socket(AF_INET, SOCK_STREAM, 0);
    if (servr == INVALID_SOCKET)
    {
        flag = TRUE;
        fer << "getting message / creating error code=" << WSAGetLastError() << endl;
    }
    else
        flag = FALSE;
}

anAddr.sin_family = AF_INET;
anAddr.sin_port = htons(1026);
anAddr.sin_addr.s_addr = INADDR_ANY;

err = -1;
while (err == -1)
{
    err = bind(servr, (LPSOCKADDR)&anAddr, sizeof(anAddr));
    if (err != 0)
    {
        fer << "binding error code=" << WSAGetLastError() << endl;
        Sleep(1000);
    }
}

err = -1;
while (err != 0)
{
    err = listen(servr, SOMAXCONN); // SOMAXCONN defined as 5
    if (err != 0)
    {
        fer << "listening error code=" << WSAGetLastError() << endl;
        Sleep(500);
    }
}

SOCKADDR_IN from;
int fromlen = sizeof(from);
fer << "before accepting error code=" << WSAGetLastError() << endl;
// gets the address and port of the remote computer
Recv = accept(servr, (struct sockaddr *)&from, &fromlen); // blocks for an indefinite time
fer << "accepting error code=" << WSAGetLastError() << endl;
Last time, when I posted this question, I was advised to check WSAGetLastError(). Now I am doing that, and again the program sometimes works correctly and sometimes fails with error 10061: Connection refused.
No connection could be made because the target computer actively refused it. This usually results from trying to connect to a service that is inactive on the foreign host—that is, one with no server application running.
Even though the second program is waiting in the accept function, the connection is not established. I would greatly appreciate it if anyone could tell me where the error comes from, or what could make it work sometimes and fail at other times. Could it be something with the socket options or the AF_INET address family? Thanks in advance.
|
|
|
|
|
Could you please format your code snippets properly (use the 'code block' button)?
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
ok, i'll put it again
|
|
|
|
|
It looks like you failed to, please edit...
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
Hi,
My system has two versions of the same DLL, located under the C:\Windows and C:\Windows\System32 folders.
According to the LoadLibrary MSDN documentation, the DLL search path should consider the System32 folder first and then the Windows folder. But when I use LoadLibraryA, it loads the DLL from the C:\Windows folder by default, whereas when I use LoadLibraryW, it loads the DLL from the C:\Windows\System32 folder by default.
I googled the difference between LoadLibraryA and LoadLibraryW and found that one is the ANSI version whereas the other is the Unicode version.
Now my question is: why does LoadLibraryA load the DLL from the C:\Windows folder while LoadLibraryW loads it from the C:\Windows\System32 folder?
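For reference, a minimal sketch for checking which copy actually gets loaded ("mydll.dll" is just a placeholder name; GetModuleFileName reports the full path of the loaded module):
    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        char path[MAX_PATH] = {0};

        HMODULE hA = LoadLibraryA("mydll.dll");
        if (hA && GetModuleFileNameA(hA, path, MAX_PATH))
            printf("LoadLibraryA loaded: %s\n", path);
        if (hA) FreeLibrary(hA);               // unload so the next call searches again

        HMODULE hW = LoadLibraryW(L"mydll.dll");
        if (hW && GetModuleFileNameA(hW, path, MAX_PATH))
            printf("LoadLibraryW loaded: %s\n", path);
        if (hW) FreeLibrary(hW);

        return 0;
    }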
Regards,
Chirag.
|
|
|
|
|
Small info from MSDN: LoadLibraryA - ANSI and LoadLibraryW - Unicode.
ANSI controls, which work on all Win32 operating systems, allow for maximum portability between the various Win32 operating systems. Unicode controls work on only Windows NT (version 3.51 or later), but not on Windows 95 or Windows 98. If portability is your primary concern, ship ANSI controls. If your controls will run only on Windows NT, you can ship Unicode controls. You could also choose to ship both and have your application install the version most appropriate for the user's operating system.
|
|
|
|
|
LoadLibraryA and LoadLibraryW are exported by the same kernel32.dll and there will only be one copy of that in C:\Windows\System32.
«_Superman_»
I love work. It gives me something to do between weekends.
|
|
|
|
|
Thanks for the reply. But my confusion remains:
Why does LoadLibraryA load the DLL from the C:\Windows folder while LoadLibraryW loads it from the C:\Windows\System32 folder?
|
|
|
|
|
I misinterpreted your question.
Sorry about that.
«_Superman_»
I love work. It gives me something to do between weekends.
|
|
|
|
|
Because of the A and the W at the end: they interpret the input strings differently. You should call them explicitly with the appropriate string type (without a cast),
i.e.: CStringA sa = "..."; CStringW sw = L"...";
You'd better use LoadLibrary() so the compiler will choose the right version.
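For example, a small sketch of the three forms ("mydll.dll" is just a placeholder name):
    #include <windows.h>
    #include <tchar.h>

    // The ANSI version takes a narrow string, the Unicode version a wide string.
    HMODULE hA = LoadLibraryA("mydll.dll");
    HMODULE hW = LoadLibraryW(L"mydll.dll");

    // The generic macro picks A or W depending on whether UNICODE is defined.
    HMODULE h  = LoadLibrary(_T("mydll.dll"));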
Press F1 for help or google it.
Greetings from Germany
|
|
|
|
|
Hi, I am new to malloc and how it works. I have a pointer which I use as a "variable sized" array. When I try to free the memory that was allocated by malloc, I get a "free(): invalid pointer: 0x0804b008" error. Does anyone know what the cause of this is? The memory is allocated and deallocated as follows.
A function called initMatrices() is called.
void initMatrices()
{
    // Initialize the valid states array
    forward_model.valid_states = (char *)malloc(2 * forward_model.number_of_states * sizeof(*forward_model.valid_states));
}
forward_model is the name of the structure containing the array valid_states.
Then, at the end of the program, the function recycleMatrices() is called.
void recycleMatrices()
{
    // Return the memory that was being used by the valid states array
    free(forward_model.valid_states);
}
Am I doing something wrong? From all the tutorial/help pages I have found while googling this seems to be how the memory is freed, except none of the examples make use of a struct.
Any help would be appreciated.
Thanks
dcj
|
|
|
|
|
Probably (in the code between malloc and free) you mess up the forward_model.valid_states pointer. Using the debugger, check the pointer value just after malloc is executed and compare it with its value just before free is executed.
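If a printf-style check is easier than the debugger, a sketch along these lines (wrapped around your existing calls) would show whether the address changes:
    #include <stdio.h>
    #include <stdlib.h>

    void initMatrices()
    {
        forward_model.valid_states = (char *)malloc(2 * forward_model.number_of_states
                                                    * sizeof(*forward_model.valid_states));
        printf("after malloc: %p\n", (void *)forward_model.valid_states);
    }

    void recycleMatrices()
    {
        printf("before free:  %p\n", (void *)forward_model.valid_states);
        free(forward_model.valid_states);   /* the two printed addresses should match */
    }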
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
One possible cause of this kind of error is that memory has been corrupted, or that forward_model.valid_states may be NULL. Check that the forward_model.valid_states pointer is not NULL.
Hope this helps.
|
|
|
|
|
Check if forward_model is the same copy that is used for both malloc and free.
«_Superman_»
I love work. It gives me something to do between weekends.
|
|
|
|
|
In addition to what others here have already suggested, also make sure you are not trying to free the same memory block multiple times.
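A common guard for that (just a sketch) is to reset the pointer right after freeing it; free() on a NULL pointer is defined to do nothing, so an accidental second call becomes harmless:
    free(forward_model.valid_states);
    forward_model.valid_states = NULL;   /* a later double free() is now a no-op */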
> The problem with computers is that they do what you tell them to do and not what you want them to do. <
> Life: great graphics, but the gameplay sux. <
|
|
|
|
|
I am trying to make Windows render the mouse cursor with a user-defined delay (in milliseconds). For example, if the user-defined delay is 100 ms, the cursor should be rendered 100 ms after the user moved the mouse. (If you are wondering what the use case for this is, I am doing it as part of a UI/usability study that we are running internally.)
As a systems software guy, my initial inclination was "let's put something next to the mouclass driver", but I wanted to check out user mode first (since the code will most probably be picked up by another app-only developer). I tried the low-level hooks from an article here on CP; they worked fine for tracking the mouse, but I didn't know how to introduce the delay...
Any ideas? TIA.
|
|
|
|
|
The system sends the WM_SETCURSOR message to a window if the mouse is moved.
You could probably hook this message using SetWindowsHookEx(WH_CALLWNDPROC, ... and introduce the delay there.
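A rough sketch of what that could look like (g_delayMs stands in for the user-defined setting, and Sleep() here blocks the hooked thread, so treat it only as a starting point):
    #include <windows.h>

    static DWORD g_delayMs = 100;   // assumed user-defined delay
    static HHOOK g_hook = NULL;

    LRESULT CALLBACK CallWndProc(int nCode, WPARAM wParam, LPARAM lParam)
    {
        if (nCode == HC_ACTION)
        {
            const CWPSTRUCT* cwp = (const CWPSTRUCT*)lParam;
            if (cwp->message == WM_SETCURSOR)
                Sleep(g_delayMs);   // crude delay before the message is handled
        }
        return CallNextHookEx(g_hook, nCode, wParam, lParam);
    }

    void InstallCursorDelayHook()
    {
        // NULL module + current thread id => hooks only this thread's windows
        g_hook = SetWindowsHookEx(WH_CALLWNDPROC, CallWndProc, NULL, GetCurrentThreadId());
    }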
«_Superman_»
I love work. It gives me something to do between weekends.
|
|
|
|
|