|
I know the concepts of message-passing decoding and bit-flipping decoding techniques, so I am trying to write C code for message-passing or bit-flipping decoding.
|
|
|
|
|
Actually, I am not sure what type of decoder is meant by partially parallel or fully serial, because I have to implement an LDPC decoder using the belief propagation algorithm. For that I have been given a 4x6 H matrix and a codeword [0 0 1 0 1 1]. The received word is r = [1 0 1 0 1 1], with a crossover probability of p = 0.2.
The steps I have to follow for the computation are as follows:
1. In the first step, using log(p/(1-p)) for a received 1 and log((1-p)/p) for a received 0, I have to convert the received word into LLRs, which come out as
r = [-1.3863, 1.3863, -1.3863, 1.3863, -1.3863, -1.3863]
2. Since it is a 4x6 matrix, the variable-to-check messages are initialised from the channel LLRs:
M11 = r1 = -1.3863 and M31 = r1 = -1.3863
for i=2: M12 = r2 = 1.3863 and M22 = r2 = 1.3863
...
for i=6: M36 = r6 = -1.3863 and M46 = r6 = -1.3863
3. Extrinsic information. It is calculated using the formula
E11 = log((1 + tanh(M12/2)*tanh(M14/2)) / (1 - tanh(M12/2)*tanh(M14/2)))
I have to calculate this for each and every node, and finally an E matrix is formed.
4. Calculation of the LLRs:
L1 = r1 + E11 + E31 = some value
...
L6 = some value
Finally, on the basis of the BPSK scheme, the hard decision is
Z = [0 0 1 0 1 1]
5. To check whether Z is a valid codeword:
S = Z * H^T
If S comes out as [0 0 0 0] then my codeword is correct; otherwise I have to go on to the next iteration.
Please tell me how I should proceed. It is entirely a mathematical calculation. I am stuck at the point of taking the matrix as an input. If I store it as an array, how should I do the tanh calculation?
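For illustration, here is a minimal sketch in C-style C++ of one such iteration. The H matrix below is reconstructed from the message indices quoted above (M11/M31, M12/M22, E11 using M12 and M14, and so on); it is an assumption, so replace it with the actual matrix if yours differs:

/* Minimal sketch: one belief-propagation iteration for a 4x6 H matrix.
   H, rx and p are placeholders taken from the post, not a general decoder. */
#include <stdio.h>
#include <math.h>

#define ROWS 4
#define COLS 6

int main(void)
{
    /* Example parity-check matrix - reconstructed from the post's indices. */
    int H[ROWS][COLS] = {
        {1,1,0,1,0,0},
        {0,1,1,0,1,0},
        {1,0,0,0,1,1},
        {0,0,1,1,0,1}
    };
    int rx[COLS] = {1,0,1,0,1,1};   /* received hard bits */
    double p = 0.2;                 /* crossover probability */

    double r[COLS], M[ROWS][COLS], E[ROWS][COLS], L[COLS];
    int Z[COLS];

    /* Step 1: channel LLRs - log((1-p)/p) for a 0, log(p/(1-p)) for a 1. */
    for (int i = 0; i < COLS; i++)
        r[i] = (rx[i] == 0) ? log((1.0 - p) / p) : log(p / (1.0 - p));

    /* Step 2: initialise variable-to-check messages M[j][i] = r[i]. */
    for (int j = 0; j < ROWS; j++)
        for (int i = 0; i < COLS; i++)
            if (H[j][i]) M[j][i] = r[i];

    /* Step 3: extrinsic (check-to-variable) messages:
       E[j][i] = log((1+prod)/(1-prod)), product over the other bits in row j. */
    for (int j = 0; j < ROWS; j++)
        for (int i = 0; i < COLS; i++) {
            if (!H[j][i]) continue;
            double prod = 1.0;
            for (int k = 0; k < COLS; k++)
                if (H[j][k] && k != i)
                    prod *= tanh(M[j][k] / 2.0);
            E[j][i] = log((1.0 + prod) / (1.0 - prod));
        }

    /* Step 4: total LLRs and hard decision (negative LLR -> bit 1). */
    for (int i = 0; i < COLS; i++) {
        L[i] = r[i];
        for (int j = 0; j < ROWS; j++)
            if (H[j][i]) L[i] += E[j][i];
        Z[i] = (L[i] < 0.0) ? 1 : 0;
    }

    /* Step 5: syndrome S = Z * H^T; all zero means a valid codeword. */
    int valid = 1;
    for (int j = 0; j < ROWS; j++) {
        int s = 0;
        for (int i = 0; i < COLS; i++)
            s ^= (H[j][i] & Z[i]);
        if (s) valid = 0;
    }

    printf("Z = ");
    for (int i = 0; i < COLS; i++) printf("%d ", Z[i]);
    printf("\n%s\n", valid ? "valid codeword" : "not valid - iterate again");
    return 0;
}

With the values above this converges to Z = [0 0 1 0 1 1] in the first iteration; for more iterations, feed the updated messages M[j][i] = r[i] + (sum of E[k][i] for k != j) back into step 3.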
|
|
|
|
|
You have totally lost me ... you already have the formulas.
tanh is a standard C/C++ function (C library function - tanh).
You have the matrix coefficients, and like you said, the first coefficient of the E matrix is
E11 = log((1 + tanh(M12/2)*tanh(M14/2)) / (1 - tanh(M12/2)*tanh(M14/2)))
So plug the numbers in and calculate your E matrix coefficients.
You do all your other calculations and then transpose the matrix ... you know, any "C Program to Find Transpose of a Matrix" shows how.
You need to explain this statement => "how should i calculate that tanh calculation". I simply don't get it: tanh is a standard math function like sine/cosine/tan, it just needs "#include <math.h>", and you have it on any C/C++ compiler that meets the oldest C89 standard.
You gave me coefficient values above, so I know you know what M12 and M14 are, and you know what a 2 is.
So I query why you ask me how to do the tanh calculation; why didn't you ask me how to do the log calculation? What is the difference between those that is causing your problems?
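Just to be concrete, this is all it takes (the M values here are the ones you posted):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double M12 = 1.3863, M14 = 1.3863;              /* values from your post */
    double t   = tanh(M12 / 2.0) * tanh(M14 / 2.0); /* 0.6 * 0.6 = 0.36 */
    double E11 = log((1.0 + t) / (1.0 - t));        /* log(1.36 / 0.64) */
    printf("E11 = %f\n", E11);                      /* prints 0.753772 */
    return 0;
}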
In vino veritas
|
|
|
|
|
Hi,
I had a block of code in my project that was repetitive, so I decided to make it a macro.
In my block of code I had a for loop, which needed int i; as a variable.
In order to make the variable unique I tried to use the __COUNTER__ macro,
e.g. i##__COUNTER__, but I was never able to get the code to compile.
I came upon an alternate solution: wrapping the code in curly braces.
As I remember, anything declared within curly braces is unique to that block
of code. Am I correct in this?
Thanks
|
|
|
|
|
If I understand you, then yes - a variable declared inside {}'s is local to that block of code (or "scope").
So, you could do something like:
#define DOALOOP(x) { for (int i = 0; i < (x); i++) { DoSomething(i); } }
and you would not mess up any other "i"s there. The inner "i" hides any outer one; this is called shadowing, and can be powerful... or cause confusion if you do it by accident!
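For example (DoSomething is a stand-in, and the macro is repeated so the snippet stands alone):

#include <stdio.h>

#define DOALOOP(x) { for (int i = 0; i < (x); i++) { DoSomething(i); } }

void DoSomething(int n) { printf("iteration %d\n", n); }  /* stand-in body */

int main(void)
{
    int i = 42;
    DOALOOP(3);          /* the macro's own block-scoped i runs 0, 1, 2 */
    printf("%d\n", i);   /* prints 42 - the outer i was never touched */
    return 0;
}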
I am one of "those foreigners coming over here and stealing our jobs". Yay me!
|
|
|
|
|
Avoid complex macros. Search the web for something like "c++ macro vs function" or "c++ avoid macros" to learn about the problems.
When using C++ and a macro should be type independent, use templates instead. Otherwise just use (inline) functions. These are the common methods for implementing repetitive code.
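As a sketch of the usual replacement (names here are illustrative):

#include <iostream>

/* Macro version: evaluates its argument twice and is invisible to the debugger. */
#define SQUARE_MACRO(x) ((x) * (x))

/* Template version: type independent, argument evaluated once, debuggable. */
template <typename T>
inline T square(T x) { return x * x; }

int main()
{
    int n = 3;
    std::cout << SQUARE_MACRO(n) << '\n';  // 9, but SQUARE_MACRO(n++) would increment twice
    std::cout << square(n)       << '\n';  // 9, argument evaluated exactly once
    std::cout << square(2.5)     << '\n';  // 6.25 - same template, different type
    return 0;
}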
|
|
|
|
|
+10 to Jochen's answer
Complex macros are evil and usually cause problems down the track; avoid them.
If you are really performance sensitive, inline the code, but realistically all modern compilers like GCC and VC++ will inline anything worthwhile anyhow. They also have settings to control the behaviour.
On GCC, -O2 will inline blocks that the heuristic analysis determines are worth it and that result in smaller code. With -O3 it will inline blocks that the heuristic analysis determines are worth it, regardless of size.
GCC also has an attribute you can add to a function, __attribute__((always_inline)), which forces an inline wherever it is used.
In VC++ the setting is under optimization control, called "Inline Function Expansion"; the setting "Any Suitable" allows the compiler to inline any code it decides is worth it.
VC++ has __forceinline as its attribute to force a function to be inlined.
I will almost guarantee that your code using the macro will be slower than what the compiler works out if you just write a normal C/C++ function and let the optimizer do its thing.
It seems to get lost on the net that modern compilers are a far cry from the old ones. You get better results by trying to help the optimizer rather than fighting it.
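The usual portable wrapper for those two looks something like this (ironically itself a macro, but a harmless object-like one):

/* Per-compiler force-inline wrapper - a sketch, check your compiler docs. */
#if defined(_MSC_VER)
  #define FORCE_INLINE __forceinline
#elif defined(__GNUC__)
  #define FORCE_INLINE inline __attribute__((always_inline))
#else
  #define FORCE_INLINE inline
#endif

FORCE_INLINE int add(int a, int b) { return a + b; }  /* inlined where supported */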
In vino veritas
|
|
|
|
|
Are you saying VC++ is smart enough to see what is repetitive code, and will inline it?
Thanks
|
|
|
|
|
Yep, it should pick it up. Just dump the code output to check, and manually turn on the optimization flags to make sure optimization is on (the default will be "default", which is real helpful).
If you are on Visual Studio, it's time to learn how to use the Analyze menu items.
Anyhow, the old-school way to do it in VS is to set a breakpoint where you want to look. When it breaks, press CTRL+ALT+D (brings up the disassembly window) and see whether it inlined the code in the assembler/C/C++ mix on your screen!
General background on the VS optimizer: Introducing a new, advanced Visual C++ code optimizer | Visual C++ Team Blog
General background on IPO: Interprocedural optimization - Wikipedia
Visual Studio /GL (Whole Program Optimization) - inlines a function in one module even when the function is defined in another module.
In vino veritas
|
|
|
|
|
Thanks
BTW, I bought the Intel C/C++ compiler mainly because it allows inline assembler
in x64 code. Once installed, it is integrated into Visual Studio.
I was under the assumption that, since they own the hardware, their compiler would generate
faster code; to my amazement this was not the case.
|
|
|
|
|
As per the discussion, C/C++ compilers are made up of many parts; the one you are really discussing at the moment is the optimizer. Intel may have a better understanding of the opcodes, and support a wider range of opcodes in the compiler, but that has very little to do with optimization, which is much more closely related to pattern recognition.
I can write C code for speed on the Intel compiler that Visual Studio can't come near, because it doesn't have the opcode set; but the moment I drift away to higher-abstraction code, Visual Studio will outperform it. So it's a very code-dependent thing, and you need to be careful, because this is all generalization from my personal experience. You could well get very different results.
In vino veritas
|
|
|
|
|
Well,
You tell me that it does not compile, but you don't say what errors are thrown up.
Your concept of a macro often makes your code easier to read and follow (very important for maintenance). However, a macro can be very tricky to write. For one, the debugger cannot get a grip on it. That is because a macro is resolved by the preprocessor, at a text-substitution level, essentially before the C(++) compiler gets a grip on your code. The compiler no longer sees your macro code, so you get obscure errors if the macro is not perfectly correct, and not where the macro is declared, but where you use it.
The book by Kernighan and Ritchie, 'The C Programming Language', explains the process concisely.
I often start by writing a complicated macro as a function, debug it, and then strip it out into a macro.
Do not forget the trivial things, e.g. the line continuation ('\') must be the VERY LAST character on a line, immediately before the newline. (If spaces follow the '\' before the newline, the macro will not be read correctly.)
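For example, an illustrative multi-line macro in the usual do-while(0) shape (which makes it behave like a single statement), with every '\' as the last character on its line:

#define SWAP_INT(a, b)    \
    do {                  \
        int tmp_ = (a);   \
        (a) = (b);        \
        (b) = tmp_;       \
    } while (0)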
There are horses for courses. It mainly has to do with readability and convenience. Modern compilers are very smart about optimisation, and will nearly always isolate repeated code into a single function in the release version. (They typically do not do such optimisation in the debug version.) Space was a literally crunching issue in 16-bit programming, a nearly unimportant issue in 32-bit, and (for now) essentially a non-issue in 64-bit.
Now, sometimes such optimisation is specifically not wanted. You can get out of this by declaring a variable as 'volatile', in which case the compiler lays out the accesses exactly as you wrote them.
Regards,
Bram van Kampen
|
|
|
|
|
I have had very bad experiences with the VC++ optimizer. While trying to add to a pointer, I found the infamous optimizer did not produce any code for it at all, and I found this out the hard way while debugging and going into disassembly mode.
The only resolution was to turn it off via #pragma optimize("", off).
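For reference, the MSVC pragma brackets the affected functions like this (the function is just an example):

#pragma optimize("", off)   /* disable optimization for the functions below */
void TouchyPointerCode(int *p)
{
    *p += 1;                /* now emitted exactly as written */
}
#pragma optimize("", on)    /* restore the project's optimization settings */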
|
|
|
|
|
Well,
The optimiser has always worked well for me over the last 20 years. When in doubt, you can always switch all optimisation off.
I remember the DOS days, more than 30 years ago, when we wrote in assembly and had a Friday-afternoon party for someone who saved 100 bytes or more in the final target by writing smarter code by hand. Those days are thankfully gone.
Optimisation is typically switched off for debug builds. If you could debug your code at all, even in assembly, it is still unlikely that the optimisation has anything to do with it. If the optimizer had a bug in it, there would soon be a worldwide outcry.
The likelihood is that you wrote suspect code. I cannot judge that until I see the offending code.
Now, rest assured, and learn this from an old timer. I have often banged my head against the wall about coding issues. Blamed the compiler, blamed Windows, etc. In the end, on calm review and reflection, it has always turned out to be a 'misconception' somewhere in the process, entirely on my part.
If your code works in debug but not release mode, there are several articles on the internet about 'Surviving the Release Mode'.
Regards and Sympathy,
Bram van Kampen
|
|
|
|
|
Hi,
I have ended up in somewhat of a DLL hell of my own making. In order to resolve this, I have started to write a tool to provide dumps of imports and exports. A good starting point was given by Matt Pietrek in his file PEIMX.C. It works as a command-line tool, which is not really convenient, so I wrote a simple wrapper in the form of a dialog-based app where I can specify the source and target files. No problem with that at all. It worked a dream for imports; export functions, however, are more elusive. Matt makes me look for a section marked '.edata'; however, no such section appears to exist in any DLL that I can find.
I have the following sections: .text, .rdata, .data, .idata and .reloc. I have opened up kernel32.dll, and built my own Test.dll, to no avail: no section marked '.edata'.
Well, Bram, I can hear you all say:
"Go to <winnt.h>, where you will find, somewhere past halfway down, the following:
IMAGE_DIRECTORY_ENTRY_EXPORT, and IMAGE_OPTIONAL_HEADER.DataDirectory[IMAGE_NUMBEROF_DIRECTORY_ENTRIES].
Go back to your PE header, do your arithmetic (apply a delta) to compensate for mapping the file differently than LoadLibrary() does (same as Matt does) and get it that way."
I tried that too, still to no avail. During debugging I got the impression that kernel32 has no exports whatsoever. However, the imports (at the second entry, IMAGE_OPTIONAL_HEADER.DataDirectory[1], which represents the import data) are readily reached with a delta of 0.
Matt is an author of international repute, and a Microsoft MVP. What is going wrong here?
Now, in case anyone comments about 64-bit: this code is written and tested on 32-bit Windows XP. That should not really be in question either, seeing that I can retrieve all import functions.
Because of the size involved, I have included just two snippets.
No. 1 is how Matt tried to find named sections. No. 2 is how I try to find the relevant section via the PE optional header.
Snippet 1, written by Matt:
if( pNTH->Signature == IMAGE_NT_SIGNATURE )
{
    pSH = (PIMAGE_SECTION_HEADER)((DWORD)pNTH + sizeof(IMAGE_NT_HEADERS));
    for( i = 0; i < pNTH->FileHeader.NumberOfSections; i++ )
    {
        if( strcmp( pSH[i].Name, ".idata" ) == 0 )
        {
        }
        else if( strcmp( pSH[i].Name, ".edata" ) == 0 )
        {
        }
    }
    _getch();
}
else
    printf("Not a PE-Header");
The second snippet is my modification, looking for the section by RVA using the values in <winnt.h>.
The code is also encapsulated in a class, so that we can analyse and compare a large number of DLLs.
PIMAGE_DATA_DIRECTORY pDataDirectory = m_pNTH->OptionalHeader.DataDirectory;
PIMAGE_DATA_DIRECTORY pImportData = pDataDirectory + IMAGE_DIRECTORY_ENTRY_IMPORT;
PIMAGE_DATA_DIRECTORY pExportData = pDataDirectory + IMAGE_DIRECTORY_ENTRY_EXPORT;
m_pImportDirectory = m_pExportDirectory = m_pResourceDirectory = NULL;
int Offset = m_pDH->e_lfanew + sizeof(IMAGE_NT_HEADERS);
PIMAGE_SECTION_HEADER pBase = (PIMAGE_SECTION_HEADER)(m_pBuffer + Offset);
int i;
for( i = 0; i < m_pNTH->FileHeader.NumberOfSections; i++ )
{
    CString SectionName = m_pSH[i].Name;  // Just for debugging, to see where we are.
    if( m_pSH[i].VirtualAddress == 0 )
        continue;
    if( m_pSH[i].VirtualAddress == pImportData->VirtualAddress )
    {
        if( m_pImportDirectory != NULL )
        {
            m_sErrorString = "Not a Valid Executable File: Contains More than One Import Directories";
            m_nErrNo = -1;
            return false;
        }
        m_pImportDirectory = m_pSH + i;
        continue;
    }
    if( m_pSH[i].VirtualAddress == pExportData->VirtualAddress )
    {
        if( m_pExportDirectory != NULL )
        {
            m_sErrorString = "Not a Valid Executable File: Contains More than One Export Directories";
            m_nErrNo = -1;
            return false;
        }
        m_pExportDirectory = m_pSH + i;
        continue;
    }
}
Apologies if this is badly formatted; my editor appears to put in tags at will, which do not appear in the 'Edit' window, so I cannot remove them.
Anyway, you get the gist.
Anyone any idea of what is happening here?
Regards.
Bram van Kampen
|
|
|
|
|
There is no .edata section; it was dropped a long time ago.... Old article????
In vino veritas
|
|
|
|
|
Thanks Leon,
Well, it is old indeed. The disc came with a book I bought in 1998. Having said that, for compatibility reasons one does not expect Microsoft to change very much in one of the pillars of the NT technology, the format of the PE file. Seeing that Windows 10 still executes applications with the '.edata' section present, the export data in the newer versions must be stored at:
DataDirectory[0].VirtualAddress
(which is what I presume 'GetProcAddress()' uses).
This must then be an RVA, otherwise old applications would break, which is something I have not seen happening. Thus, the export data is no longer afforded a section of its own, but is stored at a location following the PE header.
Would you by any chance have a link to an article where I can get more information?
Thanks Again
Bram van Kampen
|
|
|
|
|
Bram van Kampen wrote: "Go to <winnt.h>, where you will find, somewhere past halfway down, the following:
IMAGE_DIRECTORY_ENTRY_EXPORT, and IMAGE_OPTIONAL_HEADER.DataDirectory[IMAGE_NUMBEROF_DIRECTORY_ENTRIES].
Go back to your PE header, do your arithmetic (apply a delta) to compensate for mapping the file differently than LoadLibrary() does (same as Matt does) and get it that way."
Yes, basically that. Looking for section names is not that useful; there are some conventions, but in the end the name is meaningless. To make matters worse, the current convention is to put the import and export directories in .rdata.
Anyway, something that catches my attention is this:
m_pSH[ i ].VirtualAddress == pExportData->VirtualAddress
This is wrong, because the export directory does not have to be at the start of a section; it could be (and typically is) somewhere in the middle of one. So you should test whether the section contains the virtual address of the export directory.
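Something like this, as a sketch (using Misc.VirtualSize, falling back to SizeOfRawData for odd files):

#include <windows.h>

/* True if the section's virtual range [VirtualAddress, VirtualAddress + size)
   contains the given RVA. */
bool SectionContainsRva(const IMAGE_SECTION_HEADER *sec, DWORD rva)
{
    DWORD size = sec->Misc.VirtualSize ? sec->Misc.VirtualSize
                                       : sec->SizeOfRawData;
    return rva >= sec->VirtualAddress && rva < sec->VirtualAddress + size;
}

/* e.g. if( SectionContainsRva(&m_pSH[i], pExportData->VirtualAddress) ) ... */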
|
|
|
|
|
Thanks Harold,
Well,
m_pSH[ i ].VirtualAddress == pExportData->VirtualAddress
seems to work well with resources and import data. That does not mean that it is correct, of course.
Good to know that there are still people about, like me, who are interested in the nuts and bolts. I see too many 'IT engineers' with high degrees in C# and other synthetic languages, who are neither interested in, nor have a notion of, how the machine ultimately works.
By the way, would you have a link to an article about these latest versions of PE files? I will also, in time, be interested in viewing and displaying resources, and in drilling into obj and lib files. Matt Pietrek always left that as a minor issue, stating that it was all organised in 'much the same way'. The devil is in the detail in these things.
Thanks for your contribution
Bram van Kampen
|
|
|
|
|
Normally it would work for resources, because they usually get their own section, .rsrc, though they could be anywhere (even outside of any section).
Bram van Kampen wrote: By the way, would you have a link to an article about these latest versions of PE files? That's not really going to work, because the problem is not so much a version difference as some made-up, non-binding conventions - apparently they changed, and they're also not exactly consistent across different linkers. I don't think that's really important though; it was always the case that the only reliable thing is following the RVAs from the data directory list in the NT header. Section names don't mean anything; in fact stuff can be outside of any section, and then there isn't even a name.
There's something else I can link to that may be interesting, namely this list of weird things you can do with PE files; it's pretty wild what you can get away with in the PE format.
|
|
|
|
|
Hi,
Well, I agree that section names are meaningless, and that different compilers and linkers have their own ways and means.
However, there must be documentation about what
Kernel32::LoadLibrary(...) and
Kernel32::GetProcAddress(...) expect a PE file to look like, and what the loader natively expects. It is that sort of documentation that I am after.
The section table still serves a useful purpose. The PE file is not a memory image of the loaded executable. Trivial areas, such as the BSS, are typically left out of the file but included in the memory image. The section table informs the loader where to load each section, irrespective of the name. The user (program writer) may also include zero-initialised named sections of interest, for instance an unlimited number of named data sections that are shared between instances (ouch..., but apparently allowed). After this loading, the data directory list does indeed point to the correct RVA for each item. The thing here too is that if something is allowed by the specification, however daft, someone somewhere in the world may just try it at some time.
So, in essence, when we get an RVA from the data directory, it appears that we have to decide whether the RVA points into a section (in which case we need an adjustment to compensate for the loading position vs the file position) or is an offset into the file. To muddy the waters further, we may have absolute or relative addressing in a file; in the former case, a relocation may be applied to the RVA. To muddy it further again, DllMain() may modify a lot of daft things.
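In code, the adjustment I mean would look something like this (a sketch for the raw, file-mapped case, not the LoadLibrary() case):

#include <windows.h>

/* Sketch: map an RVA from the data directory to an offset in the raw file
   image, by finding the owning section and applying the raw-data delta. */
DWORD RvaToFileOffset(const IMAGE_NT_HEADERS *pNTH, DWORD rva)
{
    const IMAGE_SECTION_HEADER *pSH = IMAGE_FIRST_SECTION(pNTH);
    for (WORD i = 0; i < pNTH->FileHeader.NumberOfSections; i++, pSH++)
    {
        DWORD size = pSH->Misc.VirtualSize ? pSH->Misc.VirtualSize
                                           : pSH->SizeOfRawData;
        if (rva >= pSH->VirtualAddress && rva < pSH->VirtualAddress + size)
            return rva - pSH->VirtualAddress + pSH->PointerToRawData;
    }
    return 0;  /* not in any section: header-resident, or one of the daft cases above */
}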
I will probably end up using LoadLibrary() to dig deeper but, at least as a first sanity check, I need to load the file manually, if for no other reason than to investigate why, for instance, LoadLibrary() fails on a PE file.
After all, the purpose of the tool I'm trying to write is not to show that everything is working perfectly; it is to provide a rich environment in which to take things apart and get to the bottom of a problem.
Friendly thanks for your reply,
Bram van Kampen
|
|
|
|
|
Here's some documentation from Microsoft: http://go.microsoft.com/fwlink/p/?linkid=84140
But it doesn't really go into the corner cases. It's more focused on documenting how they think the PE format should be used than on documenting just what sort of insanity is actually accepted by the loader (which of course varies per version of Windows). As far as I know MS doesn't even document that; I've only seen it in places such as corkami's GitHub and places that talk about analysis of malware. For example, sections can actually overlap each other in virtual space (wat), with sections that come later in the section table apparently just overwriting the mapping created for an earlier section that extends past where the later section begins - MS does not even seem to acknowledge that such a thing is possible.
Here's another description of the PE format by corkami, including a lot of useful practical notes (or gory details...) and references to the POCs in the list I linked before: docs/PE.md at master · corkami/docs · GitHub
|
|
|
|
|
|
Well Richard,
Thanks for the links. However, they lead either to old documentation (1999) or to CE formats.
I already have the old formats, via the books of Matt Pietrek. Other people have also contributed, and I have now written a suite of functions that extract imports and exports. The next step is to extract and show resources. Matt Pietrek found that too trivial an issue to pass any remarks on. I suppose it will take a bit more hard slogging.
Regards
|
|
|
|
|
|