
Least Mean Square Algorithm (using C++)

27 Mar 2016
Implementing the LMS algorithm in C++

Introduction

The LMS algorithm is one of the most popular adaptive algorithms because of its simplicity: it requires neither computation of a correlation matrix nor any matrix inversion. Indeed, it is the simplicity of the LMS algorithm that has made it the standard against which other adaptive filtering algorithms are benchmarked.

Overview of the Structure and Operation of the Least Mean Square Algorithm

The least-mean-square (LMS) algorithm is a linear adaptive filtering algorithm that consists of two basic processes:

  1. A filtering process, which involves (a) computing the output of a transversal filter produced by a set of tap inputs, and (b) generating an estimation error by comparing this output to a desired response.
  2. An adaptive process, which involves the automatic adjustment of the tap weights of the filter in accordance with the estimation error (a minimal sketch of one such iteration follows this list).
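
A minimal sketch of one LMS iteration combining these two processes (the names lmsIterate, taps, and weights are illustrative assumptions and do not appear in the program later in the article):

C++
double lmsIterate(const double *taps, double *weights, int M, double desired, double mu)
{
	double y = 0.0;
	for (int k = 0; k < M; k++)
		y += weights[k] * taps[k];		//1) filtering: transversal filter output

	double e = desired - y;				//estimation error e(n) = d(n) - y(n)

	for (int k = 0; k < M; k++)
		weights[k] += mu * e * taps[k];		//2) adaptation: adjust the tap weights

	return e;					//return the error for monitoring convergence
}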

Adaptive structures have been used in many adaptive filtering applications, such as:

  • Noise cancellation
  • System identification
  • Adaptive predictor
  • Equalization
  • Inverse modeling
  • Interference cancellation

In this article, we implement an example of system identification.

System Identification

Figure 1 shows an adaptive filter structure that can be used for system identification or modeling. The same input u is applied to an unknown system and, in parallel, to an adaptive filter. The error signal e is the difference between the response of the unknown system, d, and the response of the adaptive filter, y. The error signal is fed back to the adaptive filter and is used to update the adaptive filter's coefficients until the overall output y approximates d. When this happens, the adaptation process is finished and e approaches zero.

Figure 1: Adaptive filter structure for system identification; the input u drives both the unknown system and the adaptive filter, and the error e = d - y is fed back to update the filter coefficients.

Formulation

As shown in Figure 1, y(n) is the output of the adaptive filter:

y(n) = \sum_{k=0}^{M-1} w_k(n)\, x(n-k)

where w_k(n), for k = 0, 1, ..., M - 1, are the M weights or coefficients of the filter at time n, and x(n - k) are the corresponding tap inputs. The convolution sum can be implemented like this:

C++
for (int i = 0; i < M; i++)
        Y += (W[i] * X[i]);		//calculate filter output
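
This snippet assumes the tap-input array X already holds the M most recent input samples, newest at X[M - 1] (a tapped delay line). One common way to maintain such a buffer is sketched below; newSample is a hypothetical name for the current input, and the complete program later in the article instead refills X directly from the Input buffer on every iteration:

C++
for (int i = 0; i < M - 1; i++)
	X[i] = X[i + 1];			//age the stored samples by one position
X[M - 1] = newSample;				//newest input sample goes at the end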

A performance measure is needed to determine how good the filter is. This measure is based on the error signal...

e(n) = d(n) - y(n)

... which is the difference between the desired signal d(n) and the adaptive filter's output y(n). The weights or coefficients w_k(n) are adjusted so that a mean squared error function is minimized. This mean squared error function is E[e^2(n)], where E represents the expected value. Since there are M weights or coefficients, a gradient of the mean squared error function with respect to each of them is required. Instead of the true gradient, an estimate can be found using the gradient of the instantaneous squared error e^2(n), yielding

w_k(n+1) = w_k(n) + \mu\, e(n)\, x(n - k)
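
This update follows from steepest descent on the instantaneous squared error; a brief sketch of that standard step (with the factor of 2 absorbed into the step size \mu) is:

\nabla_{w_k} e^2(n) = 2\, e(n)\, \frac{\partial e(n)}{\partial w_k(n)} = -2\, e(n)\, x(n - k)

w_k(n+1) = w_k(n) - \frac{\mu}{2}\, \nabla_{w_k} e^2(n) = w_k(n) + \mu\, e(n)\, x(n - k)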

That can be implemented like this:

C++
E = D - Y;		//calculate error signal

for (int i = 0; i < M ; i++)
	W[i] = W[i] + (mu * E * X[i]);		//update filter coefficients
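
One practical point that the fixed step size glosses over: LMS converges only for a sufficiently small mu. A common rule of thumb bounds mu by the tap-input power; the helper below is a heuristic sketch of that rule (the function suggestMaxStepSize is an illustrative assumption and is not part of the program presented next, which simply fixes mu at 0.2):

C++
//Rule of thumb: 0 < mu < 2 / (M * average input power)
double suggestMaxStepSize(const double *input, int length, int filterOrder)
{
	double power = 0.0;
	for (int i = 0; i < length; i++)
		power += input[i] * input[i];		//accumulate x^2
	power /= length;				//average power, an estimate of E[x^2]
	return 2.0 / (filterOrder * power);		//heuristic upper bound for mu
}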

LMS Example in Code

We illustrate the following steps for the adaptation process using the adaptive structure in Figure 1:

  1. Generate some random data for LMS filter input.
  2. Assume the unknown system to be estimated has the impulse response H[M] = { 1, 0.5, 0.25, 0.125, 0.0625 }.
  3. Build the desired signal by convolving the generated random data with the assumed H.
  4. Calculate the adaptive filter output y.
  5. Calculate the error signal.
  6. Update the LMS filter weights.
  7. Repeat the entire adaptive process for the next output sample point.

Generating Input and Desired Signals:

C++
for (int i = 0; i < I; i++)
	Input[i] = rand() / (float)RAND_MAX;		//uniform random samples in [0, 1]

for (int i = 0; i < I; i++)
	for (int j = 0; j < M; j++)
		if (i - j >= 0)
			Desired[i] += Input[i - j] * H[j];	//convolve input with H

All Together

C++
#include <cstdio>				//fopen, fprintf, fclose
#include <cstdlib>				//rand, RAND_MAX

#define mu 0.2f				//convergence rate (step size)
#define M 5					//order of filter
#define I 1000				//number of samples

double Input[I] = { 0.0 };
double Desired[I] = { 0.0 };

//double H[M] = { 1, 0.5, 0.25, 0.125, 0.0625 };	//the unknown system's impulse response
double H[M] = { 0.0625, 0.125, 0.25, 0.5, 1 };		//stored in reversed order so that, with X[] holding the
							//newest sample at X[M - 1], the adapted weights converge
							//to { 1, 0.5, 0.25, 0.125, 0.0625 }


void initialize()
{
	for (int i = 0; i < I; i++)
		Input[i] = rand() / (float)RAND_MAX;

	for (int i = 0; i < I; i++)
		for (int j = 0; j < M; j++)
			if (i - j >= 0)
				Desired[i] += Input[i - j] * H[j];
}

int main()
{
	initialize();

	long T;
	double D, Y, E;
	double W[M] = { 0.0 };
	double X[M] = { 0.0 };

	FILE	*Y_out, *error, *weights;

	Y_out = fopen("Y_OUT", "w");				//file for output samples
	error = fopen("ERROR", "w");				//file for error samples
	weights = fopen("WEIGHTS", "w");			//file for weights samples


	for (T = 0; T < I; T++)
	{
			for (int m = T; m > T - M; m--){
				if (m >= 0)
					X[M + (m - T) - 1] = Input[m];	//fill X with the last M input
									//samples, newest at X[M - 1]
				else break;
			}

			D = Desired[T];					//desired signal
			Y = 0;						//filter output set to zero

			for (int i = 0; i < M; i++)
				Y += (W[i] * X[i]);			//calculate filter output

			E = D - Y;					//calculate error signal

			for (int i = 0; i < M; i++)
				W[i] = W[i] + (mu * E * X[i]);		//update filter coefficients

			fprintf(Y_out, "\n % 10g % 10f", (float)T, Y);
			fprintf(error, "\n % 10g % 10f", (float)T, E);
	}

	for (int i = 0; i < M; i++)
		fprintf(weights, "\n % 10d % 10f", i, W[i]);

	fclose(Y_out);
	fclose(error);
	fclose(weights);

	return 0;
}
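
If the listing is saved as, say, lms.cpp (the file name is only an assumption), it can be built with a plain g++ lms.cpp -o lms and run with no arguments. After the 1000 iterations, the WEIGHTS file should contain values close to { 1, 0.5, 0.25, 0.125, 0.0625 }, i.e. the unknown system's impulse response, and the ERROR file should show the error decaying toward zero.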

References

  • Adaptive Filter Theory, Simon Haykin
  • DSP Applications Using C and the TMS320C6x DSK, Rulph Chassaing

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
CEO SAYAQ
Received B.Sc. and M.Sc. degrees in Electrical Engineering (Communication) from the University of Tabriz, Tabriz, in 2013 and 2016, respectively. My research interests include digital signal processing, adaptive systems, wireless sensor networks, and image processing.
