
Getting Started with Volume Rendering using OpenGL

Step by step explanation of 3D image rendering using OpenGL
This article demonstrates texture based rendering. It starts with the 2D texture approach, explains its problems, and ends with the 3D texture technique.

Image 1

Introduction

I was a Win32 application developer, and one fine day I was asked to work on a volume rendering project. I started learning OpenGL, but learning volume rendering was difficult: there is plenty of theory to read, but hardly any working code that explains the why behind it. This article is my effort to fill that gap.

This article gives a step by step explanation of the basic concepts behind volume rendering. There are different volume rendering techniques, such as raycasting and texture based rendering. This article demonstrates texture based rendering: it starts with the 2D texture approach, explains its problems, and ends with the 3D texture technique. Though I'm using OpenGL to explain, the same can easily be done in DirectX as the concepts are identical.

There are many techniques (depth mixing, MPR, LUT/transfer functions, cropping, shader-based rendering, etc.) that are used alongside volume rendering; in fact, they are what make a volume rendering project complete. To keep this article short, I plan to cover them in another article.

Background

A basic knowledge of OpenGL is needed to follow this article, as I am not explaining each OpenGL API in depth. The data attached to the article has no header, which makes it easy to work with; the dimensions (256x256x109, 8 bit) are hard coded in the application. We can get more test data from here, but it has to be converted to raw data first.

The sample application uses 3D textures, so the machine running it must support them, which means OpenGL version 1.2 or greater.

The glew library can be obtained from here. The GIF images in this article were created using this application.

Using the Application

A file open dialog is shown at application start up; select the sample data there. The image can be rotated by moving the mouse while the left button is pressed.

Using the Code

The attached source contains only the 3D texture based approach.

  • CRawDataProcessor - Responsible for reading the data from the file and converting it to texture
  • CTranformationMgr - Handles the transformation and keeps it in a matrix
  • CRendererHelper - Does the OpenGL initialization and volume rendering
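A rough sketch of how these three classes fit together is given below. The functions used later in the article (ReadFile, GetWidth/GetHeight/GetDepth, GetMatrix, Initialize, Resize, Render) come from the article's own code; the remaining signatures and the Rotate helper are my assumptions, not copied from the attached source.

C++
// Illustrative class outlines only; exact signatures in the download may differ.
class CRawDataProcessor
{
public:
    bool ReadFile( LPCTSTR lpFileName_i );   // reads the raw slices, builds the texture
    int  GetWidth() const;                   // 256 for the attached sample
    int  GetHeight() const;                  // 256
    int  GetDepth() const;                   // 109 (number of slices)
};

class CTranformationMgr
{
public:
    void    Rotate( double dAngle, double dX, double dY, double dZ ); // assumed helper
    double* GetMatrix();                     // 4x4 matrix passed to glMultMatrixd
};

class CRendererHelper
{
public:
    bool Initialize( HDC hContext_i );       // pixel format, GL context, glew
    void Resize( int nWidth_i, int nHeight_i );
    void Render();                           // draws the textured slice stack
};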

What is this Raw Data?

Raw data is nothing but a continuous sequence of 2D frames. Below is a snapshot of the slices obtained by opening the attached raw data; each 2D frame comes from a different position along the Z axis.

Image 2
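Since the file has no header, a voxel is addressed purely by its position in this stack of frames. Here is a minimal sketch of that addressing; the helper below is mine, not part of the attached source.

C++
// The raw file is just 109 frames of 256x256 unsigned bytes, one after another.
// Slice nZ starts at nZ * width * height; within a slice, row nY starts at nY * width.
unsigned char VoxelAt( const unsigned char* pVolume,
                       int nX, int nY, int nZ,
                       int nWidth = 256, int nHeight = 256 )
{
    return pVolume[ nZ * nWidth * nHeight + nY * nWidth + nX ];
}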

Setting Up OpenGL

There is nothing special here beyond the usual steps to initialize OpenGL. The Initialize function is called from the dialog's OnCreate handler. The Resize function sets up the ortho projection; since the dialog's OnSize is called at start up and on every resize, OnSize is the right place to set up the ortho. Render() is called from the erase-background handler (OnEraseBkgnd) to draw the scene. glew.h and glew32.lib (along with a supporting graphics card) are needed once we move to 3D textures, because the OpenGL version shipped with Windows doesn't support anything beyond version 1.1.

As I mentioned, we are using Orthogonal projection with the following values:

  • Left = -1
  • Right = +1
  • Top = +1
  • Bottom = -1
  • Near = -1
  • Far = +1

The above values may change slightly in the Resize function, since we maintain the aspect ratio; check the Resize code. We don't have to worry about the aspect ratio inside the rendering code, however.

Ortho

C++
bool CRendererHelper::Initialize( HDC hContext_i )
{
    //Setting up the dialog to support the OpenGL.
    PIXELFORMATDESCRIPTOR stPixelFormatDescriptor;
    memset( &stPixelFormatDescriptor, 0, sizeof( PIXELFORMATDESCRIPTOR ));
    stPixelFormatDescriptor.nSize = sizeof( PIXELFORMATDESCRIPTOR );
    stPixelFormatDescriptor.nVersion = 1;
    stPixelFormatDescriptor.dwFlags = 
      PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW ;
    stPixelFormatDescriptor.iPixelType = PFD_TYPE_RGBA;
    stPixelFormatDescriptor.cColorBits = 24;
    stPixelFormatDescriptor.cDepthBits = 32;
    stPixelFormatDescriptor.cStencilBits = 8;
    stPixelFormatDescriptor.iLayerType = PFD_MAIN_PLANE ;
    int nPixelFormat = ChoosePixelFormat( hContext_i, 
         &stPixelFormatDescriptor ); //Collect the pixel format.

    if( nPixelFormat == 0 )
    {
        AfxMessageBox( _T( "Error while Choosing Pixel format" ));
        return false;
    }
    //Set the pixel format to the current dialog.
    if( !SetPixelFormat( hContext_i, nPixelFormat, &stPixelFormatDescriptor ))
    {
        AfxMessageBox( _T( "Error while setting pixel format" ));
        return false;
    }

    //Create a rendering context.
    m_hglContext = wglCreateContext( hContext_i );
    if( !m_hglContext )
    {
        AfxMessageBox( _T( "Rendering Context Creation Failed" ));
        return false;
    }
    //Make the created rendering context the current one.
    BOOL bResult = wglMakeCurrent( hContext_i, m_hglContext );
    if( !bResult )
    {
        AfxMessageBox( _T( "wglMakeCurrent Failed" ));
        return false;
    }
    glClearColor( 0.0f,0.0f, 0.0f, 0.0f );
    glewInit(); // For 3D texture support  
    if(GL_TRUE != glewGetExtension("GL_EXT_texture3D"))
    {
        AfxMessageBox( _T( "3D texture is not supported !" ));
        return false;
    }
    return true;
} 

void CRendererHelper::Resize( int nWidth_i, int nHeight_i )
{
     //Find the aspect ratio of the window.
     GLdouble AspectRatio = ( GLdouble )(nWidth_i) / ( GLdouble )(nHeight_i ); 
     //glViewport( 0, 0, cx , cy );
     glViewport( 0, 0, nWidth_i, nHeight_i );
     glMatrixMode( GL_PROJECTION );
     glLoadIdentity();

     //Set the orthographic projection.
     if( nWidth_i <= nHeight_i )
     {
         glOrtho( -dOrthoSize, dOrthoSize, -( dOrthoSize / AspectRatio ) ,
             dOrthoSize / AspectRatio, 2.0f*-dOrthoSize, 2.0f*dOrthoSize );
     }
     else
     {
         glOrtho( -dOrthoSize * AspectRatio, dOrthoSize * AspectRatio, 
             -dOrthoSize, dOrthoSize, 2.0f*-dOrthoSize, 2.0f*dOrthoSize );
     }

     glMatrixMode( GL_MODELVIEW );
     glLoadIdentity();
}
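For completeness, the dialog-side wiring described above could look roughly like this. The handler bodies are a sketch based on the description (Initialize from OnCreate, ortho setup from OnSize, Render from the erase-background handler); the member name m_RendererHelper is assumed and the code is not copied from the attached project.

C++
int CVolumeRenderingDlg::OnCreate( LPCREATESTRUCT lpCreateStruct )
{
    if( CDialog::OnCreate( lpCreateStruct ) == -1 )
        return -1;
    // Set up the pixel format, rendering context and glew against this window.
    m_RendererHelper.Initialize( ::GetDC( m_hWnd ));
    return 0;
}

void CVolumeRenderingDlg::OnSize( UINT nType, int cx, int cy )
{
    CDialog::OnSize( nType, cx, cy );
    // Called at start up and on every resize; keeps viewport and ortho in sync.
    m_RendererHelper.Resize( cx, cy );
}

BOOL CVolumeRenderingDlg::OnEraseBkgnd( CDC* pDC )
{
    // Draw the scene whenever the window needs repainting.
    m_RendererHelper.Render();
    return TRUE;
}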

2D Texture Based Approach

What is Texture?

For those who don't know what a texture is, think of it as a structure similar to a BITMAP, along with a set of functions provided to manipulate that structure.

The main steps involved in using a texture are:

  1. Generate the texture using glGenTextures.
  2. Bind the texture and set the texture behavior for zoom in and zoom out, as well as for out of bound conditions (texture coordinates beyond 0-1).
  3. Load the data to the texture.
  4. While drawing the vertex, specify the texture coordinates that have to be mapped.

Once the data is loaded into the texture, the actual dimensions are no longer needed, because texture coordinates always run from 0 to 1: whatever the size of the texture, its extent in texture coordinates is 1. The advantage is that we only have to map the coordinates, and OpenGL takes care of all the scaling needed.

glTexImage2D is the function used to load data into a two dimensional texture. The last parameter is the buffer holding the data. Below is what happens when glTexImage2D is called.

Image 4

A volume image is nothing but 2D frames stacked along the Z direction. I am creating 109 2D textures (the number of slices in the attached sample data), one for each frame, and then arranging them along the Z axis. Here is the code that reads the file and creates the textures:

  • InitTextures2D() creates as many textures as there are slices and loads each slice into its own texture.
  • Render() arranges the textured quads along the z axis.
C++
bool InitTextures2D( LPCTSTR lpFileName_i )
{
    CFile Medfile;
    if( !Medfile.Open(lpFileName_i ,CFile::modeRead ))
    {
        AfxMessageBox( _T( "Failed to read the raw data" ));
        return false;
    }

    // File has only image data. The dimension of the data should be known.
    m_uImageCount = 109;
    m_uImageWidth = 256;
    m_uImageHeight = 256;

    // Holds the texture IDs.
    m_puTextureIDs = new int[m_uImageCount] ;

    // Holds the luminance buffer
    char* chBuffer = new char[ 256 * 256 ];
    glGenTextures(m_uImageCount,(GLuint*)m_puTextureIDs );

    // Read each frame and construct the texture
    for( int nIndx = 0; nIndx < m_uImageCount; ++nIndx )
    {
        // Read the frame
        Medfile.Read(chBuffer, m_uImageWidth*m_uImageHeight);

        // Set the properties of the texture.
        glBindTexture( GL_TEXTURE_2D, m_puTextureIDs[nIndx] );
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, m_uImageWidth, m_uImageHeight , 0,
            GL_LUMINANCE, GL_UNSIGNED_BYTE,(GLvoid *) chBuffer);
        glBindTexture( GL_TEXTURE_2D, 0 );
    }

   // glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    delete[] chBuffer;
    return true;
 }  

Image 5

C++
void Render()
{
    float fFrameCount = m_uImageCount;
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glEnable( GL_DEPTH_TEST );

    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();

    glEnable( GL_TEXTURE_2D );
    for( int nIndx = 0; nIndx < m_uImageCount; ++nIndx )
    {
        glBindTexture( GL_TEXTURE_2D, m_puTextureIDs[nIndx] );
        glBegin( GL_QUADS );
            MAP_2DTEXT( nIndx );
        glEnd();
        glBindTexture( GL_TEXTURE_2D, 0 );
    }
}

What Does the MAP_2DTEXT Macro Do?

C++
#define MAP_2DTEXT( TexIndex ) \
    glTexCoord2f(0.0f, 0.0f); \
    glVertex3f(-dViewPortSize,-dViewPortSize,(TexIndex*2*dViewPortSize/fFrameCount)-1.0f);\
    glTexCoord2f(1.0f, 0.0f); \
    glVertex3f(dViewPortSize,-dViewPortSize,(TexIndex*2*dViewPortSize/fFrameCount)-1.0f);\
    glTexCoord2f(1.0f, 1.0f); \
    glVertex3f(dViewPortSize,dViewPortSize,(TexIndex*2*dViewPortSize/fFrameCount)-1.0f);\
    glTexCoord2f(0.0f, 1.0f); \
    glVertex3f(-dViewPortSize,dViewPortSize,(TexIndex*2*dViewPortSize/fFrameCount)-1.0f);

What does the above code do? It simply specifies the texture coordinates and the corresponding vertices of the quad. Texture coordinate (0,0) is mapped to vertex (-1,-1), (1,1) to vertex (1,1), (0,1) to (-1,1), and (1,0) to (1,-1). This is repeated for each quad drawn along the z axis; notice that in the for loop, a different texture is bound each time. Since the loop index runs from 0 to 108, the z position of the vertices needs a small conversion to map it into the range -1 to +1.
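To make that conversion concrete, here is the formula the macro uses, evaluated for a few indices (assuming dViewPortSize is 1.0 and fFrameCount is 109):

C++
// z = (index * 2 * dViewPortSize / fFrameCount) - 1
//   index   0  ->  z = -1.000   (farthest slice)
//   index  54  ->  z ~ -0.009   (roughly the middle)
//   index 108  ->  z ~ +0.982   (nearest slice, just short of +1)
float SliceZ( float fIndex, float fFrameCount, float dViewPortSize )
{
    return ( fIndex * 2.0f * dViewPortSize / fFrameCount ) - 1.0f;
}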

Image 6

Are we seeing anything? Yes, that's the first slice.

Image 7

This is because we arranged the frames from -1 to +1 along the Z axis, and with the depth test enabled we only see what is nearest to us (see the ortho setup).

Applying Alpha

The next step is to remove those black areas from our frames, isn't it? OpenGL has a feature called the alpha test: we can specify an alpha reference value and a comparison criterion.

C++
glEnable( GL_ALPHA_TEST );
glAlphaFunc( GL_GREATER, 0.05f );

This means OpenGL will draw a pixel only if its alpha value is greater than 0.05f. But our data doesn't have any alpha values. Let's revisit the texture creation and add them.

C++
for( int nIndx = 0; nIndx < m_uImageCount; ++nIndx )
{
    // Read the frame
    Medfile.Read(chBuffer, m_uImageWidth*m_uImageHeight);

    // Set the properties of the texture.
    glBindTexture( GL_TEXTURE_2D, m_puTextureIDs[nIndx] );
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // Convert the data to RGBA data.
    // Here we are simply putting the same value to R, G, B and A channels.
    // Usually for raw data, the alpha value will
    // be constructed by a threshold value given by the user 

    for( int nPix = 0; nPix < m_uImageWidth*m_uImageHeight; ++nPix )
    {
        chRGBABuffer[nPix*4]   = chBuffer[nPix];
        chRGBABuffer[nPix*4+1] = chBuffer[nPix];
        chRGBABuffer[nPix*4+2] = chBuffer[nPix];
        chRGBABuffer[nPix*4+3] = 255;
        // chBuffer is plain (signed) char, so cast before comparing;
        // otherwise values above 127 would read as negative.
        if( (unsigned char)chBuffer[nPix] < 20 )
        {
            chRGBABuffer[nPix*4+3] = 0;
        }
    }

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_uImageWidth, m_uImageHeight , 0,
        GL_RGBA, GL_UNSIGNED_BYTE,(GLvoid *) chRGBABuffer );
    glBindTexture( GL_TEXTURE_2D, 0 );
}

void Render()
{
    float fFrameCount = m_uImageCount;
    glClear( GL_COLOR_BUFFER_BIT  | GL_DEPTH_BUFFER_BIT );

    glEnable( GL_ALPHA_TEST );
    glAlphaFunc( GL_GREATER, 0.5f );

    glEnable( GL_DEPTH_TEST );

    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);

    for ( int nIndx = 0; nIndx < m_uImageCount; nIndx++ )
    {
        glBindTexture( GL_TEXTURE_2D,  m_puTextureIDs[nIndx]);
        glBegin(GL_QUADS);
            MAP_2DTEXT( nIndx );
        glEnd();
        glBindTexture( GL_TEXTURE_2D, 0 );
    }
}

We check the pixel value and set its alpha to 0 (fully transparent) for dark pixels and to 255 for everything else. This temporary RGBA buffer is then supplied to the texture. Note that the internal format of the texture and the format of the buffer passed to glTexImage2D have changed. Render again.

Image 8

If we store different levels of alpha, we can then play with glAlphaFunc by giving different reference values. To do that, we simply make the alpha channel equal to the luminance: a pixel with luminance 0 gets alpha 0, a pixel with 255 gets alpha 255, and everything in between gets an alpha in between.
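That change amounts to one line in the conversion loop shown earlier: instead of the 0/255 threshold, copy the luminance straight into the alpha channel. Below is a sketch of the variation, not the exact code from the download.

C++
for( int nPix = 0; nPix < m_uImageWidth * m_uImageHeight; ++nPix )
{
    unsigned char ucVal = (unsigned char)chBuffer[nPix];
    chRGBABuffer[nPix*4]   = ucVal;
    chRGBABuffer[nPix*4+1] = ucVal;
    chRGBABuffer[nPix*4+2] = ucVal;
    chRGBABuffer[nPix*4+3] = ucVal;   // alpha follows the luminance
}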

Now if we try increasing the value of the second parameter in glAlphaFunc, we can see something like this:

Image 9

Apply Blend

Even though we have managed to get an overview of the data, it still doesn't look good. When we view an object in the real world, light passes through it according to its transparency, and what we see is a mix of the colors along the way. How do we achieve this? Remember blending in OpenGL? We can enable blending and draw each frame.

Remember to remove the depth test.

C++
void CRendererHelper::Render()
{
    float fFrameCount = m_uImageCount;
    glClear( GL_COLOR_BUFFER_BIT  | GL_DEPTH_BUFFER_BIT );

    glEnable(GL_BLEND);
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();
    glRotated( mfRotation, 0, 1.0,0 );

    glEnable(GL_TEXTURE_2D);

    for ( int nIndx = 0; nIndx < m_uImageCount; nIndx++ )
    {
        glBindTexture( GL_TEXTURE_2D,  m_puTextureIDs[nIndx]);
        glBegin(GL_QUADS);
            MAP_2DTEXT( nIndx );
        glEnd();
        glBindTexture( GL_TEXTURE_2D, 0 );
    }
} 

Now this looks pretty good, doesn't it?

Image 10

Let's Do Some Transformations on the Image Now

Rotation is simple with OpenGL. Use glRotate.

C++
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glRotated( mfRotation, 0, 1.0,0 );

Did you notice some issues here while rotating?

Image 11

Rotation is Not Proper after 90 Degrees

The 180 degree image looks exactly like a horizontally flipped version of the unrotated image. That would be fine if we were drawing only a 2D surface, but for 3D data this is wrong.

Why this behavior? We are mapping the textures to the quad, and rotation is applied to the quad. We are using blending to see the data and there is no depth test.

Let us consider the case where there is no rotation. We draw the 109 quads in order. The quad drawn at z = -1 maps the texture at index 0 and is drawn to the frame buffer first. On top of it comes the texture at index 1, and so on. As there is no depth test enabled, everything drawn gets blended.

Image 12

Consider a 180 degree rotation.

The drawing and mapping code is the same, but now a rotation is applied. When the first quad is drawn at z = -1, the rotation moves it to z = +1 and flips it horizontally. The texture at index 1 likewise ends up on the positive side of the axis, flipped (imagine looking at an image from the other side of the paper). In this way, all the textures get flipped and blended, resulting in a horizontally flipped image.

Image 13

Screen Goes Blank at 90 Degrees

As the rotation approaches 90 degrees, the image gradually loses detail and goes completely blank at 90 degrees. The same happens at 270 degrees. See the images.

Why is this Happening?

Because we are rotating the quads, there are fewer and fewer samples to blend as the rotation increases. At 90 degrees, all the quads we draw become parallel to the view direction. In other words, we never drew anything in the Y-Z plane, so when the X-Y plane is rotated 90 degrees there is nothing to see, as OpenGL doesn't draw the edge-on quads.

To get the actual image at 180 degree rotation, we need to have the texture at the other end drawn first.
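With the 2D texture approach, that would mean reversing the draw order whenever the back of the stack faces the camera, something like the sketch below. This is only my illustration of the idea, not the fix this article uses; the real fix follows.

C++
// Draw the slices farthest-first. Around 180 degrees the far end of the
// stack is the slice with the highest index, so the loop is reversed.
bool bReversed = ( mfRotation > 90.0 && mfRotation < 270.0 );
for( int n = 0; n < m_uImageCount; ++n )
{
    int nIndx = bReversed ? ( m_uImageCount - 1 - n ) : n;
    glBindTexture( GL_TEXTURE_2D, m_puTextureIDs[nIndx] );
    glBegin( GL_QUADS );
        MAP_2DTEXT( nIndx );
    glEnd();
    glBindTexture( GL_TEXTURE_2D, 0 );
}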

How Do We Fix This?

Texture Rotations

OpenGL supports a matrix called the texture matrix. We can apply transformations to it, and they take effect during texture mapping. With 2D textures, however, it does not help us here.

Move to 3D Textures

Image 14

3D textures bring the z axis into the texture itself. Just as the width and height are normalized to 1 in a 2D texture, the number of slices (the z axis) is also normalized to 1 in a 3D texture. How does this resolve the transformation problems we had with 2D textures?

Let us change the quad mapping code and transformations.

C++
#define MAP_3DTEXT( TexIndex ) \
glTexCoord3f(0.0f, 0.0f, ((float)TexIndex+1.0f)/2.0f);  \
glVertex3f(-dViewPortSize,-dViewPortSize,TexIndex);\
glTexCoord3f(1.0f, 0.0f, ((float)TexIndex+1.0f)/2.0f);  \
glVertex3f(dViewPortSize,-dViewPortSize,TexIndex);\
glTexCoord3f(1.0f, 1.0f, ((float)TexIndex+1.0f)/2.0f);  \
glVertex3f(dViewPortSize,dViewPortSize,TexIndex);\
glTexCoord3f(0.0f, 1.0f, ((float)TexIndex+1.0f)/2.0f);  \
glVertex3f(-dViewPortSize,dViewPortSize,TexIndex);

bool CRendererHelper::InitTextures3D( LPCTSTR lpFileName_i )
{
    CFile Medfile;
    if( !Medfile.Open(lpFileName_i ,CFile::modeRead ))
    {
        AfxMessageBox( _T( "Failed to read the raw data" ));
        return false;
    }

    // File has only image data. The dimension of the data should be known.
    m_uImageCount = 109;
    m_uImageWidth = 256;
    m_uImageHeight = 256;

    // Holds the texture IDs.
    m_puTextureIDs = new int[m_uImageCount] ;

    // Holds the luminance buffer
    char* chBuffer = new char[ m_uImageWidth * m_uImageHeight * m_uImageCount ];
    // Holds the RGBA buffer
    char* chRGBABuffer = new char[ m_uImageWidth * m_uImageHeight * m_uImageCount * 4 ];

    glGenTextures( 1, (GLuint*)&mu3DTex );

    Medfile.Read( chBuffer, m_uImageWidth * m_uImageHeight * m_uImageCount );

    glBindTexture( GL_TEXTURE_3D, mu3DTex );
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // Convert the data to RGBA data.
    // Here we are simply putting the same value to R, G, B and A channels.
    // Usually for raw data, the alpha value
    // will be constructed by a threshold value given by the user

    for( int nIndx = 0; nIndx < m_uImageWidth * m_uImageHeight * m_uImageCount; ++nIndx )
    {
        chRGBABuffer[nIndx*4]   = chBuffer[nIndx];
        chRGBABuffer[nIndx*4+1] = chBuffer[nIndx];
        chRGBABuffer[nIndx*4+2] = chBuffer[nIndx];
        chRGBABuffer[nIndx*4+3] = chBuffer[nIndx];
    }

    glTexImage3D( GL_TEXTURE_3D, 0, GL_RGBA,
        m_uImageWidth, m_uImageHeight, m_uImageCount, 0,
        GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)chRGBABuffer );
    glBindTexture( GL_TEXTURE_3D, 0 );
    delete[] chBuffer;
    delete[] chRGBABuffer;
    return true;
}

void CRendererHelper::Render()
{
    float fFrameCount = m_uImageCount;
    glClear( GL_COLOR_BUFFER_BIT  | GL_DEPTH_BUFFER_BIT );

    glEnable( GL_ALPHA_TEST );
    glAlphaFunc( GL_GREATER, 0.03f );

    glEnable(GL_BLEND);
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

    glMatrixMode( GL_TEXTURE );
    glLoadIdentity();
    // Translate and make 0.5f as the center 
    // (texture co ordinate is from 0 to 1.
    // so center of rotation has to be 0.5f)
    glTranslatef( 0.5f, 0.5f, 0.5f );
    glRotated( mfRotation, 0, 1.0,0 );
    glTranslatef( -0.5f,-0.5f, -0.5f );

    glEnable(GL_TEXTURE_3D);
    glBindTexture( GL_TEXTURE_3D,  mu3DTex );

    for ( float fIndx = -1.0f; fIndx <= 1.0f; fIndx+=0.003f )
    {
        glBegin(GL_QUADS);
            MAP_3DTEXT( fIndx );
        glEnd();
    }
}

Now the rotation is applied to the texture matrix, and each quad is mapped to the corresponding z position of the 3D texture. When we apply a rotation, the quads themselves are never transformed; they are always drawn in the XY plane. It is the texture volume they sample that gets rotated. So for a 180 degree rotation, the quad drawn at z = -1 maps texture coordinate 1 on the texture z axis. Because the slices are always drawn parallel to the screen, there is always a full stack of samples to blend in every direction. And thanks to texture interpolation, we can draw more planes than there are slices and simply map each one to its z position in the 3D texture.

When we rotate the texture, the wrap mode should be GL_CLAMP_TO_BORDER to get a correct image: rotated texture coordinates can go outside the 0-1 range, and we don't want any data to be drawn there.
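If desired, the border colour sampled by GL_CLAMP_TO_BORDER can be set explicitly; it defaults to (0, 0, 0, 0), which the alpha test already discards. The call below is a sketch and is not in the attached source.

C++
// Transparent black outside the 0-1 texture range; must be called while the
// 3D texture is bound.
GLfloat afBorder[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameterfv( GL_TEXTURE_3D, GL_TEXTURE_BORDER_COLOR, afBorder );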

Image 15

We have 256 pixels along the x axis, 256 along y, and 109 along z. See what happens when we rotate the image 90 degrees around y: the z side is now shown along the x axis. Since we map every axis to the range -1 to +1, the 109 pixels get stretched to -1 to +1 as well, and that side looks bigger than it should. We need to scale it to get the correct proportions. For real data we would normally get the pixel-per-mm spacing in x, y, and z and scale accordingly, but here it is just raw data, so I use the dimensions themselves for scaling. The width is mapped to -1 to +1 and the other sides are scaled relative to the width.

Also, I have applied a negative scale to the y axis to undo the vertical flip introduced by glTexImage3D (see the glTexImage2D data transfer above). The same effect could be achieved by mapping the texture upside down instead.

C++
glMatrixMode( GL_TEXTURE );
glLoadIdentity();

// Translate and make 0.5f as the center 
// (texture co ordinate is from 0 to 1. so center of rotation has to be 0.5f)
glTranslatef( 0.5f, 0.5f, 0.5f );

// A scaling applied to normalize the axis 
// (Usually the number of slices will be less so if this is not - 
// normalized then the z axis will look bulky)
// Flipping of the y axis is done by giving a negative value in y axis.
// This can be achieved either by changing the y co ordinates in -
// texture mapping or by negative scaling of y axis
glScaled( (float)m_pRawDataProc->GetWidth()/(float)m_pRawDataProc->GetWidth(), 
    -1.0f*(float)m_pRawDataProc->GetWidth()/(float)m_pRawDataProc->GetHeight(), 
    (float)m_pRawDataProc->GetWidth()/(float)m_pRawDataProc->GetDepth());

// Apply the user provided transformations
glMultMatrixd( m_pTransformMgr->GetMatrix());

glTranslatef( -0.5f,-0.5f, -0.5f );  

Image 16

Points of Interest

If you want to play with data of different dimensions and types (LUMINANCE 16, etc.), changes have to be made in CVolumeRenderingDlg::OnInitDialog and in CRawDataProcessor::ReadFile. If the dimensions are not a power of 2, the pixel alignment has to be corrected before calling glTexImage3D.
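For the alignment part, the usual fix is to relax the unpack alignment before the upload; below is a minimal sketch with assumed variable names, not code from the attached source.

C++
// OpenGL expects each source row to start on a 4-byte boundary by default,
// which breaks single-channel (GL_LUMINANCE) uploads whose width is not a
// multiple of 4. Relax it to byte alignment before uploading the volume.
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glTexImage3D( GL_TEXTURE_3D, 0, GL_LUMINANCE,
              uNewWidth, uNewHeight, uNewDepth, 0,
              GL_LUMINANCE, GL_UNSIGNED_BYTE, (GLvoid*)pLuminanceBuffer );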

History

  • 26th October, 2012 - Initial version
  • 31st October, 2012 - Updated article
  • 30th June, 2013 - Fixed a mistake in the code. Thanks to Englishclive for pointing it out.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

