
Half Life Game Level Viewer

DirectX-based application to open and view Half Life 1 game files

[Screenshot: HLPic2.JPG]

Introduction

A few years ago I became interested in first person shooter games and in particular how the world levels are created and rendered in real time.  At the same time I found myself in between jobs, so I embarked on an effort to learn about 3D rendering with the goal of creating my own 3D rendering engine.  Since I am a developer and not an artist I didn’t have the skills to create my own models, levels, and textures.  So I decided to attempt to write a rendering engine that would render existing game levels.  I mainly used information and articles I found on the web about Quake 2, Half Life, WAD files, and BSP files.  In particular I found the articles Michael Abrash wrote for Dr. Dobb's Journal while working at id Software to be very illuminating.

I had a lot of fun writing this application and it occurred to me that others who are interested in 3D game development might find the source code useful in learning about 3D rendering.  This application currently only loads and renders Half Life 1 levels.  However, I believe that the newer Half Life 2 environments still use BSP files that are extended for the newer rendering features (but I haven’t looked into this in any great detail).  If it is true that Half Life 2 levels use an extended version of the older BSP files, then someone should be able to take this source code and likewise extend it to render these newer level files.  This application does not do any animation, so doors don’t open and elevators don’t move.  But all of the information for entity and model animation resides in the BSP files, and anyone with interest should be able to add these animations using this source code as a starting point.

Note that I have no affiliation whatsoever with either Valve or id Software, and that I created this rendering application all on my own.  Also please note that this project does not include any Half Life BSP or WAD files.  These files are the intellectual property of Valve, so to obtain them for use in this viewer application you will need to purchase the game from Valve.  You can get the Half Life 1 game (for only $9.99) from Valve at http://store.steampowered.com/app/70/.  This is what I did, and then I simply searched for the BSP and WAD file extensions in my Half Life game install directory.  Instructions on how to arrange the binary, BSP, and WAD files so that you can run the level viewer application are provided in the “Using the Code” section below.

Background

This is a Windows application, using DirectX 9, which reads Half Life 1 BSP and WAD files and renders the static levels with textures and light maps applied.  It also renders scene entities, which are additional objects such as crates, doors, windows, grates, and elevators.  This application does not support animation, so these scene entities are static.  To allow navigation through the levels, most of the entities that are interactive or animated in the game are rendered here without collision detection so that you can simply walk through them.  They are also rendered with a small amount of transparency to indicate that they are not solid objects.

The levels are rendered in first person perspective, so you, the player, are the camera viewing the level environment.  You can move the camera through the levels in typical FPS fashion using keyboard and mouse input to look, run/walk/strafe, crouch, jump, and wall slide.  The camera object includes a bounding-box roughly the size of the Half Life character in the game, and collision detection is implemented so that you can navigate through the levels moving up and down stairs and ramps, perform wall sliding, etc.  Since elevators and other animations are not implemented there is a “levitate” feature so that you can move vertically to areas only accessible via elevators and ladders.  There is also a simple flashlight feature (implemented using a flashlight light map and a simple vertex shader) that lets you illuminate the darker areas such as vents.
The application runs in windowed mode and full screen mode.  When in windowed mode, viewing (looking) can only be done using keyboard commands, since the mouse is used for accessing window menus.  When in full screen mode you can use the mouse to look.  Keyboard commands are defined through a map object that can be changed in code but is not currently modifiable through the application UI.

Using the code

Requirements
·    PC running Windows XP OS plus DirectX 9 (should run on Vista but I haven’t tried it)
·    DirectX 9 compatible 3D video card
·    D3Dx9_27.dll (D3DX helper library, freely distributed by Microsoft)

Directions
Make sure all required binaries exist in one directory:
·    HLViewer.exe        (Main application executable file for viewer)
·    GraphicsEngine.dll    (DLL containing all rendering code)
·    D3Dx9_27.dll        (D3DX helper library from Microsoft, used for shader compiling)
·    VertexShader1.fx    (Simple vertex shader for flashlight effect)
Run the “HLViewer.exe” executable and use the File-Open menu to open a BSP file.  The directory that contains the BSP file must also contain all WAD (texture) files referenced by the BSP file.  For example a “maps” directory might look like:
·    c1a0.bsp (first HL level file)
·    halflife.wad
·    cached.wad
·    decals.wad
·    gfx.wad
·    liquids.wad
·    spraypaint.wad
·    xeno.wad

Controls
Mouse           - moves player look view (full screen mode only)
'w' key          - move forward
's' key           - move backward
'a' key           - move (strafe) left
'd' key           - move (strafe) right
'f' key            - toggle flashlight
'x' key            - levitate  (windowed mode only)
'Arrow keys'    - look left/right, up/down
'Space' key     - jump
'Shift' key       - Run
'Control' key    - crouch
'Tab' key         - toggle windowed / full screen modes
Rt Mouse Btn    - levitate  (full screen mode only)

Project Overview
This Half Life level viewer rendering application is written in C++ and is organized into a single Microsoft Visual Studio solution (“ZGraphics”).  I used Visual Studio 2003, but the solution and projects should open and build fine in any later version.

To build these projects you will need the DirectX 9 SDK from Microsoft; make sure the projects have the correct paths to its include and lib directories.  You can download the latest SDK from: http://msdn.microsoft.com/en-us/directx/aa937788.aspx.

The ZGraphics solution contains two projects: Application project and GraphicsEngine project. 

The Application project is small and contains the CApplication class, which handles all Windows functions such as the message pump, render loop, menu handling, mouse and keyboard input handling, graphics initializing and loading, and persisted settings.  This project builds into the HLViewer.exe executable file.

The GraphicsEngine project contains most of the really interesting parts and includes code for loading and parsing the BSP file, loading textures from the WAD files, creating and manipulating the FPS camera, rendering the static geometry and entities, and performing collision detection.  This project builds into the GraphicsEngine.dll binary file, which is referenced by the HLViewer.exe executable.

My original intention was to make this code portable to other windowing UI systems and 3D rendering APIs.  However, I only created a Windows version using DirectX, and in the interest of time I didn’t keep the boundaries as clean as I originally intended.  But the program is built around four interfaces intended to abstract out platform-specific functions (a sketch of one such interface follows the list below):
·    IWApplication    - Application window interface.
·    ISceneGraph     - Load and render scenes.  Manipulate camera, collision detection.
·    ICamera           - Create and manipulate the FPS camera.
·    IRenderer         - 3D rendering API (pretty much just a wrapper for Direct3D).
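
To give a flavor of what these abstractions look like, here is an illustrative sketch of a renderer-style interface; the method names and signatures below are simplified assumptions, not the project's actual declarations:

class IRenderer
{
public:
  virtual ~IRenderer() {}

  // Create the underlying 3D device for the given window.
  virtual bool Initialize(void * hWnd, bool bWindowed) = 0;

  // Frame lifetime and drawing (in this project these mostly wrap Direct3D calls).
  virtual long Clear(unsigned long dwColor) = 0;
  virtual long BeginScene() = 0;
  virtual void SetTexture(int nStage, void * pTexture) = 0;
  virtual void DrawPrimitives(const void * pVertices, int cTriangles) = 0;
  virtual void EndScene() = 0;
  virtual void Present() = 0;
};

The DXRenderer class described later implements this kind of interface by forwarding the calls to Direct3D 9.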

Helper templates
I wanted to learn more about C++ generics and so I decided to create my own template-based collection, math, and sorting classes.  These classes are in subdirectories under the GraphicsEngine source directory.  The subdirectories are (an illustrative sketch follows the list):
·    Collections    - contains array, list, map, set, string classes.
·    Math            - contains 3- and 4-dimensional matrix and vector classes.
·    Sorting         - contains sorting algorithms QuickSort, HeapSort, MergeSort.
·    MemoryMgr    - contains a simple pool object allocation class.
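
The following is purely illustrative of the style of these helpers (the project's actual Array/List/Map classes have their own interfaces): a minimal growable array template.

template <typename T>
class SimpleArray
{
public:
  SimpleArray() : m_pData(0), m_cCount(0), m_cCapacity(0) {}
  ~SimpleArray() { delete [] m_pData; }

  void Add(const T & item)
  {
    if (m_cCount == m_cCapacity)
      _grow(m_cCapacity ? m_cCapacity * 2 : 8);   // double capacity as needed
    m_pData[m_cCount++] = item;
  }

  T &       operator[](int n)       { return m_pData[n]; }
  const T & operator[](int n) const { return m_pData[n]; }
  int       Count() const           { return m_cCount; }

private:
  // Copying is omitted for brevity; a real class needs a copy constructor and
  // assignment operator (or must disallow them).
  void _grow(int cNew)
  {
    T * pNew = new T[cNew];
    for (int i = 0; i < m_cCount; i++)
      pNew[i] = m_pData[i];
    delete [] m_pData;
    m_pData = pNew;
    m_cCapacity = cNew;
  }

  T * m_pData;
  int m_cCount;
  int m_cCapacity;
};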

Application project
The two interesting classes here are the CInput class and the CApplication class.  The CInput class wraps the DirectX 8 mouse and keyboard input functionality.  The CApplication class implements the IWApplication interface and handles all Windows functions, as well as containing the rendering loop.  This project builds the HLViewer.exe executable file and references the GraphicsEngine.dll binary.

GraphicsEngine project
This is the project that does all of the interesting work.  It builds into the GraphicsEngine.dll binary and is referenced by the main HLViewer.exe application.  The ICamera, IRenderer, and ISceneGraph interfaces are all implemented in this project.

FPS Camera
In this Half Life viewer application the camera is more than a traditional 3D camera that defines viewing parameters used for scene projection.  In an FPS game the camera also represents the player and since a player can run and jump through the levels I included this functionality in the Camera object.  So the ICamera interface defines support for bounding-box, moving, jumping, and crouching/rising as well as viewing information.  The main classes that implement ICamera are FPSCamera, CFrustum, and CJump. 

The CFrustum class contains viewing information such as aspect ratio and focal length along with the geometric planes that define the viewing frustum.  In addition it contains a public method that tests a given bounding-box against the frustum and returns true if any part of the bounding-box lies within it.  This method is used to cull geometry that can’t possibly be visible because it lies outside the current viewing frustum.
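
A common way to implement such a test, shown here as a sketch with assumed types rather than the class's actual code, is to check, for each of the six frustum planes, the box corner farthest along the plane normal (the “positive vertex”); if even that corner is behind some plane, the box is entirely outside the frustum:

struct FrustumPlane { float nx, ny, nz, d; };   // points p with (n . p) - d >= 0 are on the inside

bool BoxIntersectsFrustum(const FrustumPlane planes[6],
                          const float vMin[3], const float vMax[3])
{
  for (int i = 0; i < 6; i++)
  {
    // Pick the box corner farthest in the direction of the plane normal.
    float px = planes[i].nx >= 0.0f ? vMax[0] : vMin[0];
    float py = planes[i].ny >= 0.0f ? vMax[1] : vMin[1];
    float pz = planes[i].nz >= 0.0f ? vMax[2] : vMin[2];

    if (planes[i].nx * px + planes[i].ny * py + planes[i].nz * pz - planes[i].d < 0.0f)
      return false;   // the whole box is outside this plane
  }
  return true;        // inside or intersecting (conservative answer)
}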

The FPSCamera class encapsulates everything needed for an FPS camera, so it contains the frustum objects along with all information necessary to move the camera inside a level.  There is support for walking/running, strafing, jumping (with simple gravity), crouching, and levitation.  It also includes bounding boxes that define the player character extents while both “standing” and “crouching”.  The bounding-box is used for detecting collisions with the environment such as walls, floors, ceilings, and entity objects.  Since moving the camera involves motion over time, there are velocity parameters and a method to update all parameters based on the time elapsed during each rendering instance.  So, really, the camera object is the only thing animated in this level viewer.  Keyboard and mouse input is used to modify camera motion parameters.  For example, the standard ‘a’, ‘s’, ‘d’, ‘w’ keys are used to move in one of four directions.  The arrow keys or the mouse are used to change viewing direction (up, down, and side to side).  There is also support to make the player crouch and then rise up from the crouch.  Collision detection is used in conjunction with keyboard input to prevent the player from rising from a crouch when underneath something and poking his head through the level geometry.  This is particularly useful when crawling through vents.
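
The heart of the per-frame update is simple time-based integration.  The following is a minimal sketch, assuming a Z-up world and velocities already set from keyboard input; the member names and gravity constant are illustrative, not the actual FPSCamera internals:

struct CameraState
{
  float fPos[3];     // position in level units (Z up, as in Half Life)
  float fVel[3];     // current velocity
  bool  bOnGround;   // set by the collision code
};

void UpdateMotion(CameraState & cam, float fElapsedSec)
{
  const float GRAVITY = 800.0f;                  // illustrative value, units/sec^2
  if (!cam.bOnGround)
    cam.fVel[2] -= GRAVITY * fElapsedSec;        // simple gravity while jumping/falling

  for (int i = 0; i < 3; i++)
    cam.fPos[i] += cam.fVel[i] * fElapsedSec;    // integrate position over the frame time

  // Collision detection/response then corrects fPos so the camera bounding-box
  // does not penetrate the level geometry (see Collision Detection below).
}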

DX Rendering API
When I started this project I had little to no experience with 3D APIs.  Originally I intended to make the IRenderer interface very abstract so that an implementation could be created for any existing 3D API (like DirectX or OpenGL).  But time didn’t allow researching two different 3D APIs and so I just went with DirectX.  I don’t know how realistic it is to try and make a general rendering interface but it might be worth looking into, especially if someone wanted to port this to Linux and/or OpenGL.

The IRenderer interface is implemented in the DXRenderer class and in most cases the methods pass directly to D3D APIs.  This class also handles initializing Direct3D and creating all necessary devices.  Note that here I extensively used D3D settings, device and capability enumeration code that is provided by the Microsoft DirectX 9 SDK.

BSP Data
The BSP data contained in the game level BSP file (e.g., c1a3.bsp) completely defines that level and all entities and models in that level, with the exception of textures.  When I was researching this a few years ago there were quite a few websites that provided information on how this file is structured, and I used this information to create helper classes for loading level information into memory and providing access to it.  I haven’t looked into this very much, but I believe that the newer Half Life 2 level files are extensions of these older Quake 2 based BSP files, so it should be possible to extend these helper classes to load the newer level information.

The BSP file is organized into different sections (located by file offsets) that are called “data lumps”.  Each data lump contains an array of records whose structure is defined by a C struct data type.  The BSP file data structs are defined in BSPFileDefs.h.  There is a helper class, BSPFile, which will open a BSP file and read each data lump into an Array object.  There is another helper class, BSPData, which holds all of the data lumps read in by the BSPFile object.  It also contains a BSP tree object created from the BSP lump data and helper methods for point, ray, plane, and bounding-box intersections used for collision detection.  This object also contains visibility information for BSP leafs and entities, used during rendering to cull non-visible geometry.  To understand this class you will need to read up on the Quake 2 and Half Life BSP file formats.  I also highly recommend Michael Abrash’s articles on BSP trees and visible surface determination.
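
For orientation, the top of a Half Life 1 BSP file (format version 30) looks roughly like the following; the project's own definitions live in BSPFileDefs.h and may use different names:

struct bspf_lumpentry
{
  int nOffset;    // byte offset of this data lump from the start of the file
  int nLength;    // length of the lump in bytes
};

struct bspf_header
{
  int            nVersion;    // 30 for Half Life 1
  bspf_lumpentry Lumps[15];   // entities, planes, textures, vertices, visibility,
                              // nodes, texinfo, faces, lighting, clipnodes,
                              // leaves, marksurfaces, edges, surfedges, models
};

Reading a lump then amounts to seeking to nOffset, reading nLength bytes, and treating the buffer as an array of nLength / sizeof(record) structs, which is essentially what the Load* methods used below do.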

Entity objects are also part of the BSP file.  Entity objects are objects residing inside the level static geometry that are animated or may cause some trigger to occur in the game.  Some entities are to be rendered (such as doors, crates, etc.) and others are only used to trigger actions and are not meant to be rendered.  Entities rendered in this viewer application are not active and so I skip doing collision detection with them and let the user walk through them.  In addition I render them with some transparency to indicate that these are not solid objects.  Doors, windows, and breakable crate entities are all treated this way.  I couldn’t find much information regarding entities and so I had to figure out the different kinds through class names and experimentation.  The result is that some entities are “solid” and cannot be passed through.  Other entities which are clearly not meant to be rendered are sometimes rendered (such as trigger points) and look weird.  It wouldn’t be difficult to find each of the cases and add code to deal with them but I decided to move on to other things.

One other item of note is that these entity objects are not part of the BSP tree or visibility sets (as far as I can tell) and so I do a preprocessing step (at the time the BSP file is loaded and parsed) that maps level geometry BSP leafs to all entities that reside in them.  There are two maps.  One object maps potentially viewable entities to each BSP leaf so that only entities that might be viewable continue through the rendering code.  The other object maps entities, which reside in whole or in part inside a BSP leaf, to that leaf.  Any entities that intersect the BSP leaf bounding-box are included in the map.  During collision detection with entity objects only those entities that exist in the same BSP leaf as the character are tested for intersection with the camera bounding-box.

void BSPData::LoadData(const char * pszFilename)
{
  _cleanUp();

  BSPFile bspFile;
  try
  {
    bspFile.Open(pszFilename);

    // Load all lumps from BSP file
    m_pVertices = bspFile.LoadVertices();
    assert(m_pVertices);

    m_pFaces = bspFile.LoadFaces();
    assert(m_pFaces);

    DataLump<bspf_plane> * pPlanes = bspFile.LoadPlanes();
    m_pSPlanes = _convertToSPlane(pPlanes);
    assert(m_pSPlanes);
    DELETE_PTR(pPlanes);

    m_pEdges = bspFile.LoadEdges();
    assert(m_pEdges);

    m_pFaceEdges = bspFile.LoadFaceEdgeTable();
    assert(m_pFaceEdges);

    m_pTextInfo = bspFile.LoadTextureInfo();
    assert(m_pTextInfo);

    m_pTextLump = bspFile.LoadTextureLump();
    assert(m_pTextLump);

    m_pLeafs = bspFile.LoadLeaves();
    assert(m_pLeafs);

    m_pLeafFaces = bspFile.LoadLeafFaceTable();
    assert(m_pLeafFaces);

    m_pVisData = bspFile.LoadVisibility();
    assert(m_pVisData);

    m_pNodes = bspFile.LoadBSPNodes();
    assert(m_pNodes);

    m_pLightMaps = bspFile.LoadLightMaps();
    assert(m_pLightMaps);

    m_pEntities = bspFile.LoadEntities();
    assert(m_pEntities);

    m_pModels = bspFile.LoadModels();
    assert(m_pModels);

    bspFile.Close();

    // Create bsp visibility set data
    _decompressVisSets();

    // Create the BSP tree
    _buildBSPTree();

    // Create entity and entity visibility set data
    _createEntityData();
  }
  catch (char * pszMessage)
  {
    bspFile.Close();
    throw(pszMessage);
  }
}
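
The entity preprocessing mentioned earlier (the _createEntityData call at the end of LoadData) amounts to a bounding-box overlap test between each entity and each BSP leaf.  Here is a hedged sketch of that idea using standard containers; the actual code uses the project's own collection classes and different names:

#include <vector>

struct Box { float vMins[3]; float vMaxs[3]; };   // axis-aligned bounding-box

static bool BoxesOverlap(const Box & a, const Box & b)
{
  for (int i = 0; i < 3; i++)
    if (a.vMaxs[i] < b.vMins[i] || a.vMins[i] > b.vMaxs[i])
      return false;   // separated along this axis
  return true;
}

// For every BSP leaf, record the indices of all entities whose bounding-box
// touches that leaf's bounding-box.
void MapEntitiesToLeafs(const Box * pLeafBoxes, int cLeafs,
                        const Box * pEntityBoxes, int cEntities,
                        std::vector< std::vector<int> > & leafEntities)
{
  leafEntities.assign(cLeafs, std::vector<int>());
  for (int nLeaf = 0; nLeaf < cLeafs; nLeaf++)
    for (int nEnt = 0; nEnt < cEntities; nEnt++)
      if (BoxesOverlap(pLeafBoxes[nLeaf], pEntityBoxes[nEnt]))
        leafEntities[nLeaf].push_back(nEnt);   // entity is relevant to this leaf
}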


 
Texture Data
All textures for BSP levels reside in WAD files.  Each BSP level file references one or more WAD files.  The helper class (Textures) facilitates loading textures for a BSP file into memory by taking a BSPData object for the selected level and querying for the “worldspawn” entity to find all WAD files referenced by that level.  Then each of these WAD files is opened and all textures referenced by the BSP face data are loaded into a texture cache.  The textures in WAD files are all palettized, so there is a helper function (_createRGBTexture) to convert them to ARGB textures.

Note that these textures cannot be used as-is with the DX renderer, so there is another conversion step and cache in the HFBSPGraph object that converts these textures to the usable DX version.  Light map information is stored in a BSP data lump.  The Textures class includes a public method that takes this light map data and creates an RGB texture that can be used by the rendering object (HFBSPGraph class).

// Load texture map
for (int n=0; n<BSPData.TextInfo()->m_cSize; n++)
{
  int nMipTex = BSPData.TextInfo()->m_pArray[n].nMipTex;

  if (m_Textures.IsInMap(nMipTex) == 0)
  {
    // Load texture from WAD file and add to map
    wtexture WTexture;
    const bspf_miptex * pMipTex = BSPData.MipInfoPtr(nMipTex);
    for (int i=0; i<(int)wadFiles.GetArrayCount(); i++)
    {
      WTexture.pTexture = (wadFiles[i])->LoadTexture(pMipTex->szname);
      if (WTexture.pTexture)
        break;
    }

    assert(WTexture.pTexture);

    if (WTexture.pTexture != 0)
    {
      // Convert palettized texture into DWORD XRGB texture
      _createRGBTexture(WTexture.pTexture, &WTexture.RGBTexture);
      assert(WTexture.RGBTexture.pRGBTex);

      // Add texture to map
      m_Textures.Add(nMipTex, WTexture);
    }
  }
}
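
The _createRGBTexture conversion referenced in that loop is essentially a palette lookup: each byte of the WAD texture indexes a 256-entry RGB palette and produces one 32-bit pixel.  A minimal sketch of that step (the parameter names are assumptions, not the actual function signature):

void PalettizedToXRGB(const unsigned char * pIndices,   // width*height palette indices
                      const unsigned char * pPalette,   // 256 * 3 bytes of RGB
                      unsigned long * pOut,             // width*height output pixels
                      int nWidth, int nHeight)
{
  for (int i = 0; i < nWidth * nHeight; i++)
  {
    const unsigned char * pRGB = &pPalette[pIndices[i] * 3];
    pOut[i] = 0xFF000000 |                       // opaque alpha (XRGB)
              ((unsigned long)pRGB[0] << 16) |   // red
              ((unsigned long)pRGB[1] << 8)  |   // green
               (unsigned long)pRGB[2];           // blue
  }
}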


Collision Detection
Collision detection was probably the most difficult part of this project to get right.  Most of the collision detection code, i.e., the code that finds intersections between level geometry/entities and the camera bounding-box, is in the BSPData class, along with the helper classes PolyFace and PolyObject.  My thinking here was that the BSPData class should provide an intersection detection service between a passed-in bounding-box and the data contained inside the class.  The scene object (HFBSPGraph class) contains the camera object and is responsible for (among other things) preventing the camera object (bounding-box) from penetrating the level geometry or entities.  It does this by passing the camera bounding-box into an intersection detection method in the BSPData class.  What is returned is information about whether any intersection has occurred, and if so, the intersection information (intersection plane, point, and penetration depth) is passed back.  Note that there can be multiple intersections with various objects and geometry in the level.  The scene object then has to figure out how to adjust the camera position so that it doesn’t penetrate into any geometry.  The camera position is adjusted perpendicularly to the intersection plane but is allowed to move along the plane to effect a “wall slide”.

The collision detection works very well for the most part but is not perfect.  There may be a better way to detect and collect geometry intersections and I would be very interested in hearing about it (one approach I have heard of is “pushing” a bounding-box through the BSP tree, carving the box into multiple polygons until a face intersection is detected, but it is not clear to me whether it is any better than what I am doing here).  Actually, all of the problems I have found so far are not due to collision detection, but instead to a failure in the scene code to adjust the camera position correctly in response to the collision.

In any case I try to do collision detection in an efficient manner by quickly rejecting large swaths of geometry that cannot possibly collide with the camera, and then performing more exact tests where an intersection is possible.  In the case of entity objects I only test entities that are pre-computed to exist in the same BSP leaf as the camera, and this test is fast because each entity comes with its own bounding-box.  For the level geometry I walk the BSP tree performing bounding-box intersection tests between the camera and the BSP node boundary.  Each BSP node has a bounding-box associated with it that contains the space it carves out, including all of the child nodes underneath it.  If no intersection is detected between the camera bounding-box and the node bounding-box then that entire node is rejected.  If a potential intersection exists then the camera is next tested against the splitting plane associated with that node and its child nodes.  If a splitting plane intersection is detected then a final test is performed for an intersection with an actual rendered face.

Note that liquid content such as water and lava is detected in the collision routines and specifically ignored so that the player can walk through and submerge into liquid content.  However, the liquid textures are not currently animated as they are in the game.

void BSPData::BBIntersectGeometry(const Vector3f vBB[2], Array<SectInfo>& aIntersections) const
{
  _bbIntersectGeom_r( m_pBSPTree->Head(), vBB, aIntersections );
}
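
The recursive walk behind this entry point can be structured along the following lines.  This is only a sketch (the node members and helper functions here are assumptions, and the real code also tests the node's splitting plane as described above before doing per-face work), but it shows how whole subtrees are rejected with a cheap bounding-box test:

void BBIntersectNode_r(const BSPNode * pNode, const Vector3f vBB[2],
                       Array<SectInfo> & aIntersections)
{
  if (pNode == 0)
    return;

  // If the camera box does not touch this node's bounding volume, every face
  // in the subtree can be skipped.
  if (!BoxesOverlap(pNode->vMins, pNode->vMaxs, vBB))
    return;

  if (pNode->IsLeaf())
  {
    // Exact tests against the faces stored in this leaf; hits are appended
    // to aIntersections (plane, point, penetration depth).
    TestLeafFaces(pNode, vBB, aIntersections);
    return;
  }

  BBIntersectNode_r(pNode->pFront, vBB, aIntersections);
  BBIntersectNode_r(pNode->pBack,  vBB, aIntersections);
}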


Matrices
It took me a while to get the world matrix correct for the Half Life level data.  It turns out that Direct3D uses a left-handed coordinate system while the Half Life level data uses a right-handed system.  So when creating the world transformation matrix I had to take this into account.  The projection matrix is computed based on the viewing information (contained in the FPSCamera object) and depends on aspect ratio, focal length, etc.  The aspect ratio is computed from the screen aspect ratio, both for windowed and full-screen modes.  The view matrix is recomputed each time through the render loop since the view orientation will likely change through user input.  All three matrices are handed to the rendering object where Direct3D uses them to render the scene.
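
One common way to express that adjustment, shown here only as a sketch of the idea rather than the project's exact matrix, is a world matrix that swaps the Y and Z axes, mapping Half Life's right-handed, Z-up coordinates into Direct3D's left-handed, Y-up convention (swapping two axes flips handedness, so no additional negation is needed).  Using Direct3D's row-vector convention (v' = v * M):

float mtxWorld[4][4] =
{
  { 1.0f, 0.0f, 0.0f, 0.0f },   // HL x      -> D3D x
  { 0.0f, 0.0f, 1.0f, 0.0f },   // HL y      -> D3D z
  { 0.0f, 1.0f, 0.0f, 0.0f },   // HL z (up) -> D3D y (up)
  { 0.0f, 0.0f, 0.0f, 1.0f },
};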

Placing Camera
When the level is first loaded the camera (player) needs to be placed in a valid location inside the level geometry.  In the game this is probably done by passing location information between level transitions.  For this level viewer application I use an entity named “info_player_start”.  This entity has an origin coordinate that I use as the starting position of the camera. 

Once the camera is safely inside the level the rendering loop begins and the user can move the camera around using keyboard commands, with camera motion updated for each rendering time slice.  But if the user moves the camera into a wall or entity object then this must be prevented and the camera location adjusted accordingly.  There is a method in the HFBSPGraph class (_adjustCamPosition) that uses the collision detection methods in the BSPData object and, if any collision is found, adjusts the camera position appropriately.  This code turned out to be pretty complex because I wanted the player to be able to walk along walls and tables, walk up and down stairs, walk up ramps of a specified maximum elevation angle, jump from one level to another, levitate, crouch, and rise.  This means testing the camera bounding-box in all directions and providing special behavior for lateral and vertical movements.  This all works reasonably well but it isn’t perfect and there is room for improvement.

void HFBSPGraph::_placeCamera()
{
  const EntVars * pEnt = m_BSPData.FindEntity(cString("info_player_start"));
  if (pEnt)
  {
    m_pCamera->Position() = pEnt->vOrigin;
    m_pCamera->SetYawAngle(pEnt->fYawAngle);
  }
  else
  {
    m_pCamera->SetYawAngle(0.0f);
  }

  m_pCamera->SetCrouched(false);

  // Make sure camera is not embedded in floor
  _adjustCamPosition(Vector3f::cZero, true);
}
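
The “wall slide” adjustment mentioned in the Collision Detection section comes down to removing the component of the attempted movement that points into the intersection plane and keeping the part parallel to it.  A sketch of that idea (the names are illustrative, not the actual _adjustCamPosition code; the plane normal is assumed to be unit length and to point from the surface toward the player):

void SlideAlongPlane(const float vMove[3], const float vPlaneNormal[3], float vOut[3])
{
  // Signed amount the move travels along the plane normal.
  float fInto = vMove[0] * vPlaneNormal[0] +
                vMove[1] * vPlaneNormal[1] +
                vMove[2] * vPlaneNormal[2];

  if (fInto > 0.0f)
    fInto = 0.0f;       // moving away from the surface; nothing to cancel

  for (int i = 0; i < 3; i++)
    vOut[i] = vMove[i] - fInto * vPlaneNormal[i];   // keep only the tangential part
}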

Collecting visible geometry
Once the camera is placed for a particular rendering instance it can be located within the BSP tree and visible faces are collected for that location.  Visible faces are collected in two array objects, one for the static level geometry (as defined by the BSP tree) and the other for any potentially visible entity objects. 

For the level geometry the BSP tree is walked, starting at the leaf the camera currently resides in, in front-to-back order.  Since BSP nodes contain bounding-boxes, each node is checked against the viewing frustum and if there is no intersection (i.e., the BSP node does not reside inside the viewing frustum) then it and all of its children are quickly rejected.  In addition, any candidate leaf that is not in the potentially visible set (PVS) of the leaf the camera is in is rejected as well (note that BSP leafs contain all face information; the non-leaf nodes only contain splitting plane information).  What is left at the end is a subset of all faces that have a good probability of being visible and therefore must be rendered.  I walk the BSP tree in front-to-back order because I read somewhere that much z-buffer capable hardware can reject hidden pixels more efficiently when rendering front to back.  I don’t know how true this is or even if it makes a real performance difference, but since I have the BSP tree I thought I might as well use it.  There may also be good arguments for walking the BSP tree in back-to-front order (painter's algorithm), and it wouldn’t be difficult to make this change.

For the entity objects I use the pre-computed map of visible entity objects that was created while loading entities.  This map lists all potentially visible entity objects from the leaf node that the camera is currently in.

There is a final check that further culls the face lists to remove any faces that are pointing away from the camera or that reside outside the viewing frustum.  This is probably overkill given the capability of today’s hardware, but since this was a learning experience for me I wanted to go the extra mile in culling all non-visible faces.  This desire was probably due to the Michael Abrash articles I read, which were written at a time when squeezing out the last bit of performance from code was very important.

const Array<int> * HFBSPGraph::_visibleLeafsFtoB()
{
  assert(m_pCamLeaf);

  m_VisLeafs.Clear();

  // Get visibility set for this leaf
  const unsigned char * pVisSet = m_BSPData.VisSet(m_BSPData.Leafs()->m_pArray[m_pCamLeaf->nLeaf].ofsCluster);
  assert(pVisSet);

  // Walk BSP tree front to back, culling nodes outside camera frustum, collecting faces to draw
  // only in visible leaves (using PVS).
  const BSPNode * pCurrNode = m_pCamLeaf;
  const BSPNode * pParent = m_pCamLeaf->pParent;
  _collectLeafsFtoB_r(pCurrNode, &m_VisLeafs, pVisSet);
  while (pParent)
  {
    if (pParent->pFront == pCurrNode)
    {
      _collectLeafsFtoB_r(pParent->pBack, &m_VisLeafs, pVisSet);
    }
    else
    {
      _collectLeafsFtoB_r(pParent->pFront, &m_VisLeafs, pVisSet);
    }

    pCurrNode = pParent;
    pParent = pCurrNode->pParent;
  }

  return &m_VisLeafs;
}
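
The final per-face cull mentioned above can reject back-facing faces with a single dot product: if the camera position lies behind the plane a face sits in, the face points away from the camera and cannot be visible.  A sketch with assumed parameter names:

bool IsFaceFrontFacing(const float vFaceNormal[3], float fPlaneDist,
                       const float vCamPos[3])
{
  // Signed distance of the camera from the face's plane.
  float fSide = vFaceNormal[0] * vCamPos[0] +
                vFaceNormal[1] * vCamPos[1] +
                vFaceNormal[2] * vCamPos[2] - fPlaneDist;

  return fSide > 0.0f;   // camera is on the front side of the plane
}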

Flashlight
The initial motivation for the flashlight feature was to be able to see better in some of the darker areas of the levels.  I made a cheap flashlight by hand-creating a “light map” texture: a circular spot whose intensity diminishes toward the outer radius of the texture.  At first I implemented this by projecting this new light map over the whole scene when the user selects the flashlight option.  The problem with this implementation is that the light map is applied flat to the whole scene as a texture operation after it has been rendered, so the flashlight shape is always circular.  I attempted to correct this by doing a little research on shaders and then creating a vertex shader that scales the flashlight light map texture coordinates based on the depth coordinate of the vertex.  This makes for a much more realistic looking flashlight effect, but since only the texture coordinates at geometry vertices are scaled, there are visible triangle artifacts where the light map is applied to large faces.  A better flashlight effect could be created with pixel rather than vertex shaders, but I ran out of time before I could look further into it.

The vertex shader is very simple.  It exists in the VertexShader1.fx file, which is compiled during application initialization using the D3DX helper library (D3Dx9_27.dll).

VS_OUTPUT VS_Flashlight(
    float4 inPos : POSITION,    // Vertex position in HL space
    float2 inTex0 : TEXCOORD0,    // tex 0 coordinate, precomputed light-map coordinate
    float2 inTex1 : TEXCOORD1)    // tex 1 coordinate, precomputed face texture coordinate
{
  VS_OUTPUT Out = (VS_OUTPUT)0;

  // project position to unit cube
  Out.Pos = mul(inPos, WorldViewProj);
   
  // compute the flashlight texture coordinates based on camera space position
  float3 vcoords = mul(inPos, WorldView);
  float flength = max(length(vcoords), 150.0f);
  const float fscale = 1.7f;
  const float fadjust = 0.5f;
  Out.tex1.x = ((fscale * vcoords.x) / flength) + fadjust;
  Out.tex1.y = ((fscale * vcoords.y) / flength) + fadjust;
   
  // pass through precomputed texture coordinates
  Out.tex0 = inTex0;
  Out.tex2 = inTex1;

  return Out;
}
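
For reference, compiling an .fx file with the D3DX helper library at initialization time looks roughly like the following (a sketch; the project's actual loading code may differ):

#include <d3dx9.h>

bool LoadFlashlightEffect(IDirect3DDevice9 * pDevice, ID3DXEffect ** ppEffect)
{
  ID3DXBuffer * pErrors = NULL;
  HRESULT hr = D3DXCreateEffectFromFile(pDevice, TEXT("VertexShader1.fx"),
                                        NULL, NULL, 0, NULL,
                                        ppEffect, &pErrors);
  if (FAILED(hr))
  {
    if (pErrors)   // compiler output is useful for debugging shader errors
      OutputDebugStringA((const char *)pErrors->GetBufferPointer());
    return false;
  }

  if (pErrors)
    pErrors->Release();
  return true;
}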

Rendering Scene
The actual rendering of the scene is pretty straightforward.  The collected static geometry faces are rendered first and then the entity object faces are rendered.  The faces are rendered using the DrawPrimitive and SetTexture methods of the IRenderer object.  This is all done between the BeginScene and EndScene IRenderer method calls, which I believe for DirectX lets the driver collect the data before sending it to the hardware, thus minimizing the number of user/kernel transitions.

void HFBSPGraph::Render()
{
  assert(m_pRenderer);
  assert(m_pCamLeaf);

  // Set the view (camera) matrix based on current camera world position
  Matrix4f mtxView;
  _createViewMatrix(mtxView, m_mtxWorld);
  m_pRenderer->SetTransform(D3DTS_VIEW, mtxView);

  HRESULT hr = m_pRenderer->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xff303030, 1.0f, 0);
  hr = m_pRenderer->BeginScene();
  if (SUCCEEDED(hr))
  {
    // Render map and entity brush geometry.
    _setSGRenderStates();
    _renderStaticGeometry(mtxView);
    _renderEntities(mtxView);

    // Render studio models

    m_pRenderer->EndScene();
  }

  m_pRenderer->Present(0, 0, 0, 0);
}

Points of Interest

I tried to touch on the main areas of this Half Life level viewer application.  Of course there are many details that were left out and you will need to refer to the source code to see exactly how things are done.  I recommend that you walk through various sections of the code with the debugger to see how things work.  In addition, a basic grounding in BSP/WAD files, Direct3D rendering, and some linear algebra will help in getting to know and understand this code.  I learned a great deal while writing this application and enjoyed the process immensely.  I hope that others will find this code helpful in learning 3D programming and have as much fun as I did with it.

History

First draft created on January 21, 2009

Updated on February 8, 2009.  Added new code download (HLViewer_2.zip).  This version contains some code cleanup plus a large increase in code comments.  The new code comments should help readers better understand the data structures and class functions used in the application.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

