Posted 17 Oct 2011



Fountain OpenGL Application Walkthrough

27 Mar 2013 | CPOL | 31 min read
Create a basic fountain scene using OpenGL ES 1.1
Image 1


This walkthrough will cover the creation of an OpenGL application including the following topics:

  • Angle Calculation
  • Perspective
  • Billboarding
  • Depth Buffer
  • Multipass Rendering
  • Animation
  • Accelerometer
  • Touch events
  • Persisting user settings

The application allows the user to:

  • Move the camera anywhere in the scene
  • Rotate the scene or the camera
  • Show and hide objects in the scene
  • Display the FPS
  • Change the billboard method
  • Use the phone angle to set the view angle

The project is built using Eclipse and the Android SDK.


I created this app as an exercise in learning OpenGL. I couldn't find a fountain app for Android, so I figured that was a good place to start. About 10% of Android devices still only support OpenGL ES 1.1, so I wrote this application using that version.

This tutorial assumes you already have the Eclipse environment up and running. If you are new to Eclipse and Android development, I recommend going through the temperature converter tutorial which can be found here.

Using the Code

You can create the project by going through the steps listed below. If you prefer to load the entire project, download and unzip the project file, then open Eclipse and choose File->Import...->General->Existing Projects and choose the root folder of the FountainGL project.

Let's begin:

Start Eclipse (I'm using Eclipse Classic version 3.6.2).

Choose File -> New -> Project -> Android -> Android Project

Image 2

Click Next.

Fill in the fields as shown below. You can use any version of Android 2.1 or later.

Image 3

Click Finish.

Once the project is created, add this icon to the FountainGL\res\drawable-hdpi folder. You can drag it directly to the folder in Eclipse or you can use Windows Explorer. Overwrite the existing file in that folder.

Image 4 icon.png

If you are not using a high resolution device (you probably are), you can copy the icon to the drawable-mdpi and drawable-ldpi folders also.

Right Click on the FountainGL project and choose New->Class.

Image 5

Enter the Name, Package and Superclass as shown below. Also check the 2 checkboxes indicated (though we will overwrite these method stubs).

Image 6

Click Finish.

Coding the FountainGLRenderer Class

This class will contain the bulk of our application code.


Image 7

Remove all the existing code from this file.

Add the package name and imports needed for our application.

package droid.fgl;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import javax.microedition.khronos.opengles.GL11;

import android.app.Activity;
import android.content.Context;
import android.content.res.Configuration;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.opengl.GLSurfaceView;
import android.opengl.GLSurfaceView.Renderer;
import android.opengl.GLU;
import android.os.Handler;
import android.os.SystemClock;
import android.view.MotionEvent;
import android.widget.FrameLayout;
import android.widget.TextView;

Create the FountainGLRenderer class. Our class will implement Renderer so we can combine our render code and the OpenGL callbacks in a single class.

//extend GLSurfaceView and implement Renderer to keep all code in single class
public class FountainGLRenderer extends GLSurfaceView implements Renderer

Add the variables needed for the fountain and ball animation. elapsedRealtime() returns the number of milliseconds since system bootup.

private static float mAngCtr = 0; //for animation
long mLastTime = SystemClock.elapsedRealtime();

Add the variables needed for processing touch\drag events.

//for touch event - dragging
float mDragStartX = -1;
float mDragStartY = -1;
float mDownX = -1;
float mDownY = -1;

Add the variables used to store camera angle and position. We add .0001 to initial values because exact right (or 0) angles can lead to divide by 0 errors. We could check for 0 at each calculation, but this is easier.

//we add the .0001 to avoid divide by 0 errors
//starting camera angles
static float mCamXang = 0.0001f;
static float mCamYang = 180.0001f;
//starting camera position
static float mCamXpos = 0.0001f;
static float mCamYpos = 60.0001f;
static float mCamZpos = 180.0001f; 

Add the variables used to set the camera view direction.

//distance from camera to view target
float mViewRad = 100;
//target values will get set in constructor
static float mTargetY = 0;
static float mTargetX = 0;
static float mTargetZ = 0; 

Add the variables used to set the scene rotation angle.

//scene angles will get set in constructor
static float mSceneXAng = 0.0001f;
static float mSceneYAng = 0.0001f; 

Add the variables used to store screen information.

float mScrHeight = 0; //screen height
float mScrWidth  = 0; //screen width
float mScrRatio  = 0; //width/height
float mClipStart = 1; //start of clip region 

Add the constants used for angle conversion.

final double mDeg2Rad = Math.PI / 180.0; //Degrees To Radians
final double mRad2Deg = 180.0 / Math.PI; //Radians To Degrees 

Add the mResetMatrix flag. This is set whenever the camera moves forward or back so we can update the clip region.

boolean mResetMatrix = false; //set to true when camera moves

Add the variables used for FPS (Frames Per Second) calculation and display. Note the TextView can also be used to display debug information.

int[] mFrameTime = new int[20]; 		//frames used for avg fps
int mFramePos = 0; 			//current fps frame position
long mStartTime = SystemClock.elapsedRealtime(); //for fps
int mFPSDispCtr = 0; 			//fps display interval
float mFPS = 0; 				//actual fps value

TextView mTxtMsg = null; 			//for displaying FPS
final FountainGLRenderer mTagStore = this; 	//for SetStatusMsg
Handler mThreadHandler = new Handler(); 	//used in SetStatusMsg

Add the object index constants and buffer length array. We will store the vertex array in the GPU memory which requires an index and length when reading. We can't use 0 as an index because it is reserved by OpenGL.

//constants for scene objects in GPU buffer
final int mFLOOR = 1;
final int mBALL  = 2;
final int mPOOL  = 3;
final int mWALL  = 4;
final int mDROP  = 5;
final int mSPLASH = 6;

//need to store length of each vertex buffer
int[] mBufferLen = new int[] {0,0,0,0,0,0,0}; //0/Floor/Ball/Pool/Wall/Drop/Splash

Add the parameters used for object creation. These are optimized for my Huawei Ideos. mBallHSliceCnt must be even because we will render the ball in 2 halves.

//ball parameters
int mBallRad = 10; //radius
int mBallVSliceCnt = 32; //slices vertically - latitude line count
int mBallHSliceCnt = 32;  //slices horizontally - longitude line count - must be even

//fountain parameters
int mStreamCnt = 10; //should divide evenly into 360
int mDropsPerStream = 30; //should divide evenly into 180
int mRepeatLen = 180/mDropsPerStream; //distance loop for drop
float mArcRad = 30; //stream arc radius
//for storing drop positions //3 floats per vertex [x/y/z]
float[][] dropCoords = new float[mStreamCnt*mDropsPerStream][3];

//pool parameters
int mPoolSliceCnt = mStreamCnt; //side count
float mPoolRad = 57f; //radius
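The divisibility notes in the comments matter: the loops step by 360/mStreamCnt and 180/mDropsPerStream, so uneven divisions would leave gaps in the fountain ring. A quick plain-Java check (the class and method names here are mine, not from the project) for validating alternative parameter choices:

```java
class FountainParams {
    //true if the loop step sizes divide their ranges evenly
    //streamCnt steps around 360 degrees; dropsPerStream steps along a 180 degree arc
    //ballHSlices must be even so the ball can be drawn in 2 alternating halves
    static boolean valid(int streamCnt, int dropsPerStream, int ballHSlices) {
        return 360 % streamCnt == 0
            && 180 % dropsPerStream == 0
            && ballHSlices % 2 == 0;
    }
}
```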

Add the variables used to store the accelerometer values. The accelerometer can be used to set the camera view angle. mOrientation stores the current phone orientation.

//accelerometer value set by activity
public float AccelZ = 0;
public float AccelY = 0;
int mOrientation = 0; //portrait\landscape

Add the variables used to store user options.

//options menu defaults
public boolean ShowBall = true;
public boolean ShowFloor = true;
public boolean ShowFountain = true;
public boolean ShowPool = true;
public boolean RotateScene = true;
public boolean UseTiltAngle = false;
public boolean MultiBillboard = true;
public boolean ShowFPS = true;
public boolean Paused = false;

Add the constructor for FountainGLRenderer. The activity is passed in so we can alter the layout and add a TextView for displaying the fps. setRenderer() tells OpenGL that this class will do the rendering and initializes the surface. We also create the listener for the accelerometer so the view angle can be adjusted based on phone tilt. Note that the accelerometer returns the same X\Y values regardless of orientation, so we need to choose which axis to read based on the current orientation.

FountainGLRenderer(Activity pActivity)
{
	super(pActivity);

	//use FrameLayout so we can put a TextView on top of the openGL screen
	FrameLayout layout = new FrameLayout(pActivity);

	//create view for text message (fps)
	mTxtMsg = new TextView(layout.getContext());
	mTxtMsg.setBackgroundColor(0x00FFFFFF); //transparent
	mTxtMsg.setTextColor(0xFF777777); //gray

	layout.addView(this); //add openGL surface
	layout.addView(mTxtMsg); //add text view
	pActivity.setContentView(layout);
	setRenderer(this); //initialize surface view

	//create listener for accelerometer sensor
	SensorManager sensorMgr =
		(SensorManager)pActivity.getSystemService(Context.SENSOR_SERVICE);
	sensorMgr.registerListener(
		new SensorEventListener() {
			public void onSensorChanged(SensorEvent event) {
				//accelerometer does not change orientation
				//so need to switch sensors
				if (mOrientation == Configuration.ORIENTATION_PORTRAIT)
					AccelY = event.values[1]; //use Y sensor
				else
					AccelY = event.values[0]; //use X sensor
				AccelZ = event.values[2]; //Z
			}
			public void onAccuracyChanged
				(Sensor sensor, int accuracy) {} //ignore this event
		},
		sensorMgr.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
		SensorManager.SENSOR_DELAY_GAME);
}

Add the onSurfaceCreated callback. This is called only once when the surface is first created. We set the background color and create the vertex arrays for our objects.

//called once
public void onSurfaceCreated(GL10 gl1, EGLConfig pConfig)
{
	GL11 gl = (GL11)gl1; //we need 1.1 functionality
	//set background frame color
	gl.glClearColor(0f, 0f, 0f, 1.0f); //black
	//generate vertex arrays for scene objects
	BuildFloor(gl);
	BuildBall(gl);
	BuildPool(gl);
	BuildWall(gl);
	BuildDrop(gl);
	BuildSplash(gl);
}

Add the BuildFloor method. This generates the vertices for the triangles that make up the floor. The floor is a 7x7 grid merged with a 6x6 grid. To create a checker pattern, we only draw alternate squares. The other squares are empty. After creating the vertex array, it is stored in GPU memory.

Image 8

void BuildFloor(GL11 gl)
{
	//7*7+6*6 = 85 quads = 170 triangles = 510 vertices = 1530 floats[x/y/z]
	int sqrSize = 20;
	float vtx[] = new float[1530];
	int vtxCtr = 0;
	//we use the offset to produce the checkered pattern
	for (int x=-130, offset=0; x<130; x+=sqrSize, offset=sqrSize-offset)
	{
		for (int y=-130+offset; y<130; y+=(sqrSize*2))
		{
			//each square is 2 triangles = 6 vertices = 18 floats [x/y/z]
			vtx[vtxCtr]    = x;
			vtx[vtxCtr+ 1] =-2; //floor is 2 points below 0
			vtx[vtxCtr+ 2] = y;
			vtx[vtxCtr+ 3] = x+sqrSize;
			vtx[vtxCtr+ 4] =-2;
			vtx[vtxCtr+ 5] = y;
			vtx[vtxCtr+ 6] = x;
			vtx[vtxCtr+ 7] =-2;
			vtx[vtxCtr+ 8] = y+sqrSize;
			vtx[vtxCtr+ 9] = x+sqrSize;
			vtx[vtxCtr+10] =-2;
			vtx[vtxCtr+11] = y;
			vtx[vtxCtr+12] = x;
			vtx[vtxCtr+13] =-2;
			vtx[vtxCtr+14] = y+sqrSize;
			vtx[vtxCtr+15] = x+sqrSize;
			vtx[vtxCtr+16] =-2;
			vtx[vtxCtr+17] = y+sqrSize;
			vtxCtr += 18; //next square
		}
	}

	StoreVertexData(gl, vtx, mFLOOR); //store in GPU buffer
}
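The comment's arithmetic (85 quads, 1530 floats) can be verified off-device by running the same loop structure in plain Java; this standalone sketch just counts what the loops would emit:

```java
class FloorCount {
    //count floats produced by the checkered floor loops (mirrors BuildFloor's loops)
    static int countFloats() {
        int sqrSize = 20, ctr = 0;
        //13 columns alternate 7 squares (offset 0) and 6 squares (offset 20)
        for (int x = -130, offset = 0; x < 130; x += sqrSize, offset = sqrSize - offset)
            for (int y = -130 + offset; y < 130; y += (sqrSize * 2))
                ctr += 18; //6 vertices * 3 floats per square
        return ctr;
    }
}
```

7×7 + 6×6 = 85 squares × 18 floats = 1530, matching the array size in BuildFloor.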

Add the BuildBall method. The ball is created as a grid (longitude\latitude). The top portion of the method calculates all the vertices in the ball. The bottom portion arranges the vertices to generate triangles (each quad is 2 triangles). We only generate vertices for alternating quads. When we draw the ball, we will render the same vertices twice, rotating the ball and changing the color in between renders. Note that the top and bottom rows are created as quads (4 corners), even though they are rendered as triangles (3 corners). This is because every quad in the top row has the same top vertices. OpenGL ignores triangles with no area so performance is not an issue.

Image 9

void BuildBall(GL11 gl)
{
	//need to add 1 to include last vertex
	float x[][] = new float[mBallVSliceCnt+1][mBallHSliceCnt+1];
	float y[][] = new float[mBallVSliceCnt+1][mBallHSliceCnt+1];
	float z[][] = new float[mBallVSliceCnt+1][mBallHSliceCnt+1];

	//create grid of vertices as if sphere was laid flat
	//start at top, go down by slice (180 degrees top to bottom)
	for (int vCtr = 0; vCtr <= mBallVSliceCnt; vCtr++)
	{
		double vAng = 180.0 / mBallVSliceCnt * vCtr;
		float sliceRad = (float) (mBallRad * Math.sin(vAng * mDeg2Rad));
		float sliceY = (float) (mBallRad * Math.cos(vAng * mDeg2Rad));
		//go around entire sphere, 360 degrees
		for (int hCtr = 0; hCtr <= mBallHSliceCnt; hCtr++)
		{
			double hAng = 360.0 / mBallHSliceCnt * hCtr;
			x[vCtr][hCtr] = (float) (sliceRad * Math.sin(hAng * mDeg2Rad));
			y[vCtr][hCtr] = sliceY;
			z[vCtr][hCtr] = (float) (sliceRad * Math.cos(hAng * mDeg2Rad));
		}
	}
	int hCnt = x[0].length;
	int vCnt = x.length;

	//calculate triangle vertices for each quad
	//colors are drawn separately, only create vertices for one color
	//32*32/2 = 512 quads = 1024 triangles = 3072 vertices = 9216 floats [x/y/z]
	float vtx[] = new float[mBallVSliceCnt*mBallHSliceCnt/2*2*3*3];
	int vtxCtr = 0;
	for (int vCtr = 1; vCtr < vCnt; vCtr++)
	{
		//use %2 to create checker pattern, hCtr+=2 to skip quads
		for (int hCtr = 1+vCtr%2; hCtr < hCnt; hCtr += 2)
		{
			vtx[vtxCtr]    = x[vCtr-1][hCtr-1];
			vtx[vtxCtr+ 1] = y[vCtr-1][hCtr-1];
			vtx[vtxCtr+ 2] = z[vCtr-1][hCtr-1];
			vtx[vtxCtr+ 3] = x[vCtr][hCtr-1];
			vtx[vtxCtr+ 4] = y[vCtr][hCtr-1];
			vtx[vtxCtr+ 5] = z[vCtr][hCtr-1];
			vtx[vtxCtr+ 6] = x[vCtr-1][hCtr];
			vtx[vtxCtr+ 7] = y[vCtr-1][hCtr];
			vtx[vtxCtr+ 8] = z[vCtr-1][hCtr];
			vtx[vtxCtr+ 9] = x[vCtr][hCtr-1];
			vtx[vtxCtr+10] = y[vCtr][hCtr-1];
			vtx[vtxCtr+11] = z[vCtr][hCtr-1];
			vtx[vtxCtr+12] = x[vCtr-1][hCtr];
			vtx[vtxCtr+13] = y[vCtr-1][hCtr];
			vtx[vtxCtr+14] = z[vCtr-1][hCtr];
			vtx[vtxCtr+15] = x[vCtr][hCtr];
			vtx[vtxCtr+16] = y[vCtr][hCtr];
			vtx[vtxCtr+17] = z[vCtr][hCtr];
			vtxCtr += 18; //next quad
		}
	}

	StoreVertexData(gl, vtx, mBALL); //store in GPU buffer
}
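To sanity-check the latitude/longitude math, the same slice formulas can be run in plain Java and every grid point confirmed to sit on the sphere's surface. This is a standalone sketch (not project code), using the same sin/cos slice construction:

```java
class SphereGrid {
    //build the latitude/longitude grid and return the largest deviation
    //of any grid point from the requested sphere radius
    static double maxRadiusError(int vSlices, int hSlices, double rad) {
        double deg2Rad = Math.PI / 180.0, maxErr = 0;
        for (int v = 0; v <= vSlices; v++) {
            double vAng = 180.0 / vSlices * v;       //top to bottom
            double sliceRad = rad * Math.sin(vAng * deg2Rad); //radius of this ring
            double y = rad * Math.cos(vAng * deg2Rad);        //height of this ring
            for (int h = 0; h <= hSlices; h++) {
                double hAng = 360.0 / hSlices * h;   //around the sphere
                double x = sliceRad * Math.sin(hAng * deg2Rad);
                double z = sliceRad * Math.cos(hAng * deg2Rad);
                maxErr = Math.max(maxErr,
                    Math.abs(Math.sqrt(x*x + y*y + z*z) - rad));
            }
        }
        return maxErr;
    }
}
```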

Add the BuildPool method. This creates the water as a triangle fan where every triangle has a common central vertex.

void BuildPool(GL11 gl)
{
	//center+10+end vertices = 12 vertices = 36 floats[x/y/z]
	float vtx[] = new float[(mPoolSliceCnt+2)*3];
	int vtxCtr = 0;
	//center vertex
	vtx[vtxCtr]   = 0;
	vtx[vtxCtr+1] = 4f; //6 points above floor (floor is at -2)
	vtx[vtxCtr+2] = 0;
	vtxCtr += 3;
	for (float fAngY = 0; fAngY <= 360; fAngY += 360/mPoolSliceCnt)
	{
		//vertices that create triangle fan, first vertex is repeated (0=360)
		vtx[vtxCtr]   = mPoolRad*(float)Math.sin(fAngY*mDeg2Rad); //X
		vtx[vtxCtr+1] = 4f; //Y
		vtx[vtxCtr+2] = mPoolRad*(float)Math.cos(fAngY*mDeg2Rad); //Z
		vtxCtr += 3;
	}

	StoreVertexData(gl, vtx, mPOOL); //store in GPU buffer
}

Add the BuildWall method. This creates the wall of the pool as a triangle strip where every triangle shares a side with the triangle next to it. Note that the radius is set 2 points larger than the pool in order to prevent Z-fighting (triangle overlap). We will discuss Z-fighting later in this walkthrough.

void BuildWall(GL11 gl)
{
	int wallSliceCnt = mPoolSliceCnt; //divides nicely into 360
	float wallRad = mPoolRad+2; //2 points larger than water to prevent Z-fight
	//wall is a triangle strip
	//defines start line then each square has 2 vertices
	//startline+10 squares = 22 vertices = 66 floats[x/y/z]
	float vtx[] = new float[(wallSliceCnt+1)*2*3];
	int vtxCtr = 0;
	//start line (left side of first square)
	//bottom vertex
	vtx[vtxCtr]   = 0;
	vtx[vtxCtr+1] = -1; //bottom of wall is below 0
	vtx[vtxCtr+2] = wallRad;
	vtxCtr += 3;
	//top vertex
	vtx[vtxCtr]   = 0;
	vtx[vtxCtr+1] = 9; //wall is 10 units high
	vtx[vtxCtr+2] = wallRad;
	vtxCtr += 3;
	//rotate around fountain center
	for (float ftnAngY = 360/wallSliceCnt;
		ftnAngY <= 360; ftnAngY += 360/wallSliceCnt)
	{
		//right side of each square (left side is from previous square)
		//bottom vertex
		vtx[vtxCtr]   = wallRad*(float)Math.sin(ftnAngY*mDeg2Rad); //X
		vtx[vtxCtr+1] = -1; //Y
		vtx[vtxCtr+2] = wallRad*(float)Math.cos(ftnAngY*mDeg2Rad); //Z
		vtxCtr += 3;
		//top vertex
		vtx[vtxCtr]   = wallRad*(float)Math.sin(ftnAngY*mDeg2Rad); //X
		vtx[vtxCtr+1] = 9; //Y
		vtx[vtxCtr+2] = wallRad*(float)Math.cos(ftnAngY*mDeg2Rad); //Z
		vtxCtr += 3;
	}

	StoreVertexData(gl, vtx, mWALL); //store in GPU buffer
}

Add the BuildDrop method. This creates the vertices for a single drop in the fountain. Every drop has the same coordinates. When we draw the fountain, we use glTranslate and glRotate to adjust the position\angle of each drop.

void BuildDrop(GL11 gl)
{
	//every drop has the same coordinates
	//we glRotate and glTranslate when drawing
	float vtx[] = {
		// X,  Y, Z
		  0f, 0f, 0,
		 -1f,-1f, 0,
		  1f,-1f, 0
	};

	StoreVertexData(gl, vtx, mDROP); //store in GPU buffer
}

Add the BuildSplash method. This creates the vertices for all the splash triangles. A single splash is just a ring of triangles around the drop where it hits the water. The splash triangles never move but are scaled up through the pool when drawn. We'll discuss this later in the walkthrough.

void BuildSplash(GL11 gl)
{
	//splashes never move
	//all splash triangles stored together
	int triCnt = 6; //must divide into 180
	int vtxCnt = mStreamCnt*9*triCnt;
	float[] vtx = new float[vtxCnt];
	int vtxCtr = 0;
	//for each stream
	for (float ftnAngY = 0; ftnAngY < 360; ftnAngY += 360/mStreamCnt)
	{
		//get coordinates of fountain drop (end of stream)
		float dropX = mArcRad*1.5f*(float)Math.sin(ftnAngY*mDeg2Rad);
		float dropZ = mArcRad*1.5f*(float)Math.cos(ftnAngY*mDeg2Rad);
		float mid = 0; //toggle for edge\middle vertex
		int triCtr = 0;
		//get angle for triangle edges and centers
		for (float sAngY = 0; sAngY < 360; sAngY += 360/(2*triCnt))
		{
			float realAngY = sAngY+ftnAngY; //shift angle to match stream angle
			//middle vertices have a larger radius than edge vertices
			//use mid to toggle radius length
			float sX = (float)Math.sin(realAngY*mDeg2Rad)*(1+2*mid)+dropX;
			float sZ = (float)Math.cos(realAngY*mDeg2Rad)*(1+2*mid)+dropZ;
			vtx[vtxCtr]   = sX;
			vtx[vtxCtr+1] = 0+mid*3; //Y, middle vertex is higher than edges
			vtx[vtxCtr+2] = sZ;
			if (mid%2==0) //edge vertex
			{
				if (triCtr == 0) //first triangle for this drop
				{  //connect to last triangle in loop
					vtx[vtxCtr+triCnt*9-3] = sX;
					vtx[vtxCtr+triCnt*9-2] = 0; //Y
					vtx[vtxCtr+triCnt*9-1] = sZ;
				}
				else //next triangle shares a corner
				{
					vtx[vtxCtr+3] = sX;
					vtx[vtxCtr+4] = 0; //Y
					vtx[vtxCtr+5] = sZ;
					vtxCtr+=3; //we set 2 corners, so skip ahead
				}
				triCtr++; //keep track of which triangle we're creating
				if (triCtr == triCnt) vtxCtr+=3; //for loop skips last vtx
			}
			vtxCtr+=3; //next corner
			mid = 1-mid; //toggle
		}
	}
	StoreVertexData(gl, vtx, mSPLASH); //store in GPU buffer
}

Add the StoreVertexData method. This stores the vertex data for each of the objects in the GPU memory. Using the GPU memory gives us a huge performance increase because we do not need to pass the vertex data to the GPU each time we render the scene. The vertex data is stored in memory using an object index. We will use this same index when rendering the objects. We also store the buffer length which will be needed when we retrieve the data. GL_STATIC_DRAW indicates that the vertices will not be changed.

void StoreVertexData(GL11 gl, float[] pVertices, int pObjectNum)
{
	FloatBuffer buffer = ByteBuffer.allocateDirect
			(pVertices.length * 4) //float is 4 bytes
	 .order(ByteOrder.nativeOrder()) // use the device hardware's native byte order
	 .asFloatBuffer()  // create a floating point buffer from the ByteBuffer
	 .put(pVertices);  // add the coordinates to the FloatBuffer

	gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, pObjectNum); //bind as current object
	buffer.position(0); //reset buffer position to buffer start
	//allocate memory and write buffer data
	gl.glBufferData(GL11.GL_ARRAY_BUFFER,
		buffer.capacity()*4, buffer, GL11.GL_STATIC_DRAW);
	gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0); //unbind from buffer
	mBufferLen[pObjectNum] = buffer.capacity()/3; //store for retrieval
}
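The buffer chain at the top of StoreVertexData is plain java.nio and can be exercised off-device. A minimal sketch of the same pattern (direct allocation, native byte order, rewind before reading):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class BufferDemo {
    //wrap a float array in a direct, native-order FloatBuffer,
    //the same way StoreVertexData prepares vertex data for OpenGL
    static FloatBuffer wrap(float[] vertices) {
        FloatBuffer buffer = ByteBuffer.allocateDirect(vertices.length * 4) //float is 4 bytes
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(vertices);
        buffer.position(0); //rewind so readers start at the first float
        return buffer;
    }
}
```

Forgetting the `position(0)` rewind is a classic bug here: after `put()` the position sits at the end of the buffer, and glBufferData would read zero usable floats.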

Add the onSurfaceChanged callback. This is called after onSurfaceCreated and each time the phone orientation changes. We initialize the viewport and projection matrix. glLoadIdentity() clears any transforms or rotations we have set. We calculate the distance between the camera and the scene center so we can set the clip region. glFrustumf (discussed later) sets the parameters for the projection view. We then enable the depth test so foreground objects are drawn over background objects. We switch to the ModelView matrix so we can position objects using standard Cartesian coordinates. Lastly, we set mOrientation to the current phone orientation.

//this is called when the user changes phone orientation (portrait\landscape)
public void onSurfaceChanged(GL10 gl, int pWidth, int pHeight)
{
	gl.glViewport(0, 0, pWidth, pHeight); //the viewport is the screen
	// make adjustments for screen ratio, default would be stretched square
	mScrHeight = pHeight;
	mScrWidth = pWidth;
	mScrRatio = mScrWidth/mScrHeight;

	//set to projection mode to set up Frustum
	gl.glMatrixMode(GL11.GL_PROJECTION);	// set matrix to projection mode
	gl.glLoadIdentity();	// reset the matrix to its default state
	//calculate the clip region to minimize the depth buffer range (more precise)
	float camDist = (float)Math.sqrt(mCamXpos*mCamXpos+mCamYpos*mCamYpos+
			mCamZpos*mCamZpos);
	mClipStart = Math.max(2, camDist-185); 	//max scene radius is 185 points 
						//at corners
	//set up the perspective pyramid and clip points
	gl.glFrustumf(-mScrRatio, mScrRatio, -1, 1, mClipStart,
		mClipStart+185+Math.min(185, camDist));

	//foreground objects are bigger and hide background objects
	gl.glEnable(GL11.GL_DEPTH_TEST);

	//set to ModelView mode to set up objects
	gl.glMatrixMode(GL11.GL_MODELVIEW);
	mOrientation = getResources().getConfiguration().orientation;
}
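The near/far choice can be tested in isolation. This standalone sketch (my own helper, assuming the 185-point scene radius used above) returns the near and far clip distances for a camera position, so you can confirm the range always brackets the scene:

```java
class ClipCalc {
    //near clip: just in front of the scene; far clip: just behind it
    //returns {near, far} for a camera at (camX, camY, camZ) looking at 0,0,0
    static float[] clipRange(float camX, float camY, float camZ) {
        float camDist = (float)Math.sqrt(camX*camX + camY*camY + camZ*camZ);
        float near = Math.max(2, camDist - 185); //max scene radius is 185 points
        float far = near + 185 + Math.min(185, camDist);
        return new float[]{near, far};
    }
}
```

For the starting camera position (0, 60, 180), camDist is about 189.7, giving near ≈ 4.7 and far ≈ 374.7; a tight range like this keeps the depth buffer precise.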

Begin the onDrawFrame callback. We render the scene here. This is called continuously by the OpenGL system. OpenGL assumes there is constant animation requiring constant screen updates. Continuous rendering can be turned off by calling setRenderMode(RENDERMODE_WHEN_DIRTY) then calling requestRender() to render the scene.

We cast the gl1 parameter to OpenGL 1.1 so we can get the additional 1.1 functionality. This cast will fail if 1.1 is not supported by the device. According to the Android website, every Android device now supports OpenGL ES 1.1.

//this is called continuously
public void onDrawFrame(GL10 gl1)
	GL11 gl = (GL11)gl1; //we need 1.1 functionality

Add the flag check in case the user moved the camera. If the camera distance changes, we need to update the clipping region so it is aligned with the scene. onSurfaceChanged does the actual update.

if (mResetMatrix) //camera distance changed
{
	//recalc projection matrix and clip region
	onSurfaceChanged(gl, (int)mScrWidth, (int)mScrHeight);
	mResetMatrix = false;
}

Add code to clear the color and depth buffers and reset the matrix. The color and depth buffers are recalculated for each frame.

//clear color and depth buffer
gl.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();   //reset the matrix to its default state

Add code to calculate the X angle based on the phone tilt. We will discuss angle calculations later in this walkthrough. AccelY and AccelZ are set in the sensor listener created in the constructor. Note that we don't let the angle pass 90 because the scene would be upside down.

if (UseTiltAngle) //use phone tilt to determine X axis angle
{
	if (RotateScene) //rotate camera around 0,0,0
	{
		//calculate new X angle
		float HypLen = (float)Math.sqrt
			(mCamXpos*mCamXpos+mCamZpos*mCamZpos); //across floor
		mSceneXAng = 90-(float)Math.atan2(AccelY,AccelZ)*(float)mRad2Deg;
		// stop at 90 degrees or scene will go upside down
		if (mSceneXAng > 89.9) mSceneXAng = 89.9f;
		if (mSceneXAng < -89.9) mSceneXAng = -89.9f;

		float HypZLen = (float)Math.sqrt(mCamXpos*mCamXpos+
				mCamYpos*mCamYpos+mCamZpos*mCamZpos); //0,0,0 to camera
		//HypZLen stays same with new angle
		//move camera to match angle
		mCamYpos = HypZLen*(float)Math.sin(mSceneXAng*mDeg2Rad);
		float HypLenNew = HypZLen*
				(float)Math.cos(mSceneXAng*mDeg2Rad); //across floor
		mCamZpos *= HypLenNew/HypLen;
		mCamXpos *= HypLenNew/HypLen;
	}
	else //rotate camera
	{
		mCamXang = (float)Math.atan2(AccelY,AccelZ)*(float)mRad2Deg - 90;
		//don't let scene go upside down
		if (mCamXang > 89.9) mCamXang = 89.9f;
		if (mCamXang < -89.9) mCamXang = -89.9f;
		ChangeCameraAngle(0, 0); //set target position
	}
}

Add the gluLookAt call. This tells OpenGL where the camera is and its view direction. The actual values of the target variables don't matter; only the direction from the camera does (if the camera is at 0,0,0, then target 1,2,3 gives the same result as target 2,4,6). The last three parameters set the up vector. Positive Y is up in our scene, so we set Y=100 (any positive number works).

//gluLookAt tells openGL the camera position and view direction (target)
//target is 0,0,0 for scene rotate
//Y is up vector, so we set it to 100 (can be any positive number)
GLU.gluLookAt(gl, mCamXpos, mCamYpos, mCamZpos, mTargetX, mTargetY, 
		mTargetZ, 0f, 100.0f, 0.0f);
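The claim that only direction matters can be checked by normalizing the camera-to-target vector: scaling the target just moves it further along the same ray. A standalone sketch (not project code):

```java
class ViewDir {
    //normalized view direction from camera (c) to target (t)
    static double[] dir(double cx, double cy, double cz,
                        double tx, double ty, double tz) {
        double dx = tx - cx, dy = ty - cy, dz = tz - cz;
        double len = Math.sqrt(dx*dx + dy*dy + dz*dz);
        return new double[]{dx/len, dy/len, dz/len};
    }
}
```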

Add the code to calculate the elapsed time since the last frame was rendered. mAngCtr is advanced based on the time change. We do this because some frames take longer than others and we want to maintain a smooth animation. A larger time gap results in a larger angle jump, so the animation catches up. If the animation is paused, we skip the angle change. Note that onDrawFrame is still called continuously even when paused.

//use clock to adjust animation angle for smoother motion
//if frame takes longer, angle is greater and we catch up
long now = SystemClock.elapsedRealtime();
long diff = now - mLastTime;
mLastTime = now;

//if paused, animation angle does not change
if (!Paused)
{
	mAngCtr += diff/100.0;
	if (mAngCtr > 360) mAngCtr -= 360;
}
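The time-based update can be exercised off-device. This plain-Java sketch (names are mine) shows the frame-rate independence the paragraph describes: two 50 ms frames advance the angle exactly as far as one 100 ms frame:

```java
class AnimClock {
    float ang = 0; //animation angle in degrees

    //advance the angle by elapsed milliseconds (10 degrees per second),
    //wrapping at 360 as the renderer does
    void tick(long diffMs) {
        ang += diffMs / 100.0;
        if (ang > 360) ang -= 360;
    }
}
```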

Add the call to DrawSceneObjects. This is where all the objects in our scene get drawn to the screen.

DrawSceneObjects(gl); //draw all objects in the scene

Finish the onDrawFrame method by adding the code to calculate and display the FPS (Frames Per Second). The mFrameTime array stores the frame times for the last 20 frames. To get the average frame time, we just get the time between this frame and 20 frames ago and divide by 20. The FPS display is updated every 10 frames. We will discuss this calculation in more detail later.

if (ShowFPS) //average fps across last 20 frames
{
	//elapsedRealtime() returns milliseconds since phone boot
	int thisFrameTime = (int)(SystemClock.elapsedRealtime()-mStartTime);
	//mFrameTime array stores times for last 20 frames
	mFPS = (mFrameTime.length)*1000f/(thisFrameTime-mFrameTime[mFramePos]);
	mFrameTime[mFramePos] = thisFrameTime;
	if (mFramePos < mFrameTime.length-1) //move pointer
		mFramePos++;
	else //end of array, jump to start
		mFramePos = 0;
	if (++mFPSDispCtr == 10) //update fps display every 10 frames
	{
		SetStatusMsg(Math.round(mFPS*100)/100f+" fps"); //2 decimal places
		mFPSDispCtr = 0;
	}
}
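The rolling-average idea can be isolated into a small plain-Java class (names are mine) and fed synthetic frame times. Frames arriving every 50 ms should converge to 20 fps once the 20-slot window has filled:

```java
class FpsMeter {
    int[] frameTime = new int[20]; //timestamps of the last 20 frames
    int pos = 0;                   //circular position
    float fps = 0;                 //latest average

    //record a frame timestamp (ms since start) and update the 20-frame average:
    //frameTime[pos] currently holds the timestamp from 20 frames ago,
    //so (nowMs - frameTime[pos]) is the window duration
    void frame(int nowMs) {
        fps = frameTime.length * 1000f / (nowMs - frameTime[pos]);
        frameTime[pos] = nowMs;
        pos = (pos + 1) % frameTime.length;
    }
}
```

As in the article's version, the first 20 readings are garbage (the window compares against the zero-initialized array), which is why the display only matters once rendering is under way.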

Add the DrawSceneObjects method. All the scene objects are drawn from here. For each object (except the fountain), we set the color and then call DrawObject to render the object's vertices. For the ball, we use mAngCtr to set the current angle of rotation. We only stored vertices for half of the ball, so we rotate the ball by one slice and re-render the same vertices in a different color. The splash triangles were created at Y=0. We want to scale them at Y=0 and then move the scaled splash to the pool surface. The operations look out of order here (translate then scale), but OpenGL applies transformations to an object in the reverse of the order they are called. mRepeatLen is used so the splash cycles in time with the drop movement. No billboarding is needed for the splash triangles since they surround the drop. The splashes are only shown if both the fountain and pool are shown.

void DrawSceneObjects(GL11 gl)
{
	if (ShowBall)
	{
		//draw first color
		gl.glColor4f(.5f, .5f, .5f, 1); //gray
		gl.glRotatef(mAngCtr, 0.0f, 1.0f, 0f);
		DrawObject(gl, GL11.GL_TRIANGLES, mBALL);
		//rotate by one slice and draw second color
		gl.glColor4f(0.7f, 1f, 0.7f, 1f); //light green
		gl.glRotatef(mAngCtr+360f/mBallHSliceCnt, 0.0f, 1.0f, 0f);
		DrawObject(gl, GL11.GL_TRIANGLES, mBALL);
	}
	if (ShowFountain)
		DrawFountain(gl); //draw the drops (covered later in the walkthrough)
	if (ShowPool) //pool and wall
	{
		gl.glColor4f(0.2f, 0.0f, 0.0f, 1f); //dark red
		DrawObject(gl, GL11.GL_TRIANGLE_STRIP, mWALL);
		gl.glColor4f(0.2f, 0.0f, 0.6f, 1f); //blue\red
		DrawObject(gl, GL11.GL_TRIANGLE_FAN, mPOOL);
	}
	if (ShowFountain && ShowPool) //splashes if both
	{
		gl.glPushMatrix(); //scale only the splash triangles
		gl.glColor4f(.9f, 0.9f, 0.9f, 1f); //off-white
		gl.glTranslatef(0, 3, 0); //move splash to pool surface
		//the splash scales up then down (3 ⇒ 0 ⇒ 3)
		//use abs value of (-3 ⇒ 0 ⇒ 3), scale Y only
		gl.glScalef(1f, Math.abs((
			mRepeatLen/2f-mAngCtr%(mRepeatLen))*0.4f), 1f);
		DrawObject(gl, GL11.GL_TRIANGLES, mSPLASH);
		gl.glPopMatrix(); //restore matrix after splash scaling
	}
	if (ShowFloor)
	{
		gl.glColor4f(0.0f, 0.0f, 0.4f, 1f); //dark blue
		DrawObject(gl, GL11.GL_TRIANGLES, mFLOOR);
	}
}

Add the DrawObject method. This renders the vertices in the GPU buffer for the specified object index. The shape type passed in (GL_TRIANGLES/GL_TRIANGLE_STRIP/GL_TRIANGLE_FAN) tells OpenGL how the vertices are organized in memory.

void DrawObject(GL11 gl, int pShapeType, int pObjNum)
{
	//activate vertex array type
	gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
	//get vertices for this object id
	gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, pObjNum);
	//each vertex is made up of 3 floats [x\y\z]
	gl.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);
	//draw triangles
	gl.glDrawArrays(pShapeType, 0, mBufferLen[pObjNum]);
	//unbind from memory
	gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
}

Add the SetStatusMsg method. This updates the TextView with new text. mTagStore is used to pass the new text into the anonymous Runnable; if we used a non-final local variable, the compiler would complain. Posting the Runnable to a Handler lets the text update run on the UI thread instead of blocking the render thread.

public void SetStatusMsg(String pMsg)
{
	//mTagStore = this; we just need an object to pass text to the anonymous class
	mTagStore.setTag(pMsg);
	mThreadHandler.post(new Runnable() {
		public void run() { mTxtMsg.setText(mTagStore.getTag().toString()); }
	});
}

Add the SetShowFPS method. This sets the ShowFPS flag and clears the TextView (in case ShowFPS is false).

//if user hides FPS, then clear text
public void SetShowFPS(boolean pShowFPS)
{
	ShowFPS = pShowFPS;
	SetStatusMsg(""); //clear message
}

Add the SwapCenter method. This alternates the rotation center between camera and scene. If the rotation is set to scene, the camera always looks at the scene center (0,0,0) and the camera moves around the center (we don't actually rotate the scene). If the rotation is set to camera, the camera turns and we move the view target.

//rotate scene or rotate camera
public void SwapCenter()
{
	RotateScene = !RotateScene;
	if (RotateScene) //rotate around fountain
	{
		//calculate scene angles based on camera position
		//hypotenuse using 2 dimensions
		float hypLen = (float)Math.sqrt(mCamXpos*mCamXpos+
				mCamZpos*mCamZpos); //across floor
		mSceneYAng = (float)Math.atan2(mCamXpos,mCamZpos)*(float)mRad2Deg;
		//3rd dimension
		mSceneXAng = (float)Math.atan2(mCamYpos,hypLen)*(float)mRad2Deg;

		mTargetX = mTargetY = mTargetZ = 0; //camera always looks at 0,0,0
	}
	else //rotate camera
	{
		//camera angle is reverse of scene angle
		mCamYang = mSceneYAng+180;
		mCamXang = -mSceneXAng;
		ChangeCameraAngle(0,0); //set camera view target
	}
}
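The atan2 calls recover an angle from a position. Placing a camera at a known Y angle and reading it back confirms the convention used throughout the renderer (X is the sine term, Z the cosine term). A standalone sketch:

```java
class AngleRecover {
    static final double RAD2DEG = 180.0 / Math.PI;

    //recover the Y rotation angle (degrees) from a camera position,
    //mirroring SwapCenter's atan2(mCamXpos, mCamZpos) call
    static float yAngle(float camX, float camZ) {
        return (float)(Math.atan2(camX, camZ) * RAD2DEG);
    }
}
```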

Add the ChangeSceneAngle method. This is called when the RotateScene flag is set and the user rotates the view. We move the camera around the center of the scene (0,0,0) keeping the same distance. We will discuss angle calculations later in this walkthrough.

//rotate camera around fountain
void ChangeSceneAngle(float pChgXang, float pChgYang)
{
	//hypotenuse using 2 dimensions
	float hypLen = (float)Math.sqrt(mCamXpos*mCamXpos+
			mCamZpos*mCamZpos); //across floor
	//process X and Y angles separately
	if (pChgYang != 0)
	{
		mSceneYAng += pChgYang;
		if (mSceneYAng < 0) mSceneYAng += 360;
		if (mSceneYAng > 360) mSceneYAng -= 360;
		//move camera according to new Y angle
		mCamXpos = hypLen*(float)Math.sin(mSceneYAng*mDeg2Rad);
		mCamZpos = hypLen*(float)Math.cos(mSceneYAng*mDeg2Rad);
	}

	if (pChgXang != 0)
	{
		//hypotenuse using all 3 dimensions
		float hypZLen = (float)Math.sqrt
			(hypLen*hypLen+mCamYpos*mCamYpos); // 0,0,0 to camera
		mSceneXAng += pChgXang;
		if (mSceneXAng > 89.9) mSceneXAng = 89.9f;
		if (mSceneXAng < -89.9) mSceneXAng = -89.9f;
		//hypZLen stays same with new angle
		//move camera according to new X angle
		mCamYpos = hypZLen*(float)Math.sin(mSceneXAng*mDeg2Rad);
		float HypLenNew =
		    hypZLen*(float)Math.cos(mSceneXAng*mDeg2Rad); //across floor
		mCamZpos *= HypLenNew/hypLen;
		mCamXpos *= HypLenNew/hypLen;
	}
}

Add the ChangeCameraAngle method. This is called when the RotateScene flag is not set and the user rotates the view. We rotate the camera around its center point. We then update the camera target view point based on the updated angle. The distance between the camera and the target remains constant.

//change camera view direction
void ChangeCameraAngle(float pChgXang, float pChgYang)
{
	mCamXang += pChgXang;
	mCamYang += pChgYang;
	//keep angle within 360 degrees
	if (mCamYang > 360) mCamYang -= 360;
	if (mCamYang < 0) mCamYang += 360;
	//don't let view go upside down
	if (mCamXang > 89.9) mCamXang = 89.9f;
	if (mCamXang < -89.9) mCamXang = -89.9f;
	// move view target according to new angles
	mTargetY = mCamYpos+mViewRad*(float)Math.sin(mCamXang*mDeg2Rad);
	mTargetX = mCamXpos+mViewRad*(float)Math.cos(mCamXang*mDeg2Rad)*
			(float)Math.sin(mCamYang*mDeg2Rad);
	mTargetZ = mCamZpos+mViewRad*(float)Math.cos(mCamXang*mDeg2Rad)*
			(float)Math.cos(mCamYang*mDeg2Rad);
}

Add the MoveCamera method. This is called when the camera moves forward or backward. If the RotateScene flag is set, the camera always moves toward\away from the scene center (0,0,0). It can never pass the center. If RotateScene is not set, the camera moves towards\away from the camera target and the target is adjusted to match (distance to target stays constant). We set mResetMatrix to true so the clip region is updated during the next frame render.

void MoveCamera(float pDist)
{
	//move camera along line of sight toward target vertex
	if (RotateScene) //move towards\away from 0,0,0
	{
		//distance from 0,0,0
		float curdist = (float)Math.sqrt(
				mCamXpos*mCamXpos +
				mCamYpos*mCamYpos +
				mCamZpos*mCamZpos);
		//if camera will pass center then reduce distance
		if (pDist < 0 && curdist + pDist < 0.01) //can't go to exact center
			pDist = 0.01f-curdist;//0.01 closest distance
		float ratio = pDist/curdist;
		float chgCamX = (mCamXpos)*ratio;
		float chgCamY = (mCamYpos)*ratio;
		float chgCamZ = (mCamZpos)*ratio;
		mCamXpos += chgCamX;
		mCamYpos += chgCamY;
		mCamZpos += chgCamZ;
	}
	else //move towards\away from target
	{
		//mViewRad is 100, so do percentage
		float ratio = pDist/mViewRad;
		float chgCamX = (mCamXpos-mTargetX)*ratio;
		float chgCamY = (mCamYpos-mTargetY)*ratio;
		float chgCamZ = (mCamZpos-mTargetZ)*ratio;
		mCamXpos += chgCamX;
		mCamYpos += chgCamY;
		mCamZpos += chgCamZ;
		mTargetX += chgCamX;
		mTargetY += chgCamY;
		mTargetZ += chgCamZ;
	}
	mResetMatrix = true; //recalc depth buffer range
}

Add the onTouchEvent callback. This is called when the user touches the screen or drags across it. If the user touches and releases without dragging (drag 5 pixels or less), we assume it's a tap and move the camera forward\backward. If the user drags, we update the view angle based on drag distance. Tapping at the top of screen moves the camera forward. Tapping at the bottom moves the camera back.

public boolean onTouchEvent(final MotionEvent pEvent)
{
	if (pEvent.getAction() == MotionEvent.ACTION_DOWN) //start drag
	{
		//store start position
		mDragStartX = pEvent.getX();
		mDragStartY = pEvent.getY();
		mDownX = pEvent.getX();
		mDownY = pEvent.getY();
		return true; //must have this
	}
	else if (pEvent.getAction() == MotionEvent.ACTION_UP) //drag stop
	{
		//if user did not move more than 5 pixels, assume screen tap
		if ((Math.abs(mDownX - pEvent.getX()) <= 5) &&
			(Math.abs(mDownY - pEvent.getY()) <= 5))
		{
			if (pEvent.getY() < mScrHeight/2.0) //top half of screen
				MoveCamera(-5); //move camera forward
			else if (pEvent.getY() >
				mScrHeight/2.0) //bottom half of screen
				MoveCamera(5); //move camera back
		}
		return true; //must have this
	}
	else if (pEvent.getAction() == MotionEvent.ACTION_MOVE) //dragging
	{
		//to prevent constant recalcs, only process after 5 pixels
		//if user moves less than 5 pixels, we assume screen tap, not drag
		//we divide by 3 to slow down scene rotate
		if (Math.abs(pEvent.getX() - mDragStartX) > 5) //process Y axis rotation
		{
			if (RotateScene) //rotate around fountain
				ChangeSceneAngle(0,
					(mDragStartX - pEvent.getX())/3f); //Y axis
			else //rotate camera
				ChangeCameraAngle(0,
					(mDragStartX - pEvent.getX())/3f); //Y axis
			mDragStartX = pEvent.getX();
		}
		if (Math.abs(pEvent.getY() -
			mDragStartY) > 5) //process X axis rotation
		{
			if (RotateScene) //rotate around fountain
				ChangeSceneAngle(
					(pEvent.getY() - mDragStartY)/3f, 0); //X axis
			else //rotate camera
				ChangeCameraAngle(
					(mDragStartY - pEvent.getY())/3f, 0); //X axis
			mDragStartY = pEvent.getY();
		}
		return true; //must have this
	}
	return super.onTouchEvent(pEvent);
}

Add the DrawFountain method. This calculates the billboard angle at 0,0,0 and calculates the position of each drop. We assume each drop travels in an arc, so we just divide the arc (180 degrees) by the drop count and use that as the drop position. Each drop only travels a short distance (mRepeatLen) then repeats. mAngCtr (set in onDrawFrame) is used to increase the angle offset each frame, creating the animation. You could add some randomness here so each drop has a slightly different path, but for now, all the drops will follow the same arc.

void DrawFountain(GL11 gl)
{
	//get billboard angles for 0,0,0
	//calculate angle from 0,0,0 to camera, used if single billboard
	float angY = 270-(float)Math.atan2(mCamZpos,mCamXpos)*
			(float)mRad2Deg; //around Y axis

	float hypLen = (float)Math.sqrt(mCamXpos*mCamXpos+
			mCamZpos*mCamZpos); //across floor
	float angX = (float)Math.atan2(mCamYpos,hypLen)*(float)mRad2Deg; //X axis

	int dropCtr = 0;
	//rotate around fountain center
	for (float ftnAngY = 0;ftnAngY < 360;ftnAngY += 360/mStreamCnt)
	{
		//draw each arc
		//arcAng will cycle through single segment and repeat
		float arcAng = mAngCtr%(mRepeatLen);
		for (;arcAng < 180;arcAng += mRepeatLen)
		{
			//default arc is half circle
			//use 0.75 to reduce arc width
			float dropRad = 0.75f*(mArcRad-mArcRad*
				(float)Math.cos(arcAng*mDeg2Rad));
			//use 1.5 to increase arc height
			dropCoords[dropCtr][1] = 1.5f*mArcRad*
				(float)Math.sin(arcAng*mDeg2Rad); //Y
			dropCoords[dropCtr][0] = dropRad*
				(float)Math.sin(ftnAngY*mDeg2Rad); //X
			dropCoords[dropCtr][2] = dropRad*
				(float)Math.cos(ftnAngY*mDeg2Rad); //Z
			dropCtr++; //next drop
		}
	}
	gl.glColor4f(0.5f, 0.5f, 1f, 1f); //light blue
	DrawDropTriangles(gl, angX, angY, dropCoords); //draw all triangles at once
}

Add the DrawDropTriangles method. This renders each drop as a separate triangle. The pDropCoords array only has the top vertex of each triangle. If the MultiBillboard flag is set, we recalculate the billboard angle for each drop so each drop appears to be a perfect triangle facing the camera. If MultiBillboard is not set, we just use the billboard angle for (0,0,0). We will discuss billboarding later.

//each triangle has the same dimensions, only location and rotation are different
void DrawDropTriangles(GL11 gl, float pAngX, float pAngY, float[][] pDropCoords)
{
	//DropCoords array only contains top vertex of each drop triangle
	//for each triangle, just translate to top vertex and redraw
	//same triangle each time
	int TriCnt = pDropCoords.length; //triangle count
	// initialize vertex Buffer for triangle
	gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, mDROP);
	gl.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);

	for (int ctr = 0;ctr < TriCnt;ctr++)
	{
		gl.glPushMatrix(); //translate\rotate only affects this single triangle
		gl.glTranslatef(
			pDropCoords[ctr][0], pDropCoords[ctr][1],pDropCoords[ctr][2]);
		if (MultiBillboard) //calc each triangle billboard angle separately
		{
			float hypLen = 0;
			float distX = mCamXpos-pDropCoords[ctr][0];
			float distY = mCamYpos-pDropCoords[ctr][1];
			float distZ = mCamZpos-pDropCoords[ctr][2];

			//hypotenuse in 2D
			hypLen =
				(float)Math.sqrt(distX*distX+distZ*distZ); //across floor
			pAngY = 270-(float)Math.atan2(distZ,distX)*(float)mRad2Deg;
			//3rd dimension
			pAngX = (float)Math.atan2(distY,hypLen)*(float)mRad2Deg;
		}
		gl.glRotatef(pAngY, 0, 1, 0);
		gl.glRotatef(pAngX, 1, 0, 0);
		gl.glDrawArrays(GL11.GL_TRIANGLES, 0, mBufferLen[mDROP]); //single drop
		gl.glPopMatrix(); //done with this triangle
	}
	gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0); //unbind from buffer
}

Add the ShowMaxDepthBits method. This method will determine the maximum size of the depth buffer for your device. It is not called in our application, but can be useful for testing.

void ShowMaxDepthBits() //resolution of depth buffer
{
	EGL10 egl = (EGL10)EGLContext.getEGL();
	EGLDisplay dpy = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
	EGLConfig[] conf = new EGLConfig[100]; //buffer for surface configs
	//get all possible configs for this OpenGL surface
	egl.eglGetConfigs(dpy, conf, 100, null);
	int maxBits = 0;
	int[] value = new int[1]; //for return value
	//scan all possible configs for maximum depth bit count
	for(int i = 0; i < 100 && conf[i] != null; i++)
	{
		//get depth bit size for this config
		egl.eglGetConfigAttrib(dpy, conf[i], EGL10.EGL_DEPTH_SIZE, value);
		maxBits = value[0]>maxBits ? value[0] : maxBits;
	}
	SetStatusMsg("DepthBits "+maxBits); //display
}

Finish the FountainGLRenderer class with two test methods. These were used during testing but are no longer called by the application. They may be useful for debugging or adding additional objects to the scene. For maximum performance, it is better to use the StoreVertexData\DrawObject methods, though that requires a more complicated setup.

	//utility function for drawing a square
	void DrawQuad(GL11 gl, float[] pX, float[] pY, 
		float[] pZ) //clockwise starting top left
		float[] vtx = new float[12];
		int i = 0;
		vtx[i++]=pX[0]; vtx[i++]=pY[0]; vtx[i++]=pZ[0];
		vtx[i++]=pX[1]; vtx[i++]=pY[1]; vtx[i++]=pZ[1];
		vtx[i++]=pX[3]; vtx[i++]=pY[3]; vtx[i++]=pZ[3];
		vtx[i++]=pX[2]; vtx[i++]=pY[2]; vtx[i++]=pZ[2];

		FloatBuffer buffer;
		ByteBuffer vbb = 
			ByteBuffer.allocateDirect(vtx.length * 4); //float is 4 bytes
		//use the device hardware's native byte order
		//create a floating point buffer from the ByteBuffer
		buffer = vbb.asFloatBuffer(); 	
		buffer.put(vtx); //add the coordinates to the FloatBuffer
		buffer.position(0); //set the buffer to read the first coordinate
		//3 values per vertex [x/y/z]
		gl.glVertexPointer(3, GL11.GL_FLOAT, 0, buffer); 
		gl.glDrawArrays(GL11.GL_TRIANGLE_STRIP, 0, 4); //4 vertices

	//draw single point
	void DrawPoint(GL11 gl, float pVertexX, float pVertexY, float pVertexZ)
		FloatBuffer buffer;
		float[] vtx = new float[3];
		int i=0;
		vtx[i++]=pVertexX; vtx[i++]=pVertexY; vtx[i++]=pVertexZ;
		ByteBuffer vbb = 
			ByteBuffer.allocateDirect(vtx.length * 4); //float is 4 bytes
		//use the device hardware's native byte order
		//create a floating point buffer from the ByteBuffer
		buffer = vbb.asFloatBuffer(); 	
		buffer.put(vtx); //add the coordinates to the FloatBuffer
		buffer.position(0); //set the buffer to read the first coordinate
		//3 values per vertex [x/y/z]
		gl.glVertexPointer(3, GL11.GL_FLOAT, 0, buffer); 
		gl.glDrawArrays(GL11.GL_POINTS, 0, 1); //only one point

Coding the FountainGLActivity Class

This is the class that gets used when the application first starts. For our application, it is used to create the FountainGLRenderer class and process the options menu.


Remove all the existing code from this file.

Add the package name and imports needed for the Activity.

package droid.fgl;

import droid.fgl.FountainGLRenderer;
import android.app.Activity;
import android.content.SharedPreferences;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.view.Window;
import android.view.WindowManager.LayoutParams;

Begin the FountainGLActivity class and add two variables. mRenderer will be a pointer to the FountainGLRenderer instance and mMenuList will be used to store the items of the options menu.

public class FountainGLActivity extends Activity
	FountainGLRenderer mRenderer = null;
	MenuItem[] mMenuList = new MenuItem[10]; //options menu

Add the onCreate callback. This is called when the application first starts and when the phone changes orientation (Portrait\Landscape). First, we set the application to full screen and disable the screensaver, then call the parent constructor. We create the FountainGLRenderer instance passing the instance of the Activity. We then load the user preferences. If the preferences are not available, the defaults are used. We then call SwapCenter twice to ensure that the camera and scene angles are set properly.

public void onCreate(Bundle savedInstanceState) {
	requestWindowFeature(Window.FEATURE_NO_TITLE); //hide title bar
	getWindow().setFlags(0xFFFFFFFF, //hide status bar and keep phone awake
		LayoutParams.FLAG_FULLSCREEN|LayoutParams.FLAG_KEEP_SCREEN_ON);
	super.onCreate(savedInstanceState); //call parent constructor

	//onCreate is called when phone orientation changes
	//no need to recreate render class
	if (mRenderer == null)
		mRenderer = new FountainGLRenderer(this); //openGL surface

	//retrieve options
	SharedPreferences sp = getSharedPreferences("FountainGL", 0);
	mRenderer.ShowBall = sp.getBoolean("ShowBall", mRenderer.ShowBall);
	mRenderer.ShowFountain = sp.getBoolean("ShowFountain", mRenderer.ShowFountain);
	mRenderer.ShowFloor = sp.getBoolean("ShowFloor", mRenderer.ShowFloor);
	mRenderer.ShowPool = sp.getBoolean("ShowPool", mRenderer.ShowPool);
	mRenderer.ShowFPS = sp.getBoolean("ShowFPS", mRenderer.ShowFPS);
	mRenderer.UseTiltAngle = sp.getBoolean("UseTiltAngle", mRenderer.UseTiltAngle);
	mRenderer.RotateScene = sp.getBoolean("RotateScene", mRenderer.RotateScene);
	//calculate angle and position of camera
	mRenderer.SwapCenter(); //toggle twice to recalc angles
	mRenderer.SwapCenter(); //without changing the setting
}

Add the onPrepareOptionsMenu callback. This is called each time the menu is shown so we can change the menu as needed. All of the user options are boolean toggles, so we just set each menu option based on the current toggle setting. Note that the menu can only hold 5 items, so the last five items will go to the overflow menu (user must click More). The first five items should be the most used.

//this method called every time menu is shown
public boolean onPrepareOptionsMenu(Menu menu)
{
	menu.clear(); //reset menu
	//set menu items based on current settings
	mMenuList[0] = menu.add((mRenderer.ShowBall?"Hide":"Show")+" Ball");
	mMenuList[1] = menu.add((mRenderer.ShowFloor?"Hide":"Show")+" Floor");
	mMenuList[2] = menu.add((mRenderer.ShowFountain?"Hide":"Show")+" Fountain");
	mMenuList[3] = menu.add((mRenderer.ShowPool?"Hide":"Show")+" Pool");
	mMenuList[4] = menu.add("Rotate "+(mRenderer.RotateScene?"Camera":"Scene"));
	mMenuList[5] = menu.add("Use "+(mRenderer.UseTiltAngle?"Touch":"Tilt")+" Angle");
	mMenuList[6] = menu.add((mRenderer.MultiBillboard?"Single":"Multi")+" Billboard");
	mMenuList[7] = menu.add((mRenderer.ShowFPS?"Hide":"Show")+" FPS");
	mMenuList[8] = menu.add(mRenderer.Paused?"Unpause":"Pause");
	mMenuList[9] = menu.add("Exit");
	return super.onPrepareOptionsMenu(menu);
}

Finish off the FountainGLActivity class by adding the onOptionsItemSelected callback. This is called when the user chooses a menu item. For the RotateScene option, we call SwapCenter because we need to recalculate the camera or scene angles when the center of rotation changes. For the other options, we just toggle the current setting. For Exit, finish is called to close the application. After changing a setting, the settings are persisted so they will be the same for the next application run.

	//listener for menu item clicked
	public boolean onOptionsItemSelected(MenuItem item)
	{
		if (item == mMenuList[0]) //Show\Hide Ball
			mRenderer.ShowBall = !mRenderer.ShowBall;
		else if (item == mMenuList[1]) //Show\Hide Floor
			mRenderer.ShowFloor = !mRenderer.ShowFloor;
		else if (item == mMenuList[2]) //Show\Hide Fountain
			mRenderer.ShowFountain = !mRenderer.ShowFountain;
		else if (item == mMenuList[3]) //Show\Hide Pool
			mRenderer.ShowPool = !mRenderer.ShowPool;
		else if (item == mMenuList[4]) //Rotate Camera\Scene
			mRenderer.SwapCenter(); //recalc angles for new rotation center
		else if (item == mMenuList[5]) //Use Touch\Tilt Angle
			mRenderer.UseTiltAngle = !mRenderer.UseTiltAngle;
		else if (item == mMenuList[6]) //Single\Multi Billboard
			mRenderer.MultiBillboard = !mRenderer.MultiBillboard;
		else if (item == mMenuList[7]) //Show\Hide FPS
			mRenderer.SetShowFPS(!mRenderer.ShowFPS); //also clears text
		else if (item == mMenuList[8]) //Pause\Unpause
			mRenderer.Paused = !mRenderer.Paused;
		else if (item == mMenuList[9]) //Exit
			finish(); //close the application

		//store options
		getSharedPreferences("FountainGL", 0).edit()
		 .putBoolean("ShowBall", mRenderer.ShowBall)
		 .putBoolean("ShowFountain", mRenderer.ShowFountain)
		 .putBoolean("ShowPool", mRenderer.ShowPool)
		 .putBoolean("ShowFloor", mRenderer.ShowFloor)
		 .putBoolean("ShowFPS", mRenderer.ShowFPS)
		 .putBoolean("UseTiltAngle", mRenderer.UseTiltAngle)
		 .putBoolean("RotateScene", mRenderer.RotateScene)
		 .commit(); //persist for next run
		return super.onOptionsItemSelected(item);
	}

And that finishes off the application code, now we're ready to run the application and see the scene we created.

Build the project (Project->Build All). If you have Build Automatically set, the project will rebuild each time you save a source file.

Running the App

In Eclipse, press Ctrl-F11 to start the application (or F11 to debug).

After a few seconds (if everything goes right), the application should start on the virtual device (or your phone if it's attached).

To change the orientation of the virtual device, use keypad 9 (NumLock must be off). To test the phone tilt functionality, you will need to use your actual phone. The virtual device does not tilt.

To exit the app, use the back button on your phone (or Exit) or choose Run->Terminate in Eclipse.

To install the application to your phone, you can use an APK file.

On your phone, in Settings->Applications, enable Unknown sources to allow non-market apps on your phone.

In Eclipse, choose File->Export..->Android-> Export Android Application.

Image 10

Click Next.

Enter FountainGL as the project name.

Image 11

Click Next.

If you already have a keystore, choose Use existing keystore. If not, here are the steps to create one.

Choose Create new keystore. Enter a file name (no extension is needed) and a password.

Image 12

Click Next.

For Alias and Password, you can use the same values you entered into the previous screen. Set validity to 100 years. Enter any name in the Name field. If you plan to publish any apps using this keystore, you should probably use your real information.

Image 13

Click Next.

Enter the file name for your apk file.

Image 14

Click Finish.

To install the apk file onto your phone, use the adb tool in the android-sdk\platform-tools folder. If you don't know the folder, just search your computer for adb.exe.

To install the apk file, use this command line:

adb install C:\FountainGL.apk

You can also use one of the (free) installer apps from the Android market which lets you install apk files from the phone's SD card.

Once the install is complete, FountainGL should be available in your phone's application list.

Congratulations on your new application. Be sure to test the options and see how the FPS is affected and the effect of billboarding.

The remainder of this walkthrough discusses some of the concepts used in this application.

Calculating Angles and Coordinates

For those of us that haven't touched geometry since high school, here's a quick refresher. I've abbreviated arccos\arcsin\arctan to acos\asin\atan to match the Java functions.

Image 15

    Given a right triangle:

    x = h·cos(θ)    θ = acos(x/h)
    y = h·sin(θ)    θ = asin(y/h)
    y = x·tan(θ)    θ = atan(y/x)

The atan2 function

The above equations used to compute x and y are accurate for the full 360 degrees. The inverse functions used to compute the angles (acos\asin\atan) are only accurate for 180 degrees. The other 180 degrees will produce the same angles.

Consider the diagram below:

Image 16

    Here we have 2 angles, 45 and 225 degrees. If we compute the coordinates from the angles, the results are correct:

h = √(5² + 5²) = 7.07

x = 7.07*cos(45) = 5 
y = 7.07*sin(45) = 5 

x = 7.07*cos(225) = -5 
y = 7.07*sin(225) = -5 

If we compute the angles from the coordinates, we run into a problem:
θ = acos(5/7.07) = 45   Correct
θ = acos(-5/7.07) = 135   Wrong! We want 225 (or -135).

This is because only one coordinate sign is used in the formula. The other variable used is the hypotenuse (h), which is always positive. If we try using atan, the same issue occurs: atan(5/5) = atan(-5/-5)

We could solve this by adding a check in our code:
if (y<0) Angle = -Angle;

Fortunately, most programming languages include the Atan2 function to solve this exact issue. Atan2 considers both coordinate signs when computing the angle:
θ = atan2(y,x) 
θ = atan2(5,5) = 45   Correct
θ = atan2(-5,-5) = -135   Correct

Note that in Excel, the ATAN2 function has the parameters reversed (x,y).
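The quadrant problem is easy to reproduce with the same `Math` calls the renderer uses (the class and method names here are ours, for illustration only):

```java
public class Atan2Demo {
    // angle from acos: loses the sign of y, so 225 degrees collapses to 135
    static double angleFromAcos(double x, double y) {
        double h = Math.sqrt(x * x + y * y); // hypotenuse, always positive
        return Math.toDegrees(Math.acos(x / h));
    }

    // angle from atan2: uses both coordinate signs, so all four quadrants work
    static double angleFromAtan2(double x, double y) {
        return Math.toDegrees(Math.atan2(y, x));
    }

    public static void main(String[] args) {
        System.out.println(angleFromAcos(5, 5));    // 45.0  - correct
        System.out.println(angleFromAcos(-5, -5));  // 135.0 - wrong, want -135
        System.out.println(angleFromAtan2(5, 5));   // 45.0  - correct
        System.out.println(angleFromAtan2(-5, -5)); // -135.0 - correct
    }
}
```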

Working in 3D

The scene in our OpenGL program is based in 3D, so we need to compute angles and coordinates in 3 dimensions.

Image 17

    Here is the process to calculate the scene angles from the camera coordinates.

β = atan2(cx, cz)
h = √(cx² + cz²)
α = atan2(cy, h)
hz = √(cx² + cy² + cz²)

To calculate the camera coordinates from the scene angles (and hz), we just reverse the process.

h = hz * cos(α)
cy = hz * sin(α)
cx = h * sin(β)
cz = h * cos(β)

When we rotate the camera, the calculations are the same except that the camera is at the center and the camera target moves around the camera.

Note that in Java, these math functions compute the angle in radians where PI (3.141592) radians = 180 degrees.

Also note that in the diagram, the Z axis points along the floor. This is because the android screen (the camera) is viewing the scene from the side and in OpenGL, the Z axis goes through the screen.
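The round trip above can be sketched in plain Java. The helper names and example coordinates are ours; the trigonometry mirrors the math used by SwapCenter and ChangeSceneAngle:

```java
public class SceneAngles {
    // camera coordinates -> scene angles (degrees) plus distance
    static double[] anglesFromCamera(double cx, double cy, double cz) {
        double h = Math.sqrt(cx * cx + cz * cz);            // hypotenuse across the floor
        double beta = Math.toDegrees(Math.atan2(cx, cz));   // angle around Y axis
        double alpha = Math.toDegrees(Math.atan2(cy, h));   // angle around X axis
        double hz = Math.sqrt(cx * cx + cy * cy + cz * cz); // full 3D distance
        return new double[] { alpha, beta, hz };
    }

    // scene angles + distance -> camera coordinates (the reverse process)
    static double[] cameraFromAngles(double alpha, double beta, double hz) {
        double h = hz * Math.cos(Math.toRadians(alpha));    // back onto the floor
        double cy = hz * Math.sin(Math.toRadians(alpha));
        double cx = h * Math.sin(Math.toRadians(beta));
        double cz = h * Math.cos(Math.toRadians(beta));
        return new double[] { cx, cy, cz };
    }

    public static void main(String[] args) {
        double[] ang = anglesFromCamera(30, 40, 50);        // arbitrary camera position
        double[] cam = cameraFromAngles(ang[0], ang[1], ang[2]);
        // round trip recovers the original coordinates
        System.out.printf("%.3f %.3f %.3f%n", cam[0], cam[1], cam[2]);
    }
}
```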

Vertex Sequencing

When coordinates (vertices) are stored in the GPU buffer, they can be organized in several ways to create different shapes. All the shapes consist of triangles, and some triangles can share vertices allowing for reduced storage and faster rendering. OpenGL will render the coordinates based on the constant passed in the glDrawArrays call. In the FountainGL project, we used three types of vertex sequences.

Image 18  Image 19  Image 20

GL_TRIANGLE_STRIP is used when each triangle shares a side with the triangle next to it. This sequence was used to create the pool wall in our application.

GL_TRIANGLE_FAN is used when each triangle shares a common central vertex. This sequence was used to create the pool water in our application.

GL_TRIANGLES is used when creating triangles that are not attached to each other so nothing is shared. This requires the most storage and rendering time of the three sequence types we used. This sequence was used to create the floor, ball, and fountain drops in our application.
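The storage savings are easy to quantify: after the first triangle, a strip or fan needs only one new vertex per triangle, while GL_TRIANGLES needs three vertices for every triangle. A small sketch (names ours):

```java
public class VertexCounts {
    // vertices needed to draw n triangles with each sequence type
    static int triangles(int n)     { return 3 * n; } // GL_TRIANGLES: nothing shared
    static int triangleStrip(int n) { return n + 2; } // each new vertex adds a triangle
    static int triangleFan(int n)   { return n + 2; } // shared center vertex + shared edge

    public static void main(String[] args) {
        // e.g., a ring of 30 triangles:
        System.out.println(triangles(30));     // 90 vertices
        System.out.println(triangleStrip(30)); // 32 vertices
        System.out.println(triangleFan(30));   // 32 vertices
    }
}
```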


Billboarding

Billboarding is a way to make 2D objects appear 3D. This increases performance because the OpenGL engine does not need to render a complete 3D object. For example, a ball looks just like a circle facing the camera and the circle is rendered much faster. The trick to billboarding is rotating the 2D object so it always faces the camera and appears the same as a 3D object.
In our program, we implemented billboarding two ways: Single billboard and Multi billboard.

Single Billboard

Here we calculate the billboard angle at the center of the fountain (0,0,0) to the camera then use that same angle for every fountain drop.

We can render the fountain faster because we only need to calculate the billboard angle once. From a distance, things look okay, but close up, our shortcut becomes obvious. The drops are rotated away from the camera and they no longer appear as triangles.

Distance    Close up
 Image 21   Image 22   Image 23   Image 24

Multi Billboard

Here we calculate the billboard angle for every drop which increases render time. From a distance the scene looks nearly identical to the single billboard render, yet when close up it is noticeably better. The drops are facing the camera and appear as full triangles.

Distance    Close up
 Image 25   Image 26   Image 27   Image 28

In a scene where the fountain is always in the background, the single billboard method would suffice and improve render time. Since our application allows the camera to get close to the fountain, we give the user the multi-billboard option.
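The difference between the two methods comes down to how often the angle to the camera is recomputed. This sketch (class and method names ours) uses the same atan2 math as DrawDropTriangles to show how far off the single-billboard angle gets for a drop away from the fountain center once the camera is close:

```java
public class BillboardAngles {
    // Y-axis billboard angle from a point toward the camera,
    // same formula shape as DrawDropTriangles: 270 - atan2(dz, dx)
    static double billboardYAngle(double camX, double camZ, double px, double pz) {
        return 270 - Math.toDegrees(Math.atan2(camZ - pz, camX - px));
    }

    public static void main(String[] args) {
        double camX = 10, camZ = 0; // camera close to the fountain
        // single billboard: every drop reuses the angle computed at the origin
        double single = billboardYAngle(camX, camZ, 0, 0);
        // multi billboard: recomputed per drop; a drop off to the side
        // needs a noticeably different rotation
        double multi = billboardYAngle(camX, camZ, 0, 5);
        System.out.println(single);         // 270.0
        System.out.println(multi - single); // the error the single method hides
    }
}
```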


Splashes

Image 29 As requested by ErrolErrol, splashes were added to the scene. The splashes are created by using a ring of triangles around the splash point.

To create the triangle vertices, we just go around the drop point and calculate the coordinates of each triangle vertex. We are using 6 triangles, so we divide the circle by 12. For odd steps, we calculate the triangle edge vertex using a smaller radius. For even steps, we calculate the middle vertex of the triangle using a larger radius. The middle vertex is also higher (on Y axis) than the edge vertices, so the triangle points up from the pool surface.

By creating triangles that angle up, we can create the splash effect by scaling the triangles on the Y axis:
gl.glScalef(1f, Math.abs((mRepeatLen/2f - mAngCtr%(mRepeatLen)) * 0.4f), 1f);
If mRepeatLen is 10, the inner term runs from 5 ⇒ -5, so after taking the absolute value the factor cycles 5 ⇒ 0 ⇒ 5 (then times 0.4 for the final scale). We only scale on the Y axis so the splashes get taller, not wider. The mAngCtr is used so we stay in sync with the drop cycle.

All the splash triangles for the entire scene are stored together and drawn at the same time. No billboarding is used when drawing the splashes because the splashes look okay from most angles and we save on CPU time.
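The scale-factor cycle can be checked in isolation (the class name is ours; the formula is the glScalef expression shown above):

```java
public class SplashScale {
    // Y scale factor for the splash triangles at animation counter angCtr,
    // mirroring: Math.abs((mRepeatLen/2f - mAngCtr%(mRepeatLen)) * 0.4f)
    static float scaleY(float angCtr, float repeatLen) {
        return Math.abs((repeatLen / 2f - angCtr % repeatLen) * 0.4f);
    }

    public static void main(String[] args) {
        float repeatLen = 10f;
        // over one cycle the final factor runs 2 -> 0 -> 2 (|5..-5| times 0.4)
        for (float a = 0; a <= 10; a += 2.5f)
            System.out.println(scaleY(a, repeatLen)); // 2.0 1.0 0.0 1.0 2.0
    }
}
```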

Perspective and glFrustumf


In our application, we used the glFrustumf method to set up the perspective for the camera. The perspective is basically the field (or angle) of view for the camera. A larger FOV allows the camera to see more of the scene, but objects appear smaller and the size difference between close and far objects is more pronounced. You can think of it as putting a wide angle lens on your camera. A smaller FOV has the opposite effect; the camera can see less of the scene and the size change between near and far objects is less significant. This is the same effect produced by using a zoom lens on your camera.

In these two screen captures, the scene angles are the same, but the difference in FOV creates noticeably different views.

Image 30    Image 31
Frustum Length = 1
Large FOV
    Frustum Length = 2
Small FOV
Image 32    Image 33


The glFrustumf call takes six parameters; the first five set up the perspective (we'll discuss zFar in a moment). These parameters define the pyramid (frustum) of the perspective.

  glFrustumf(left, right, bottom, top, zNear, zFar)

Image 34

When creating the perspective, the shape of the pyramid is important, not the size. As long as the ratios are the same, the perspective is the same:

  glFrustumf(-2, 2, -4, 4, 100, 500)

creates the same perspective as:

  glFrustumf(-4, 4, -8, 8, 200, 500)

The difference between these two commands is the clipping region. zNear helps determine the shape of the perspective, but it also indicates the near clipping region. Any pixels that are closer to the camera than this line are not shown. Any pixels that are farther than the zFar clipping region are also not shown. zNear cannot be zero or negative.
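You can verify the equal-ratio claim numerically: dividing each frustum edge by its zNear gives the shape, and the two calls above produce identical shapes (class and method names ours):

```java
public class FrustumShape {
    // the shape of the frustum is set by the edge-to-zNear ratios;
    // zFar only extends the pyramid, it does not change the shape
    static double[] shape(double left, double right, double bottom,
                          double top, double zNear) {
        return new double[] { left / zNear, right / zNear,
                              bottom / zNear, top / zNear };
    }

    public static void main(String[] args) {
        double[] a = shape(-2, 2, -4, 4, 100); // glFrustumf(-2,2,-4,4,100,500)
        double[] b = shape(-4, 4, -8, 8, 200); // glFrustumf(-4,4,-8,8,200,500)
        System.out.println(java.util.Arrays.equals(a, b)); // true: same perspective
    }
}
```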

The Depth Buffer

When OpenGL renders a scene, it uses a depth buffer to compare pixels according to distance from the camera. As each pixel is drawn, its distance is tested against the value already stored in the buffer, so closer objects will hide far objects (OpenGL can also skip pixels if it knows they will be hidden).

The depth buffer consists of buckets from zNear to zFar and all the pixel regions in the scene will go into one of these buckets. Pixels in the same bucket are considered an equal distance from the camera and will be rendered as a single plane, so neither can reliably hide the other. There are always the same number of buckets and they are divided across the clipping region (zNear to zFar). A large clipping region will have the same number of buckets as a small clipping region, but the buckets will be bigger.

The precision of the depth buffer (number of buckets) can vary between devices. My Huawei has a 16 bit buffer, which indicates 65,536 buckets. Some devices will have a 24 or 32 bit buffer, which would provide more accuracy.


It's important to know that the buckets of the buffer are not equally sized. The buckets are much more dense (smaller buckets) at zNear and spread out at zFar. This is so objects close to the camera will have more precision and less risk of pixel overlap. The overlap problem is called z-fighting.

Here are two screen captures from the FountainGL application running on the emulator. The camera is under the fountain looking up and the pool is 6 units above the floor.

Clip Region = 300
glFrustumf(-1, 1, -1, 1, 1, 300)
  Clip Region = 1000
glFrustumf(-1, 1, -1, 1, 1, 1000)
Image 35    Image 36

As you can see, the image on the right looks incorrect. It looks like the pool is falling through the floor. The issue is that the clipping region is so large (1000), the buckets are larger and pixels which are close together are falling into the same bucket and being rendered on the same plane. The left image looks correct because the clipping region is much smaller (300) creating smaller buckets and better depth resolution.

Bucket sizes

As mentioned previously, the bucket size is quite small near the camera (zNear) and quite large in the distance (zFar). Bucket size grows rapidly (roughly with the square of the distance) as distance from the camera increases. If we set zNear to 1 and zFar to 100, here are the relative bucket sizes at 10 unit increments.

 Image 37

The first bucket is so small it doesn't even show on the bar. The last bucket, covering .0015 units, is 10,000 times larger than the first, which covers a tiny .00000015 units. For a 16 bit depth buffer, there will be 65,536 (2^16) buckets.

As you can see from the graph, the scene will lose depth resolution quickly as objects move away from zNear. When creating a scene, the goal is to keep objects close to zNear and keep the clipping region (zFar-zNear) as small as possible.
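The bucket growth can be estimated with the standard perspective depth-precision formula (the size of one depth step at eye distance z; this is an approximation, and the class name is ours):

```java
public class DepthBuckets {
    // approximate z-bucket size at eye distance z for a perspective depth buffer:
    // one depth step covers z^2 * (zFar - zNear) / (zNear * zFar * 2^bits) units
    static double bucketSize(double z, double zNear, double zFar, int bits) {
        return z * z * (zFar - zNear) / (zNear * zFar * Math.pow(2, bits));
    }

    public static void main(String[] args) {
        double near = bucketSize(1, 1, 100, 16);   // bucket at zNear
        double far  = bucketSize(100, 1, 100, 16); // bucket at zFar
        // last bucket is (zFar/zNear)^2 times bigger than the first
        System.out.println(Math.round(far / near)); // 10000
    }
}
```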

Shifting the clipping region

Unfortunately, our application allows the camera to move around the entire scene and view the fountain from any distance. If we use 300 for the clip region, the scene would begin to clip as soon as the camera moves back and using 1000 would cause excessive z-fighting. To get around this problem, we move the clipping region when the camera moves forward or back so the clipping region length (and depth resolution) remains constant.

  Near Clipping Region

Image 38

  Far Clipping Region

Image 39

Multipass Rendering

In some cases, the scene that is being rendered is large and we don't want to sacrifice depth resolution to render the scene properly. This is where multipass rendering comes in. This is when you render the scene in chunks starting with distant objects and ending with nearby objects. Each chunk will use a separate depth buffer so each chunk will be more accurately rendered (less z-fighting). The cost of this is the additional processing time to render the full scene.

  Render far objects using far clipping region.

Image 40

  Reset the depth buffer then render near objects using near clipping region.

Image 41

  Complete scene created with separate depth buffers.

Image 42

If you want to test multipass rendering in the FountainGL application, comment out the existing calls to glMatrixMode (both of them) and DrawSceneObjects then insert this code in onDrawFrame right after the gluLookAt call. If you want to see a gap between render regions, set the glFrustumf far clip region to 98 in the bottom code block. In this scene, all the objects have the same center point so we're actually rendering the same objects twice (pixels will be clipped according to each clipping region).

//=== Multipass Render ===
//remove the other calls to glMatrixMode and DrawSceneObjects

// --- Draw far objects ---
//set clip region for 100 - 500 units from camera
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();   // reset the projection before applying the frustum
gl.glFrustumf(-mScrRatio*100, mScrRatio*100, -1f*100, 1f*100, 1f*100, 500);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();   // reset the modelview to its default state
GLU.gluLookAt(gl, mCamXpos, mCamYpos, mCamZpos, mTargetX, mTargetY, 
		mTargetZ, 0f, 100.0f, 0.0f);
DrawSceneObjects(gl); // <----- Far objects

// --- Draw near objects ---
//reset (clear) the depth buffer so the near pass starts fresh
gl.glClear(GL10.GL_DEPTH_BUFFER_BIT);
//set clip region for 1 - 100 units from camera
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-mScrRatio, mScrRatio, -1f, 1f, 1f, 100); //set far to 98 for a visible gap
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, mCamXpos, mCamYpos, mCamZpos, mTargetX, mTargetY, 
		mTargetZ, 0f, 100.0f, 0.0f);
DrawSceneObjects(gl); // <----- Near objects

Calculating FPS (Frames Per Second)

In the FountainGL application, the FPS is the average over the last 20 frames. This is done by storing the end time of each frame in a circular array. After 20 frames, we take the end time of the current frame, subtract the end time of the oldest stored frame, and divide 20 by that elapsed time. The FPS result will not be correct until the application has run for 20 frames.

For the sake of simplicity, let's assume we are calculating based on 10 frames. For this example, we'll assume every frame takes 5 seconds (it would go much faster in real life).

  At application start, there is no frame data in the frame array and the frame pointer is pointing to slot 0.

Array Slot:    0    1    2    3    4    5    6    7    8    9
Frame Time:    0    0    0    0    0    0    0    0    0    0
Frame Ptr:     ^

  After 5 frames, we have populated 5 frames of data and shifted the pointer at each frame. The first frame ended at boottime+100 seconds. Each frame takes 5 seconds. The FPS calculation is still wrong because of the zero entries.

Array Slot:    0    1    2    3    4    5    6    7    8    9
Frame Time:  100  105  110  115  120    0    0    0    0    0
Frame Ptr:                              ^

  After 9 frames, we have populated 9 frames of data.

Array Slot:    0    1    2    3    4    5    6    7    8    9
Frame Time:  100  105  110  115  120  125  130  135  140    0
Frame Ptr:                                                  ^

  After 10 frames, we have populated the entire array. The FPS calculation will be correct now. The current frame will be at 150 seconds, so the FPS average will be 10/(150-100) = .2 frames per second. After the FPS calculation, we set the value at the frame pointer to the current frame time so slot 0 will be set to 150.

Array Slot:    0    1    2    3    4    5    6    7    8    9
Frame Time:  100  105  110  115  120  125  130  135  140  145
Frame Ptr:     ^

  After 15 frames, we have wrapped around the array, but the frame pointer is still correctly pointing to 10 frames ago. The FPS average will be 10/(175-125) = .2 frames per second. After the FPS calculation, we set the value at the frame pointer to the current frame time so slot 5 will be set to 175.

Array Slot:    0    1    2    3    4    5    6    7    8    9
Frame Time:  150  155  160  165  170  125  130  135  140  145
Frame Ptr:                              ^

As noted previously, the actual code uses 20 frames, but we use 10 here to save some space. In the application, the FPS value is displayed every 10 frames. If you get a high FPS on your device, you may want to increase the frame count so the FPS display doesn't become a blur of digits.
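The scheme walked through above can be sketched as a small ring-buffer class. This is a hypothetical stand-alone version for illustration; the actual FountainGL code is structured differently:

```java
// Stand-alone sketch of the ring-buffer FPS scheme described above
// (illustrative helper; not the exact FountainGL implementation).
class FpsCounter {
    private final double[] endTimes; // end times of the last N frames
    private int ptr = 0;             // index of the oldest stored frame
    private int filled = 0;          // frames recorded so far

    FpsCounter(int frames) { endTimes = new double[frames]; }

    // Call once per frame with the frame's end time in seconds. Returns the
    // average FPS over the last N frames, or 0 until the buffer has filled.
    double onFrameEnd(double now) {
        double fps = 0;
        if (filled == endTimes.length) {
            fps = endTimes.length / (now - endTimes[ptr]);
        }
        endTimes[ptr] = now;                // overwrite the oldest entry...
        ptr = (ptr + 1) % endTimes.length;  // ...so the next slot becomes oldest
        if (filled < endTimes.length) filled++;
        return fps;
    }
}
```

Feeding it the times from the 10-slot walkthrough (100, 105, ..., 150 seconds) returns 10 / (150 - 100) = .2 FPS, matching the example above.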

Additional Thoughts

  • The fountain drops dramatically increase render time. I don't see a way around this since all the drops move and rotate at every frame.
  • There is probably a more efficient way to do the multipass render. This application does not really benefit from it since all the objects have the same Y axis.
  • The emulator has terrible depth precision. There was always z-fighting. My phone did much better once the clip region shifting was implemented.
  • Using a VBO (GPU memory) for storing vertices gave an impressive performance boost. When rendering just the floor, the FPS doubled compared to using main memory buffers.
  • The bucket size chart is accurate based on this site. I used Excel to calculate\create the bar chart.
  • The 3D graphics were created using 3D Studio Max. The 2D graphics were created using Paint.Net (freeware).
  • The animation at the top of the walkthrough was created using DropBox (screen captures) and UnFREEz (gif creator). Both are freeware.
  • Please vote\comment. I appreciate any feedback you have.


"Share your knowledge. It's a way to achieve immortality." - Dalai Lama

And I think we're done. I hope you found this walkthrough useful. If you found any part confusing or if you think I missed something, please let me know so I can update this page.


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Mike Waddell
Architect, Aon Insurance
United States

Professional C# developer who dabbles in web and mobile development. Currently working with C#, SQL Server, and Angular.

I also spend too much time playing video games and watching movies.

Certifications in .Net, SQL Server, and Java
