## Introduction

Ants always find a way: I used to find them in the garden of my house, marching in ordered lines while carrying food to their nest. And they were indeed ordered: all the ants seemed to be following some path that my eyes weren't able to see. Later, I learned these paths weren't only invisible: they were also optimal. Which is kind of surprising, since ants have very limited sight and a very small brain.

The colony mechanism is as simple as it is surprising. As the ants traverse the terrain looking for food, they drop a chemical called pheromone. The ants that by chance find shorter paths can traverse them more times in the same amount of time, precisely because those paths are shorter - i.e., optimal - so they end up depositing more pheromone on them.

Pheromone is special for ants for two reasons: they can sense it, and they feel attracted to it. This means that ants will have a tendency to traverse paths that have stronger pheromone trails on them. The technical term for this is stigmergy, and it is the mechanism that allows this amazing behavior:

1. All the ants of the colony go after food. They randomly traverse the garden looking for a way to reach it.
2. A very lucky ant gets there first, before any other fellow ant. It grabs the food and returns it to the nest, depositing pheromone along the way.
3. New ants are assigned the mission of bringing food to the nest. These new ants find the pheromone deposited in step 2, so they follow that lead.
4. While the ant from step 2 keeps going back and forth between the food and the nest, the other unlucky ants are still trying to find their way.
5. Now the path discovered in step 2 is very pheromone-intensive, because it is being traversed several times by several ants. Even the ants that took inefficient paths in step 1 now take the path discovered in step 2, since it is so attractive (i.e., pheromone-intensive).
6. Now the whole colony follows the optimal path.

That means that the colony won’t starve, and that the system has converged to optimality.
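The mechanism above can be sketched in a few lines of code. The following is a hypothetical toy simulation of my own (not from any ACO library): two paths lead to the food, each ant picks one with probability proportional to its pheromone level, and the shorter path receives more pheromone per iteration because it is traversed faster. All names and parameter values here are illustrative assumptions.

```java
import java.util.Random;

public class TwoPathColony {

    // Returns the final share of pheromone on the short path (close to 1.0
    // means the colony has converged to the optimal path).
    public static double shortPathShare(int ants, int iterations, long seed) {
        double[] pheromone = {1.0, 1.0};  // index 0: short path, index 1: long path
        double[] length = {1.0, 2.0};     // the long path takes twice as long
        Random random = new Random(seed);
        for (int it = 0; it < iterations; it++) {
            for (int ant = 0; ant < ants; ant++) {
                // stigmergy: choose a path in proportion to its pheromone level
                double p = pheromone[0] / (pheromone[0] + pheromone[1]);
                int chosen = random.nextDouble() < p ? 0 : 1;
                // a shorter path means more round trips per unit of time,
                // hence more pheromone deposited on it
                pheromone[chosen] += 1.0 / length[chosen];
            }
            // evaporation keeps early random choices from dominating forever
            pheromone[0] *= 0.9;
            pheromone[1] *= 0.9;
        }
        return pheromone[0] / (pheromone[0] + pheromone[1]);
    }
}
```

Running this for a few hundred iterations, the short path accumulates an ever-larger share of the pheromone: the positive feedback loop is exactly what makes the real colony converge.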

## Background

But this is a programming article, not a biology one. In 1992, Marco Dorigo applied the strategy of the ants to the solution of optimization problems. There are problems in Computer Science where the search space is so vast that it is impossible to traverse it completely to find the optimal solution. So what Dorigo proposed was to build software ants that traverse this search space looking for an optimal solution, like the real ants do. These artificial ants don't guarantee that they will find the best solution, but they will find a solution that is good enough for our needs. Algorithms of this kind are called heuristics.

The optimization problems that can be solved in this fashion are vast: I can mention the Travelling Salesman Problem, the Knapsack Problem and even image processing tasks like image segmentation and clustering. And since Dorigo's proposal in 1992, several algorithms that follow the principle of using artificial ants have appeared, like Ant System, Ant Colony System, Max-Min Ant System and many others. These algorithms have been grouped in a meta-heuristic which is now known as Ant Colony Optimization (ACO).

## Using the Code

So, with Ant Colony Optimization, you can solve several problems using several algorithms. In order to save myself some lines of code, I have developed a little framework that allows an easy implementation of ACO algorithms in Java, called Isula (available for free here). I will use some snippets from Isula to demonstrate how to implement Ant Algorithms in Java.

This is a portion of the `solveProblem()` method of the `AcoProblemSolver` class:

```java
while (iteration < numberOfIterations) {
    antColony.clearAntSolutions();
    antColony.buildSolutions(environment, configurationProvider);
    applyDaemonActions(DaemonActionType.AFTER_ITERATION_CONSTRUCTION);
    updateBestSolution(environment);
    iteration++;
}
```

This is the main loop of any Ant Colony Algorithm. We need to establish a stop condition for our program, so we define a parameter to set the maximum number of iterations our ants will be traversing the problem space, which in the ACO domain is usually a graph. On each iteration, four things happen:

- Our ants prepare themselves to build a solution: This can be a route, a cluster or any other entity which our problem requires us to find.
- The colony will traverse the problem graph and every ant will build a solution. Some of them will be good, some of them will be bad.
- There are some global actions that can be required in our algorithm. For example, we can rank our ants and only allow some of them to deposit pheromone. Those procedures are called Daemon Actions in ACO terminology.
- We store the best solution built so far. When we reach the maximum number of iterations, that will be the output of our program.
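To make the daemon-action idea from the list above more concrete, here is a small sketch of my own in the spirit of rank-based pheromone update: after an iteration, sort the ants by solution quality and let only the best few reinforce the trail. The class and method names are illustrative assumptions, not Isula's API.

```java
import java.util.Arrays;
import java.util.Comparator;

public class RankBasedDeposit {

    // Returns the pheromone amount each ant is allowed to deposit:
    // the top `elite` ants deposit an amount proportional to their rank,
    // the rest deposit nothing. Lower cost means a better solution.
    public static double[] depositAmounts(double[] solutionCosts, int elite) {
        Integer[] order = new Integer[solutionCosts.length];
        for (int i = 0; i < order.length; i++) {
            order[i] = i;
        }
        // sort ant indices from best (lowest cost) to worst
        Arrays.sort(order, Comparator.comparingDouble(i -> solutionCosts[i]));
        double[] deposit = new double[solutionCosts.length];
        for (int rank = 0; rank < elite && rank < order.length; rank++) {
            deposit[order[rank]] = (double) (elite - rank) / elite;
        }
        return deposit;
    }
}
```

A daemon action like this runs once per iteration, over the whole colony, which is what distinguishes it from the per-ant policies we will see next.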

Now let's take a deeper look into how the colony builds solutions. This is a snippet from the `buildSolutions()` method of the `AntColony` class:

```java
for (Ant<C, E> ant : hive) {
    while (!ant.isSolutionReady(environment)) {
        ant.selectNextNode(environment, configurationProvider);
    }
    ant.doAfterSolutionIsReady(environment, configurationProvider);
}
```

Some things to note there:

- The `AntColony` class manages a number of ants, which are stored in the `hive` attribute. How many ants to use depends on your algorithm's characteristics, or it can be a parameter.
- Each ant of the colony will add components to its solution until the solution is considered finished. For example, if we're trying to cluster an image, we will continue adding pixels to the cluster until we have covered every pixel in our image.
- The selection of the next component is stochastic: the ant will choose according to some probabilities. One key factor in its decision is the amount of pheromone related to a solution component, but several others can be considered, depending on the algorithm you are using.
- Once an ant has finished its solution, some things might happen. You can improve the generated solution with a local search procedure, for example, or you can start depositing pheromone. Once again, that depends on the nature of your algorithm.
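The stochastic selection step described above is typically a roulette-wheel draw. Here is a minimal sketch of that idea (my own illustration, not Isula code), simplified so that only pheromone drives the probabilities:

```java
import java.util.Random;

public class RouletteSelection {

    // Picks the index of the next solution component with probability
    // proportional to its pheromone value (roulette-wheel selection).
    public static int selectNext(double[] pheromone, Random random) {
        double total = 0.0;
        for (double p : pheromone) {
            total += p;
        }
        double shot = random.nextDouble() * total;  // spin the wheel
        double cumulative = 0.0;
        for (int i = 0; i < pheromone.length; i++) {
            cumulative += pheromone[i];
            if (shot < cumulative) {
                return i;
            }
        }
        return pheromone.length - 1;  // guard against rounding error
    }
}
```

In a real algorithm, the weight of each component is usually the pheromone value combined with a problem-specific heuristic value, but the wheel itself works the same way.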

And the most important class when you are working on an ant algorithm is the `Ant` itself. Here's a snippet of the `Ant` class taken from Isula:

```java
public abstract class Ant<C, E extends Environment> {

    private int currentIndex = 0;
    private List<AntPolicy<C, E>> policies = new ArrayList<AntPolicy<C, E>>();
    private C[] solution;
    private List<C> visitedComponents = new ArrayList<C>();

    // ...
}
```

Notice this:

- This is a parameterized class. The class parameter `C` represents a component in our solution: in the image clustering example, this class would represent a pixel and its assigned cluster.
- The solution is an array of these components. This Java `Ant` controls how the solution is built by keeping the current component index in the array.
- The `Ant` behavior is also determined by the algorithm you are using or proposing. For example, the specific rule for selecting a component varies greatly from algorithm to algorithm. This specific ant behavior is modeled as an `AntPolicy` in the Isula Framework.
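To show how the `currentIndex` and the solution array interact, here is a stripped-down sketch of an ant (again my own illustration, much simpler than the real Isula class, with illustrative method names):

```java
import java.util.ArrayList;
import java.util.List;

public class SimpleAnt<C> {

    private int currentIndex = 0;
    private final C[] solution;
    private final List<C> visitedComponents = new ArrayList<>();

    @SuppressWarnings("unchecked")
    public SimpleAnt(int solutionSize) {
        this.solution = (C[]) new Object[solutionSize];
    }

    // Places a component at the current position and advances the index.
    public void visitNode(C component) {
        solution[currentIndex++] = component;
        visitedComponents.add(component);
    }

    // The solution is ready once every position in the array is filled.
    public boolean isSolutionReady() {
        return currentIndex == solution.length;
    }

    public List<C> getVisited() {
        return visitedComponents;
    }
}
```

The main loop we saw earlier keeps calling the selection step until `isSolutionReady()` returns true, which is exactly the role `currentIndex` plays here.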

Let's put our colony to work on a specific optimization problem. The Flow Shop Scheduling problem tries to find an optimal ordering of a number of jobs, given that each job needs to be processed on several machines, requiring a specific amount of time on each machine depending on the job's nature.

We will try to find an ordering of jobs - a permutation, in combinatorial terms - such that we finish processing all the jobs in the minimum amount of time. Depending on the number of jobs and machines, solving this problem exactly can take considerable time, so we rely on our Isula-based ants to find a good-enough solution:
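The quantity being minimized is the makespan: the moment the last job leaves the last machine. Here is a small helper of my own (not part of Isula) that evaluates a candidate permutation, assuming the processing times come as a `times[job][machine]` matrix:

```java
public class FlowShopMakespan {

    // Computes the makespan of processing the jobs in the given order:
    // jobs flow through the machines in sequence, and a job can start on
    // machine m only when the machine is free AND the job is done on m - 1.
    public static int makespan(int[][] times, int[] permutation) {
        int machines = times[0].length;
        int[] finish = new int[machines];  // finish time of the previous job per machine
        for (int job : permutation) {
            finish[0] += times[job][0];
            for (int m = 1; m < machines; m++) {
                finish[m] = Math.max(finish[m], finish[m - 1]) + times[job][m];
            }
        }
        return finish[machines - 1];
    }
}
```

In an ACO setting, a value like this is what each ant's finished permutation is scored with, and better (lower) makespans translate into stronger pheromone deposits.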

```java
FlowShopProblemSolver problemSolver;
ProblemConfiguration configurationProvider = new ProblemConfiguration();
problemSolver = new FlowShopProblemSolver(graph, configurationProvider);
configurationProvider.setEnvironment(problemSolver.getEnvironment());

problemSolver.addDaemonActions(
        new StartPheromoneMatrixForMaxMin<Integer, FlowShopEnvironment>(),
        new FlowShopUpdatePheromoneMatrix());
problemSolver.getAntColony().addAntPolicies(
        new PseudoRandomNodeSelection<Integer, FlowShopEnvironment>(),
        new ApplyLocalSearch());

problemSolver.solveProblem();
showSolution(graph, problemSolver);
```

That's a snippet from an Isula-based Java program available here, which is an adaptation of the algorithm proposed by Thomas Stützle in the paper "An ant approach to the flow shop problem". Take a look at the following:

- The `FlowShopProblemSolver` is an extension of the `AcoProblemSolver` class we reviewed before, with some minor modifications to suit the Flow Shop problem. We call the `solveProblem()` method to trigger the generation of the solution.
- The paper proposes an adaptation of the well-known Max-Min Ant System algorithm, so we are reusing some code already available for that algorithm, like the daemon action for initializing the pheromone matrix. The Update Pheromone Matrix policy was also reused, but some adaptations were required.
- A very popular node-selection rule was proposed by Dorigo in his Ant Colony System algorithm. Stützle used the same rule in his paper, so we can reuse it directly from the implementation available in the Isula code.
- The author proposes a Local Search procedure after an ant finishes building its solution. We developed a custom Ant Policy for that task.
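The node-selection rule mentioned above is the pseudo-random proportional rule from Ant Colony System. The following is a simplified sketch of that rule (my own code, weighting by pheromone only; the real rule also multiplies in a heuristic value): with probability `q0` the ant greedily exploits the strongest trail, otherwise it explores via a roulette-wheel draw.

```java
import java.util.Random;

public class PseudoRandomRule {

    // q0 in [0, 1] balances exploitation vs. exploration.
    public static int selectNext(double[] pheromone, double q0, Random random) {
        if (random.nextDouble() < q0) {
            // exploitation: take the component with the strongest trail
            int best = 0;
            for (int i = 1; i < pheromone.length; i++) {
                if (pheromone[i] > pheromone[best]) {
                    best = i;
                }
            }
            return best;
        }
        // exploration: roulette-wheel selection proportional to pheromone
        double total = 0.0;
        for (double p : pheromone) {
            total += p;
        }
        double shot = random.nextDouble() * total;
        double cumulative = 0.0;
        for (int i = 0; i < pheromone.length; i++) {
            cumulative += pheromone[i];
            if (shot < cumulative) {
                return i;
            }
        }
        return pheromone.length - 1;
    }
}
```

A high `q0` makes the colony converge quickly but risks getting stuck on an early path; a low `q0` keeps the ants exploring, which is the same trade-off the garden ants face.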

We will try our algorithm with a problem instance of 20 jobs and 5 machines (dataset available here). Here’s a visual representation of the solution found:

## Points of Interest

I hope this article helps you get a feeling for what you can accomplish with Ant Colony algorithms, and how to do it in the Java programming language. I also invite you to take a look at the Isula Framework and try to use the code already available. I'd like to finish with some Java projects that use ACO algorithms to solve optimization problems: