**The particle swarm algorithm is an "adaptive algorithm developed according to a social-psychological metaphor" as described by Dr. Russell C. Eberhart and Dr. James Kennedy.**

Position update and velocity update are the two primary operators in the particle swarm. During each generation, every particle is moved toward the global best position and its own previous best position. For each particle, a new velocity value is calculated at each iteration from its distance to its previous best position, its distance to the global best position, and its current velocity. The particle's next position in the search space is then calculated using this new velocity. This process of refining the best particle position continues for a fixed number of iterations or until a minimum error is obtained - a 'trial and error' sequence.

## Basic Principle

The particle swarm algorithm is a population-based optimization algorithm inspired by the flocking behaviour of birds in flight. A group of potential solutions is known as a "swarm", and each individual solution inside the swarm is known as a "particle". The particles fly through the search domain guided by the experience of the swarm and their own individual experience.

One particular Particle Swarm Optimization Algorithm has been suggested in a paper by Tripathi P.K., et al.^{1}, which describes its basic principle as follows: in a d-dimensional search space,

'the ith particle is related to the individual experience attribute Pi = (pi,1, pi,2, . . . , pi,d), the velocity attribute Vi = (vi,1, vi,2, . . . , vi,d) and the position attribute Xi = (xi,1, xi,2, . . . , xi,d)'.

The position attribute indicates where the particle is, and the velocity attribute imparts motion to it. The parameter Pi records the coordinates of the particle's best individual performance, while the index g identifies the particle with the best performance across the whole swarm. The position and velocity attributes are updated at each iteration to guide the particle's movement towards the optimum solution.

The updated position and velocity attributes equations are given as follows:

vi,j = w vi,j + c1 r1 (pi,j − xi,j) + c2 r2 (pg,j − xi,j)   (1)

xi,j = xi,j + vi,j (2)

- where j = 1, . . . , d and w, c1, c2 ≥ 0
- w - inertia weight
- c1 and c2 - acceleration coefficients, which enable the particles to regulate the cognitive and social terms in equation (1)
- r1 and r2 - random numbers in the range [0, 1] that impart randomness to the movement of the swarm.

A larger value of c1 promotes exploration, while a larger value of c2 promotes exploitation.*

^{*Equations adapted from Tripathi P.K. et al. Adaptive Multi-objective Particle Swarm Optimization Algorithm. 2007 IEEE Congress on Evolutionary Computation (CEC 2007). }
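The update loop built from equations (1) and (2) can be sketched in Python. This is a minimal illustration, not the implementation from the cited paper; the function name `pso`, the parameter defaults, and the box-bounded search space are all assumptions made for the example.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over a box-bounded d-dimensional space
    using the velocity/position updates of equations (1) and (2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, d))   # positions Xi
    v = np.zeros((n_particles, d))                   # velocities Vi
    p = x.copy()                                     # personal bests Pi
    p_val = np.apply_along_axis(objective, 1, x)
    g = p[np.argmin(p_val)].copy()                   # global best (index g)
    for _ in range(n_iter):
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # equation (1)
        x = np.clip(x + v, lo, hi)                           # equation (2)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < p_val                  # update personal bests
        p[improved], p_val[improved] = x[improved], vals[improved]
        g = p[np.argmin(p_val)].copy()           # update global best
    return g, p_val.min()

# Usage: minimise the sphere function f(x) = sum(x^2), optimum at the origin.
best_x, best_f = pso(lambda z: float(np.sum(z**2)),
                     (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```

With a convergent parameter choice such as w = 0.7 and c1 = c2 = 1.5, the swarm contracts around the origin on this test function within a few dozen iterations.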

## Types and Applications

The following are the types of the particle swarm algorithm:

### Binary Particle Swarm Optimizer

It functions on the principle that the probability of an individual's decision being true or false, yes or no, or any other binary outcome is influenced by social and personal factors.

P(xid(t) = 1) = f(xid(t − 1), vid(t − 1), pid, pgd)*

P(xid(t) = 0) = 1 − P(xid(t) = 1)*

^{*Equation adapted from Settles M. An Introduction to Particle Swarm Optimization. Department of Computer Science, University of Idaho, Moscow. }
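The decision rule above can be sketched by squashing each velocity component through a sigmoid and sampling a bit with that probability, as in Kennedy and Eberhart's discrete binary PSO. The function name, parameter defaults, and the one-max test problem below are illustrative assumptions, not taken from the cited sources.

```python
import numpy as np

def binary_pso(objective, d, n_particles=20, n_iter=50,
               c1=2.0, c2=2.0, v_max=4.0, seed=0):
    """Maximise `objective` over bit strings of length d.
    Each bit is 1 with probability sigmoid(v_id), i.e. P(x_id(t) = 1)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, d))    # bit-string positions
    v = np.zeros((n_particles, d))
    p = x.copy()                                     # personal bests
    p_val = np.array([objective(row) for row in x])
    g = p[np.argmax(p_val)].copy()                   # global best
    for _ in range(n_iter):
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        v = v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        v = np.clip(v, -v_max, v_max)        # keep sigmoid from saturating
        prob = 1.0 / (1.0 + np.exp(-v))      # P(x_id(t) = 1)
        x = (rng.random((n_particles, d)) < prob).astype(int)
        vals = np.array([objective(row) for row in x])
        improved = vals > p_val
        p[improved], p_val[improved] = x[improved], vals[improved]
        g = p[np.argmax(p_val)].copy()
    return g, p_val.max()

# Usage: "one-max" - maximise the number of 1 bits in a 16-bit string.
best_bits, best_score = binary_pso(lambda bits: int(bits.sum()), d=16)
```

Clamping the velocity to ±v_max keeps each bit's flip probability away from 0 and 1, which preserves some exploration late in the run.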

### Standard Particle Swarm Optimizer

The functional parameters can be considered as points in real-number space. The particles move through this space, which is heterogeneous with respect to fitness.

xid(t) = f(xid(t − 1), vid(t − 1), pid, pgd)*

vid(t) = vid(t − 1) + c1 ϕ1 (pid − xid(t − 1)) + c2 ϕ2 (pgd − xid(t − 1))*

xid(t) = xid(t − 1) + vid(t)*

^{*Equation adapted from Settles M. An Introduction to Particle Swarm Optimization. Department of Computer Science, University of Idaho, Moscow. }

### Particle Swarm Optimizer with Inertia

The previous velocity from the standard velocity equation is multiplied by an inertia weight, which is typically reduced linearly over the run. A non-zero inertia weight enables a particle that was moving in a given direction during the previous iteration to continue along the same path.

xid(t) = f(w(t), xid(t − 1), vid(t − 1), pid, pgd)*

vid(t) = w(t) · vid(t − 1) + c1 ϕ1 (pid − xid(t − 1)) + c2 ϕ2 (pgd − xid(t − 1))*

xid(t) = xid(t − 1) + vid(t)*

### Particle Swarm Optimizer with Constriction Coefficient

In this type, particle swarm interactions are modelled as a system of linear equations. The use of a constriction coefficient results in particle convergence over time.
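A commonly cited form of the constriction coefficient, due to Clerc and Kennedy, is χ = 2 / |2 − ϕ − √(ϕ² − 4ϕ)| with ϕ = c1 + c2 > 4; the constricted velocity update is v = χ(v + c1 ϕ1 (p − x) + c2 ϕ2 (g − x)). The small sketch below computes χ; the function name is an assumption made for the example.

```python
import math

def constriction_coefficient(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient
    chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| for phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction_coefficient()  # common choice c1 = c2 = 2.05 gives chi ~ 0.73
```

Multiplying the whole velocity update by χ < 1 damps the oscillations of each particle around its attractors, which is what guarantees convergence without an explicit velocity clamp.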

The particle swarm algorithm has the following applications:

### Job Scheduling on Computational Grids

Grid computing plays a major role in satisfying growing computational needs, as grids offer intelligent functions for grid service marketing, collaboration, security, resource management and so on. A key role of computational grids, however, is load sharing of computational jobs.

### Data Mining

Although particle swarm optimization and data mining have little in common, they can be used together to develop a method that delivers results unlike other methods, which tend to be complicated and expensive.

## References

^{1}Tripathi P.K. et al. Adaptive Multi-objective Particle Swarm Optimization Algorithm. 2007 IEEE Congress on Evolutionary Computation (CEC 2007).
- Settles M. An Introduction to Particle Swarm Optimization. Department of Computer Science, University of Idaho, Moscow.
- Kennedy J. and Eberhart R. C. A discrete binary version of the particle swarm algorithm, 1997.