A Tutorial on Optimizing Particle Swarms in Python

There are several approaches that can be taken to maximize or minimize a function in order to find its optimal value, and no single one of them is ideal for every case; each has its own advantages and limitations. Particle Swarm Optimization (PSO) is one such optimization technique, belonging to the field of nature-inspired computing. It is an algorithm that searches for the best solution in a search space in a simple way, optimizing a problem by iteratively trying to improve a candidate solution against a given quality measure. In this article, we will discuss particle swarm optimization in detail: how it works and its different variations. We will also learn the practical implementation of PSO using the PySwarms Python package. We will cover the following main points in this article.

Contents

  1. Particle Swarm Optimization (PSO)
  2. Internal workings of PSO
  3. Variants of PSO
  4. Implementing PSO Using PySwarms

Let’s start the discussion by understanding the Particle Swarm Optimization (PSO) algorithm.


Particle Swarm Optimization (PSO)

Several studies on the social behavior of animal groups were carried out in the early 1990s. These investigations revealed that some creatures in a group, namely birds and fish, are able to share information among themselves, and that this ability gives them a significant survival advantage. Inspired by this research, Kennedy and Eberhart introduced the PSO algorithm in 1995, a metaheuristic algorithm suitable for the optimization of nonlinear continuous functions. The algorithm draws on the concept of swarm intelligence, which is commonly observed in groups of animals such as flocks, herds, and schools.

As noted in the original study, a school of fish or a flock of birds moving as a group “can benefit from the experience of all the other members”. In other words, if a bird flies around at random in search of food, all the birds in the flock can share their findings and help the whole flock get the best hunt. By imitating the movement of a flock of birds, we can treat each bird as an agent that helps us search a large solution space, with the best solution found by the flock being the best solution in that space.

Internal workings of PSO

Researchers distinguish two kinds of swarm behavior: exploratory behavior (searching a larger part of the search space) and exploitative behavior (searching a smaller region of the search space to close in on a possibly local optimum). Since the creation of PSO, researchers have argued that the PSO algorithm and its parameters must be designed to strike an appropriate balance between exploration and exploitation, in order to avoid premature convergence to a local optimum while still ensuring a good rate of convergence to the optimum.
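
To make the role of these parameters concrete, here is a minimal sketch of the standard PSO velocity and position update for a single particle; the function name and default values are illustrative, not taken from any particular library:

import numpy as np

def update_particle(x, v, p_best, g_best, w=0.9, c1=0.5, c2=0.3, rng=np.random):
    """One PSO step for a single particle.
    w controls how much of the previous velocity is kept (exploration),
    while c1 and c2 pull the particle towards its personal best and the
    swarm's global best (exploitation)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new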

Convergence

In PSO, regardless of how the swarm operates, convergence to a local optimum occurs when all personal bests p, or alternatively the best-known position of the swarm g, approach a local optimum of the problem.

The ability of a PSO algorithm to explore and exploit can be affected by its topological structure; that is, with a different structure, the speed of convergence of the algorithm and its ability to avoid premature convergence on the same optimization problem will differ, because the topological structure determines the speed and direction in which each particle shares search information. The two most common topological structures are the global star and the local ring.

A PSO with a global star structure, in which all particles are connected, has the shortest average distance in the swarm, while a PSO with a local ring structure, in which each particle is connected to its two nearest neighbors, has the highest average distance in the swarm.

The experimental study examines the two commonly used architectures, the global star structure (Figure 1a) and the local ring structure (Figure 1b), with 16 particles in each group. It should be emphasized that in the local structure a particle's neighbors are determined mainly by the particle index.
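
PySwarms, which we use later in this article, provides both topologies out of the box: GlobalBestPSO uses the global star and LocalBestPSO uses the local ring. A minimal ring-topology run (the parameter values here are illustrative) might look like this:

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

# Local ring topology: each particle shares information only with its
# k nearest neighbours, measured with the Minkowski p-norm
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
optimizer = ps.single.LocalBestPSO(n_particles=16, dimensions=2, options=options)
cost, pos = optimizer.optimize(fx.sphere, iters=1000)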

Adaptive mechanism

An adaptive mechanism can be introduced so that there is no need to trade off between convergence (“exploitation”) and divergence (“exploration”). Adaptive Particle Swarm Optimization (APSO) offers better search efficiency than standard PSO: with faster convergence, APSO can perform global searches across the entire search space.

It allows the inertia weight, acceleration coefficients, and other computational parameters to be adjusted at run time, which increases the effectiveness and efficiency of the search. APSO can also act on the globally best particle to jump out of likely local optima.
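
A full APSO implementation estimates the swarm's evolutionary state and is beyond the scope of this article, but the simplest illustration of adapting a parameter over time, a linearly decreasing inertia weight, can be sketched as follows (the schedule and bounds are illustrative assumptions, not part of any specific APSO paper):

def inertia_weight(iteration, max_iters, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight over the run: a large w early on
    favours exploration, a small w later favours exploitation."""
    return w_max - (w_max - w_min) * iteration / max_iters

# Example: the weight used at iteration 250 of a 1000-iteration run
w = inertia_weight(250, 1000)  # 0.775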

Variants of PSO

Even a simple PSO algorithm can have many variations. For example, particles and velocities can be initialized in different ways (e.g. starting with zero velocities), the personal bests pi and the global best g can be updated only after the whole swarm has been updated, and so on.
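
As a small illustration of the initialization choices mentioned above, the sketch below draws random positions inside given bounds and starts all velocities at zero (the bounds and sizes are arbitrary):

import numpy as np

n_particles, dimensions = 10, 2
lower, upper = -5.0, 5.0

# One common choice: random positions inside the bounds, zero initial velocities
positions = np.random.uniform(lower, upper, size=(n_particles, dimensions))
velocities = np.zeros((n_particles, dimensions))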

Gradient PSO

To construct gradient-based PSO algorithms, the ability of the PSO algorithm to efficiently explore many local minima can be combined with the ability of gradient-based local search algorithms to efficiently calculate an accurate local minimum.

In gradient PSO algorithms, the PSO algorithm is used to explore multiple local minima and to discover a point in the basin of attraction of a deep local minimum. The deep local minimum is then located accurately using efficient gradient-based local search techniques.
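
As a sketch of this idea (not a built-in feature of PySwarms), the best position returned by a PSO run can be handed to a gradient-based local optimizer such as scipy.optimize.minimize for refinement; the value of pos below is only a placeholder:

import numpy as np
from scipy.optimize import minimize

def sphere(x):
    """Objective used for the local refinement step."""
    return np.sum(x ** 2)

# 'pos' stands in for the best position found by a PSO run
pos = np.array([0.03, -0.02])

# A gradient-based local search polishes the PSO result into an accurate local minimum
result = minimize(sphere, x0=pos, method='BFGS')
print(result.x, result.fun)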

Hybrid PSO

In order to improve optimization performance, newer and more advanced PSO variants have been introduced. One line of work develops hybrid optimization approaches that combine PSO with other optimizers, for example combining PSO with biogeography-based optimization and incorporating an effective learning mechanism.

Implementing particle swarm optimization using PySwarms

PySwarms is a Python-based tool for particle swarm optimization. It is used by researchers, practitioners, and students of swarm intelligence who want to apply PSO to their problems through a high-level declarative interface. PySwarms offers basic optimization with PSO as well as ways to interact with swarm optimizations.

PySwarms implements particle swarm optimization techniques at a high level, and as a result it aims to be user-friendly and adaptable. Its support modules can also help you with your optimization problem.


In this section, we are going to implement the global-best optimizer using PySwarms’ functional API, pyswarms.single.GlobalBestPSO. We are also going to plot the function in 2D and 3D.

PySwarms can be installed directly with pip install pyswarms

Optimize the Sphere function
# Import PySwarms
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

We will optimize the sphere function. Let’s put some arbitrary parameters in our optimizer for now. At a minimum, optimization takes three steps:

  • Set the hyperparameters to configure the swarm, as a dict.
  • Pass the dictionary with the relevant entries to create an instance of the optimizer.
  • Invoke the optimize() method, and tell it to save the best cost and the best position in a variable.
# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
cost, pos = optimizer.optimize(fx.sphere, iters=1000)

This will run the optimizer for 1000 iterations before returning the best cost and best position of the swarm.
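
Before visualizing the particles themselves, it is often useful to check how the best cost evolved over the run. PySwarms keeps this in the optimizer’s cost_history attribute and ships a plot_cost_history helper:

import matplotlib.pyplot as plt
from pyswarms.utils.plotters import plot_cost_history

# Plot the best cost recorded at each iteration of the run above
plot_cost_history(cost_history=optimizer.cost_history)
plt.show()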

Visualize the function

PySwarms includes tools to visualize the behavior of your swarm. These are built on top of matplotlib, resulting in user-friendly and highly customizable plots. The plotters module has two animation methods: plot_contour() and plot_surface(). As their names suggest, these methods plot the particles in 2D or 3D space.

In order to plot the sphere function, we need to add meshes to our swarm. This allows us to see graphically where the particles are in relation to our objective function. By using the Mesher class, we can achieve this.

import matplotlib.pyplot as plt
from pyswarms.utils.plotters import plot_contour, plot_surface
from pyswarms.utils.plotters.formatters import Designer
from pyswarms.utils.plotters.formatters import Mesher

The pyswarms.utils.plotters.formatters module contains several formatters to customize your plots and visualizations. Besides Mesher, there is a Designer class for changing font sizes, figure size, and so on, as well as an Animator class for setting delays and repeats in the animation.
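
For example, a Designer and an Animator can be configured as below and passed to plot_contour() or plot_surface() through their designer= and animator= keyword arguments; the specific values are only illustrative:

from pyswarms.utils.plotters.formatters import Animator, Designer

# Illustrative settings: a square figure and a slower, non-repeating animation
d = Designer(figsize=(6, 6), title_fontsize='large', text_fontsize='medium')
a = Animator(interval=200, repeat=False)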

2D graphics
m = Mesher(func=fx.sphere)
# Make animation
animation = plot_contour(pos_history=optimizer.pos_history,
                         mesher=m,
                         mark=(0,0)) # Mark minima
animation.save('mymovie.mp4')
3D plot
# preprocessing
pos_history_3d = m.compute_history_3d(optimizer.pos_history)
# adjust the figure
d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)], label=['x-axis', 'y-axis', 'z-axis'])
# Make animation
animation3d = plot_surface(pos_history=pos_history_3d, 
                           mesher=m, designer=d,       
                           mark=(0,0,0))  # Mark minima
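
The returned object is a regular matplotlib animation, so, as in the 2D case, it can be saved to disk; writing an mp4 assumes a writer such as ffmpeg is available, and the filename is arbitrary:

# Save the 3D animation (requires ffmpeg for mp4 output)
animation3d.save('sphere3d.mp4', fps=10)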

Final words

In this article, we covered the theory behind PSO and how its internal mechanism works. We also looked at a few of its variants and at how the community has applied PSO in different areas. Finally, we got hands-on experience with PSO using the Python-based PySwarms library.
