How to Solve an Algorithm Problem? | With Examples


If you’re stuck on an algorithm problem and not sure how to proceed, this blog post is for you! We’ll go over some general tips on solving algorithm problems, and then work through specific examples step by step.

Table of contents:

  • Introduction
  • What are the 4 steps of algorithmic problem solving?
  • Conclusion and references


What is an Algorithm?

What is an algorithm? Put simply, an algorithm is a step-by-step procedure for solving a problem. Algorithms can be written in any programming language, but they all share some common characteristics. First and foremost, an algorithm is a sequence of tasks: the steps must be done in order, and each step depends on the results of the previous steps. Secondly, algorithms are deterministic: given the same input, the same algorithm will produce the same output every time. Finally, an algorithm’s efficiency is measured along two dimensions, time and space: how long it takes to run and how much memory it uses.


How to Solve an Algorithm Problem?

We’ll walk through the general steps for solving an algorithm problem.

First, it’s important to know the basics: every problem can be broken down into a sequence of smaller steps that can each be solved. This is known as the analysis of the algorithm. Sketch out the basic structure of your algorithm on paper or a whiteboard first to avoid confusion, and jot down notes about what each part does.

If a problem involves mathematical operations, conceptualizing it in terms of equations may be helpful. Break down the equation into its component parts and see if any of those particular pieces can be simplified or eliminated altogether. If so, that will lead to a solution for the whole equation.

Another strategy is to try reversing what initially seems like an impossible task. Algorithm problems often have stages where doing something one way results in an error message or produces no useful output at all. Reverse engineer those steps and see if anything productive comes out.

What are the 4 steps of algorithmic problem-solving? Example #1

Now that you know what an algorithm is, let’s jump into some examples and take a look at how these techniques can be put into practice…

In the following, we will use LeetCode problem #387, First Unique Character in a String:

1) Understand the problem

The goal of any algorithm is to solve a problem. When solving an algorithm problem, it is important to understand the problem and the steps involved in solving it. This understanding will allow you to correctly follow the instructions and complete the task. Answer the common questions to be sure that you really understand the problem: What does it do? What kind of input am I expecting? What kind of output should I receive? Are there any exceptions I should be aware of? And so on.

The goal of this challenge is to write a function that takes in a string and returns the index of the first letter in the string that does not repeat. For example, if the string is “nerd”, the function should return 0, because the first letter “n” is the first non-repeating letter in the string. If the string is “abcdefg”, the function should return 0, because the first letter “a” is the first non-repeating letter in the string.

If the string does not contain any non-repeating letters, the function should return -1. For example, if the input string is “abcabc”, the function should return -1, because no letter in the string is unique or non-repeating.

2) Break the problem down

When solving algorithm problems, breaking them down into smaller parts is usually the best way to go. Once you understand how each part works and interacts with the others, you can solve the problem more quickly.

To solve this challenge, we need to do the following:

  • Iterate over the letters in the input string, and for each letter, keep track of how many times it appears in the string. We can do this using an object, dictionary, or map, where the keys are the letters in the string, and the values are the counts for each letter.
  • Once we have counted the occurrences of each letter in the string, we need to find the index of the first letter that has a count of 1 (i.e. the first non-repeating letter). To do this, we can iterate over the letters in the string again, and for each letter, check if its count in the object/dictionary is 1. If it is, we return the index of the letter.
  • If we reach the end of the loop without finding any letter whose count is 1, we return -1 to indicate that no non-repeating letters were found.

3) Find your solution

We found one solution. The key steps are to keep track of the counts of each letter in the input string, and then to search for the first letter with a count of 1; a count of 1 means that letter appears only once in the string. These steps can be implemented in a variety of ways, depending on the language and tools you are using to solve the challenge.

We will demonstrate the solution in two programming languages, JavaScript and Python:

The source code in JavaScript:
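The original listing isn’t reproduced here, so below is a minimal sketch of the approach described above. The function name firstUniqChar follows LeetCode’s template for problem #387; the counts are kept in a plain object, but a Map would work just as well.

```javascript
// Sketch: index of the first non-repeating character
function firstUniqChar(s) {
  const counts = {};

  // First pass: count how many times each character appears
  for (const ch of s) {
    counts[ch] = (counts[ch] || 0) + 1;
  }

  // Second pass: return the index of the first character whose count is 1
  for (let i = 0; i < s.length; i++) {
    if (counts[s[i]] === 1) {
      return i;
    }
  }

  // No non-repeating character was found
  return -1;
}
```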

Here are some examples of how this function can be used:
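Assuming the sketch above, calling it on the strings from step 1 might look like this:

```javascript
console.log(firstUniqChar("nerd"));    // 0  -> "n" does not repeat
console.log(firstUniqChar("abcdefg")); // 0  -> "a" does not repeat
console.log(firstUniqChar("abcabc"));  // -1 -> every letter repeats
```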

The source code in Python:
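Again, the original listing isn’t shown here; a minimal Python sketch mirroring the JavaScript version might look like this (the snake_case name first_uniq_char is my own choice):

```python
# Sketch: index of the first non-repeating character, using a dictionary for the counts
def first_uniq_char(s: str) -> int:
    counts = {}

    # First pass: count occurrences of each character
    for ch in s:
        counts[ch] = counts.get(ch, 0) + 1

    # Second pass: return the index of the first character whose count is 1
    for i, ch in enumerate(s):
        if counts[ch] == 1:
            return i

    # No non-repeating character was found
    return -1


print(first_uniq_char("nerd"))    # 0
print(first_uniq_char("abcabc"))  # -1
```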

4) Check your solution

Checking your solution means asking yourself some questions: Can I write better code? By better I mean: does the code cover all possible inputs? Is it efficient? And by efficient, I mean using fewer computer resources where possible. If you’re comfortable with your answers, check whether there are other solutions out there for the same problem you solved; if any are found, go through them. I learned a lot by doing that. Also, get some feedback on your code; that way you’ll learn many ways of approaching a problem to solve it.

As mentioned above, we asked ourselves these questions, and it turned out the algorithm we wrote couldn’t handle all the different cases successfully. For example, the code can’t handle the word “Level”, where “L” and “l” should be treated as the same character.

So we need to address that in the following:

Now let’s revisit the code we wrote and see if we can come up with another solution that covers all the different cases of a character.

The source code in JavaScript:
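One way to cover the “Level” case (an assumption about what the original revised listing did) is to normalize the string to lowercase before counting, so that “L” and “l” are counted as the same character:

```javascript
// Revised sketch: case-insensitive counting
function firstUniqChar(s) {
  const lower = s.toLowerCase();
  const counts = {};

  for (const ch of lower) {
    counts[ch] = (counts[ch] || 0) + 1;
  }

  for (let i = 0; i < lower.length; i++) {
    if (counts[lower[i]] === 1) {
      return i;
    }
  }

  return -1;
}

console.log(firstUniqChar("Level")); // 2 -> "v" is the first letter that does not repeat
```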

Now that you’ve learned from the first example, let’s jump into another challenge and apply the same techniques we used above:

This example is LeetCode problem #125, Valid Palindrome.

Learn as much as you can about the problem: What is a palindrome? Ask yourself what input you expect and what output is expected.

A palindrome is a word, phrase, or sentence that reads the same backward as forward. We expect a string, and we can write a function that first cleans that string of any non-alphanumeric characters, then reverses it and compares it with the original (cleaned) string.

Here is a step-by-step explanation of the algorithm in plain English:

  • Convert all uppercase letters in the string to lowercase. This is done so that the case of the letters in the string does not affect the outcome of the comparison.
  • Remove all non-alphanumeric characters from the string. This is done because only alphanumeric characters are relevant for determining if a string is a palindrome.
  • Reverse the resulting string. This is done so that we can compare the reversed string with the original string.
  • Compare the reversed string with the original string. If they are the same, then the original string is a palindrome. Otherwise, it is not.

Here is an example of how this algorithm would work on the string “A man, a plan, a canal: Panama”:

  • The string is converted to lowercase, so it becomes “a man, a plan, a canal: panama”.
  • All non-alphanumeric characters are removed, so the string becomes “amanaplanacanalpanama”.
  • The string is reversed, so it becomes “amanaplanacanalpanama”.
  • The reversed string is compared with the original string, and since they are the same, the function returns true, indicating that the original string is a palindrome.

According to the steps we wrote in the previous stage, let’s put them into action and code it up.

The source code in Python:
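A compact Python sketch of the clean–reverse–compare approach described above (the original listing isn’t reproduced here):

```python
# Sketch: lowercase, strip non-alphanumeric characters, reverse, compare
def is_palindrome(s: str) -> bool:
    cleaned = "".join(ch for ch in s.lower() if ch.isalnum())
    return cleaned == cleaned[::-1]


print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("race a car"))                      # False
```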

We can make it more efficient by using the two-pointer method. Let’s break it down into a few points below; a code sketch follows the list:

  • Create left and right pointers (represented by indices).
  • Move each pointer toward the middle.
  • While moving, check that the characters at the two pointers are the same.
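Here is a sketch of that two-pointer approach; it compares characters in place instead of building a reversed copy, so it uses constant extra memory:

```python
# Sketch: two pointers moving toward the middle
def is_palindrome(s: str) -> bool:
    left, right = 0, len(s) - 1

    while left < right:
        if not s[left].isalnum():        # skip non-alphanumeric on the left
            left += 1
        elif not s[right].isalnum():     # skip non-alphanumeric on the right
            right -= 1
        elif s[left].lower() == s[right].lower():
            left += 1                    # characters match: move both pointers inward
            right -= 1
        else:
            return False                 # mismatch: not a palindrome

    return True
```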


Conclusion and references

There are many resources available to help you out, and with more practice, you’ll be able to solve many of the algorithm problems that come your way. FreeCodeCamp’s video course on algorithms and data structures is one great resource.

It is important to keep a cool head and not get bogged down in frustration. Algorithm problems can be difficult, but with patience and perseverance, they can be solved.

When you are solving an algorithm problem, it is important to check your solution against other solutions, if possible, to see how others approach the same problem. This will help you retain and understand the logic behind the different solutions; you’ll need this kind of knowledge when solving problems in the future.

By following the steps outlined in this article, you will be able to solve algorithm problems with ease. Remember to keep a notebook or spreadsheet with all of your solutions so that you can revisit them later; that way you will gain maximum retention of the logic.

If you want to learn more about algorithms and programming , sign up for our mailing list. We’ll send you tips, tricks, and resources to help you improve your skills.


Ahmed Radwan

Reach out if you want to join me and write articles with the nerds 🙂



Ideas Made to Matter

How to use algorithms to solve everyday problems

Kara Baskin

May 8, 2017

How can I navigate the grocery store quickly? Why doesn’t anyone like my Facebook status? How can I alphabetize my bookshelves in a hurry? Apple data visualizer and MIT System Design and Management graduate Ali Almossawi solves these common dilemmas and more in his new book, “Bad Choices: How Algorithms Can Help You Think Smarter and Live Happier,” a quirky, illustrated guide to algorithmic thinking.

For the uninitiated: What is an algorithm? And how can algorithms help us to think smarter?

An algorithm is a process with unambiguous steps that has a beginning and an end, and does something useful.

Algorithmic thinking is taking a step back and asking, “If it’s the case that algorithms are so useful in computing to achieve predictability, might they also be useful in everyday life, when it comes to, say, deciding between alternative ways of solving a problem or completing a task?” In all cases, we optimize for efficiency: We care about time or space.

Note the mention of “deciding between.” Computer scientists do that all the time, and I was convinced that the tools they use to evaluate competing algorithms would be of interest to a broad audience.

Why did you write this book, and who can benefit from it?

All the books I came across that tried to introduce computer science involved coding. My approach to making algorithms compelling was focusing on comparisons. I take algorithms and put them in a scene from everyday life, such as matching socks from a pile, putting books on a shelf, remembering things, driving from one point to another, or cutting an onion. These activities can be mapped to one or more fundamental algorithms, which form the basis for the field of computing and have far-reaching applications and uses.

I wrote the book with two audiences in mind. One, anyone, be it a learner or an educator, who is interested in computer science and wants an engaging and lighthearted, but not a dumbed-down, introduction to the field. Two, anyone who is already familiar with the field and wants to experience a way of explaining some of the fundamental concepts in computer science differently than how they’re taught.

I’m going to the grocery store and only have 15 minutes. What do I do?

Do you know what the grocery store looks like ahead of time? If you know what it looks like, it determines your list. How do you prioritize things on your list? Order the items in a way that allows you to avoid walking down the same aisles twice.

For me, the intriguing thing is that the grocery store is a scene from everyday life that I can use as a launch pad to talk about various related topics, like priority queues and graphs and hashing. For instance, what is the most efficient way for a machine to store a prioritized list, and what happens when the equivalent of you scratching an item from a list happens in the machine’s list? How is a store analogous to a graph (an abstraction in computer science and mathematics that defines how things are connected), and how is navigating the aisles in a store analogous to traversing a graph?

Nobody follows me on Instagram. How do I get more followers?

The concept of links and networks, which I cover in Chapter 6, is relevant here. It’s much easier to get to people whom you might be interested in and who might be interested in you if you can start within the ball of links that connects those people, rather than starting at a random spot.

You mention Instagram: There, the hashtag is one way to enter that ball of links. Tag your photos, engage with users who tag their photos with the same hashtags, and you should be on your way to stardom.

What are the secret ingredients of a successful Facebook post?

I’ve posted things on social media that have died a sad death and then posted the same thing at a later date that somehow did great. Again, if we think of it in terms that are relevant to algorithms, we’d say that the challenge with making something go viral is really getting that first spark. And to get that first spark, a person who is connected to the largest number of people who are likely to engage with that post, needs to share it.

With [my first book], “Bad Arguments,” I spent a month pouring close to $5,000 into advertising for that project with moderate results. And then one science journalist with a large audience wrote about it, and the project took off and hasn’t stopped since.

What problems do you wish you could solve via algorithm but can’t?

When we care about efficiency, thinking in terms of algorithms is useful. There are cases when that’s not the quality we want to optimize for — for instance, learning or love. I walk for several miles every day, all throughout the city, as I find it relaxing. I’ve never asked myself, “What’s the most efficient way I can traverse the streets of San Francisco?” It’s not relevant to my objective.

Algorithms are a great way of thinking about efficiency, but the question has to be, “What approach can you optimize for that objective?” That’s what worries me about self-help: Books give you a silver bullet for doing everything “right” but leave out all the nuances that make us different. What works for you might not work for me.

Which companies use algorithms well?

When you read that the overwhelming majority of the shows that users of, say, Netflix, watch are due to Netflix’s recommendation engine, you know they’re doing something right.



Problem Solving and Algorithms

Learn a basic process for developing a solution to a problem. Nothing in this chapter is unique to using a computer to solve a problem. This process can be used to solve a wide variety of problems, including ones that have nothing to do with computers.

Problems, Solutions, and Tools

I have a problem! I need to thank Aunt Kay for the birthday present she sent me. I could send a thank you note through the mail. I could call her on the telephone. I could send her an email message. I could drive to her house and thank her in person. In fact, there are many ways I could thank her, but that's not the point. The point is that I must decide how I want to solve the problem, and use the appropriate tool to implement (carry out) my plan. The postal service, the telephone, the internet, and my automobile are tools that I can use, but none of these actually solves my problem. In a similar way, a computer does not solve problems, it's just a tool that I can use to implement my plan for solving the problem.

Knowing that Aunt Kay appreciates creative and unusual things, I have decided to hire a singing messenger to deliver my thanks. In this context, the messenger is a tool, but one that needs instructions from me. I have to tell the messenger where Aunt Kay lives, what time I would like the message to be delivered, and what lyrics I want sung. A computer program is similar to my instructions to the messenger.

The story of Aunt Kay uses a familiar context to set the stage for a useful point of view concerning computers and computer programs. The following list summarizes the key aspects of this point of view.

A computer is a tool that can be used to implement a plan for solving a problem.

A computer program is a set of instructions for a computer. These instructions describe the steps that the computer must follow to implement a plan.

An algorithm is a plan for solving a problem.

A person must design an algorithm.

A person must translate an algorithm into a computer program.

This point of view sets the stage for a process that we will use to develop solutions to Jeroo problems. The basic process is important because it can be used to solve a wide variety of problems, including ones where the solution will be written in some other programming language.

An Algorithm Development Process

Every problem solution starts with a plan. That plan is called an algorithm.

There are many ways to write an algorithm. Some are very informal, some are quite formal and mathematical in nature, and some are quite graphical. The instructions for connecting a DVD player to a television are an algorithm. A mathematical formula such as πR² is a special case of an algorithm. The form is not particularly important as long as it provides a good way to describe and check the logic of the plan.

The development of an algorithm (a plan) is a key step in solving a problem. Once we have an algorithm, we can translate it into a computer program in some programming language. Our algorithm development process consists of five major steps.

Step 1: Obtain a description of the problem.
Step 2: Analyze the problem.
Step 3: Develop a high-level algorithm.
Step 4: Refine the algorithm by adding more detail.
Step 5: Review the algorithm.

Step 1: Obtain a description of the problem.

This step is much more difficult than it appears. In the following discussion, the word client refers to someone who wants to find a solution to a problem, and the word developer refers to someone who finds a way to solve the problem. The developer must create an algorithm that will solve the client's problem.

The client is responsible for creating a description of the problem, but this is often the weakest part of the process. It's quite common for a problem description to suffer from one or more of the following types of defects: (1) the description relies on unstated assumptions, (2) the description is ambiguous, (3) the description is incomplete, or (4) the description has internal contradictions. These defects are seldom due to carelessness by the client. Instead, they are due to the fact that natural languages (English, French, Korean, etc.) are rather imprecise. Part of the developer's responsibility is to identify defects in the description of a problem, and to work with the client to remedy those defects.

Step 2: Analyze the problem.

The purpose of this step is to determine both the starting and ending points for solving the problem. This process is analogous to a mathematician determining what is given and what must be proven. A good problem description makes it easier to perform this step.

When determining the starting point, we should start by seeking answers to the following questions:

What data are available?

Where is that data?

What formulas pertain to the problem?

What rules exist for working with the data?

What relationships exist among the data values?

When determining the ending point, we need to describe the characteristics of a solution. In other words, how will we know when we're done? Asking the following questions often helps to determine the ending point.

What new facts will we have?

What items will have changed?

What changes will have been made to those items?

What things will no longer exist?

Step 3: Develop a high-level algorithm.

An algorithm is a plan for solving a problem, but plans come in several levels of detail. It's usually better to start with a high-level algorithm that includes the major part of a solution, but leaves the details until later. We can use an everyday example to demonstrate a high-level algorithm.

Problem: I need to send a birthday card to my brother, Mark.

Analysis: I don't have a card. I prefer to buy a card rather than make one myself.

High-level algorithm:

Go to a store that sells greeting cards
Select a card
Purchase a card
Mail the card

This algorithm is satisfactory for daily use, but it lacks details that would have to be added were a computer to carry out the solution. These details include answers to questions such as the following.

"Which store will I visit?"

"How will I get there: walk, drive, ride my bicycle, take the bus?"

"What kind of card does Mark like: humorous, sentimental, risqué?"

These kinds of details are considered in the next step of our process.

Step 4: Refine the algorithm by adding more detail.

A high-level algorithm shows the major steps that need to be followed to solve a problem. Now we need to add details to these steps, but how much detail should we add? Unfortunately, the answer to this question depends on the situation. We have to consider who (or what) is going to implement the algorithm and how much that person (or thing) already knows how to do. If someone is going to purchase Mark's birthday card on my behalf, my instructions have to be adapted to whether or not that person is familiar with the stores in the community and how well the purchaser knows my brother's taste in greeting cards.

When our goal is to develop algorithms that will lead to computer programs, we need to consider the capabilities of the computer and provide enough detail so that someone else could use our algorithm to write a computer program that follows the steps in our algorithm. As with the birthday card problem, we need to adjust the level of detail to match the ability of the programmer. When in doubt, or when you are learning, it is better to have too much detail than to have too little.

Most of our examples will move from a high-level to a detailed algorithm in a single step, but this is not always reasonable. For larger, more complex problems, it is common to go through this process several times, developing intermediate level algorithms as we go. Each time, we add more detail to the previous algorithm, stopping when we see no benefit to further refinement. This technique of gradually working from a high-level to a detailed algorithm is often called stepwise refinement .

Step 5: Review the algorithm.

The final step is to review the algorithm. What are we looking for? First, we need to work through the algorithm step by step to determine whether or not it will solve the original problem. Once we are satisfied that the algorithm does provide a solution to the problem, we start to look for other things. The following questions are typical of ones that should be asked whenever we review an algorithm. Asking these questions and seeking their answers is a good way to develop skills that can be applied to the next problem.

Does this algorithm solve a very specific problem or does it solve a more general problem ? If it solves a very specific problem, should it be generalized?

For example, an algorithm that computes the area of a circle having radius 5.2 meters (formula π*5.2²) solves a very specific problem, but an algorithm that computes the area of any circle (formula π*R²) solves a more general problem.

Can this algorithm be simplified ?

One formula for computing the perimeter of a rectangle is:

length + width + length + width

A simpler formula would be:

2.0 * ( length + width )

Is this solution similar to the solution to another problem? How are they alike? How are they different?

For example, consider the following two formulae:

Rectangle area = length * width
Triangle area = 0.5 * base * height

Similarities: Each computes an area. Each multiplies two measurements.

Differences: Different measurements are used. The triangle formula contains 0.5.

Hypothesis: Perhaps every area formula involves multiplying two measurements.

Example 4.1: Pick and Plant

This section contains an extended example that demonstrates the algorithm development process. To complete the algorithm, we need to know that every Jeroo can hop forward, turn left and right, pick a flower from its current location, and plant a flower at its current location.

Problem Statement (Step 1)

A Jeroo starts at (0, 0) facing East with no flowers in its pouch. There is a flower at location (3, 0). Write a program that directs the Jeroo to pick the flower and plant it at location (3, 2). After planting the flower, the Jeroo should hop one space East and stop. There are no other nets, flowers, or Jeroos on the island.


Analysis of the Problem (Step 2)

The flower is exactly three spaces ahead of the jeroo.

The flower is to be planted exactly two spaces South of its current location.

The Jeroo is to finish facing East one space East of the planted flower.

There are no nets to worry about.

High-level Algorithm (Step 3)

Let's name the Jeroo Bobby. Bobby should do the following:

Get the flower
Put the flower
Hop East

Detailed Algorithm (Step 4)

Get the flower
    Hop 3 times
    Pick the flower
Put the flower
    Turn right
    Hop 2 times
    Plant a flower
Hop East
    Turn left
    Hop once

Review the Algorithm (Step 5)

The high-level algorithm partitioned the problem into three rather easy subproblems. This seems like a good technique.

This algorithm solves a very specific problem because the Jeroo and the flower are in very specific locations.

This algorithm is actually a solution to a slightly more general problem in which the Jeroo starts anywhere, and the flower is 3 spaces directly ahead of the Jeroo.

Java Code for "Pick and Plant"

A good programmer doesn't write a program all at once. Instead, the programmer will write and test the program in a series of builds. Each build adds to the previous one. The high-level algorithm will guide us in this process.

FIRST BUILD

To see this solution in action, create a new Greenfoot4Sofia scenario and use the Edit Palettes Jeroo menu command to make the Jeroo classes visible. Right-click on the Island class and create a new subclass with the name of your choice. This subclass will hold your new code.

The recommended first build contains three things:

The main method (here myProgram() in your island subclass).

Declaration and instantiation of every Jeroo that will be used.

The high-level algorithm in the form of comments.

The instantiation at the beginning of myProgram() places bobby at (0, 0), facing East, with no flowers.
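The listing itself isn’t reproduced here, but a first build along those lines might look like the sketch below. It assumes the standard Jeroo methods and the Greenfoot4Sofia setup described above (an Island subclass whose main method is myProgram()); names may differ slightly in your environment.

```java
// Sketch of the first build, inside your Island subclass
public void myProgram() {
    Jeroo bobby = new Jeroo();   // starts at (0, 0), facing East, with no flowers

    // --- high-level algorithm, recorded as comments ---
    // Get the flower
    // Put the flower
    // Hop East
}
```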

Once the first build is working correctly, we can proceed to the others. In this case, each build will correspond to one step in the high-level algorithm. It may seem like a lot of work to use four builds for such a simple program, but doing so helps establish habits that will become invaluable as the programs become more complex.

SECOND BUILD

This build adds the logic to "get the flower", which in the detailed algorithm (step 4 above) consists of hopping 3 times and then picking the flower. The new code is indicated by comments that wouldn't appear in the original (they are just here to call attention to the additions). The blank lines help show the organization of the logic.

By taking a moment to run the work so far, you can confirm whether or not this step in the planned algorithm works as expected.

THIRD BUILD

This build adds the logic to "put the flower". New code is indicated by the comments that are provided here to mark the additions.

FOURTH BUILD (final)
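Putting the builds together, the completed program might look like this sketch (same assumptions about the Jeroo method names as above):

```java
// Sketch of the final build: the full "Pick and Plant" program
public void myProgram() {
    Jeroo bobby = new Jeroo();   // (0, 0), facing East, no flowers

    // --- Get the flower ---
    bobby.hop(3);        // hop 3 times to reach (3, 0)
    bobby.pick();        // pick the flower

    // --- Put the flower ---
    bobby.turn(RIGHT);   // face South
    bobby.hop(2);        // hop 2 times to reach (3, 2)
    bobby.plant();       // plant the flower

    // --- Hop East ---
    bobby.turn(LEFT);    // face East again
    bobby.hop();         // hop one space and stop
}
```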

Example 4.2: Replace Net with Flower

This section contains a second example that demonstrates the algorithm development process.

There are two Jeroos. One Jeroo starts at (0, 0) facing North with one flower in its pouch. The second starts at (0, 2) facing East with one flower in its pouch. There is a net at location (3, 2). Write a program that directs the first Jeroo to give its flower to the second one. After receiving the flower, the second Jeroo must disable the net, and plant a flower in its place. After planting the flower, the Jeroo must turn and face South. There are no other nets, flowers, or Jeroos on the island.

Jeroo_2 is exactly two spaces behind Jeroo_1.

The only net is exactly three spaces ahead of Jeroo_2.

Each Jeroo has exactly one flower.

Jeroo_2 will have two flowers after receiving one from Jeroo_1. One flower must be used to disable the net. The other flower must be planted at the location of the net, i.e. (3, 2).

Jeroo_1 will finish at (0, 1) facing South.

Jeroo_2 is to finish at (3, 2) facing South.

Each Jeroo will finish with 0 flowers in its pouch. One flower was used to disable the net, and the other was planted.

Let's name the first Jeroo Ann and the second one Andy.

High-level algorithm:

Ann should do the following:
    Find Andy (but don't collide with him)
    Give a flower to Andy (he will be straight ahead)
After receiving the flower, Andy should do the following:
    Find the net (but don't hop onto it)
    Disable the net
    Plant a flower at the location of the net
    Face South

Detailed algorithm:

Ann should do the following:
    Find Andy
        Turn around (either left or right twice)
        Hop (to location (0, 1))
    Give a flower to Andy
        Give ahead
Now Andy should do the following:
    Find the net
        Hop twice (to location (2, 2))
    Disable the net
        Toss
    Plant a flower at the location of the net
        Hop (to location (3, 2))
        Plant a flower
    Face South
        Turn right

The high-level algorithm helps manage the details.

This algorithm solves a very specific problem, but the specific locations are not important. The only thing that is important is the starting location of the Jeroos relative to one another and the location of the net relative to the second Jeroo's location and direction.

Java Code for "Replace Net with Flower"

As before, the code should be written incrementally as a series of builds. Four builds will be suitable for this problem. As usual, the first build will contain the main method, the declaration and instantiation of the Jeroo objects, and the high-level algorithm in the form of comments. The second build will have Ann give her flower to Andy. The third build will have Andy locate and disable the net. In the final build, Andy will place the flower and turn South.

This build creates the main method, instantiates the Jeroos, and outlines the high-level algorithm. In this example, the main method would be myProgram() contained within a subclass of Island .

This build adds the logic for Ann to locate Andy and give him a flower.

This build adds the logic for Andy to locate and disable the net.

This build adds the logic for Andy to place a flower at (3, 2) and turn South.
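As with the first example, the listings themselves aren’t reproduced here. A sketch of the completed program, following the detailed algorithm above and assuming the same Jeroo method names (plus a constructor that takes a location, direction, and flower count), might look like this:

```java
// Sketch of the completed "Replace Net with Flower" program
public void myProgram() {
    Jeroo ann  = new Jeroo(0, 0, NORTH, 1);   // assumed constructor: (x, y, direction, flowers)
    Jeroo andy = new Jeroo(0, 2, EAST, 1);

    // --- Ann: find Andy and give him her flower ---
    ann.turn(LEFT);      // turn around: North -> West -> South
    ann.turn(LEFT);
    ann.hop();           // move to (0, 1); Andy is now straight ahead
    ann.give(AHEAD);     // hand the flower to Andy

    // --- Andy: disable the net and plant a flower in its place ---
    andy.hop(2);         // move to (2, 2); the net is straight ahead
    andy.toss();         // toss a flower ahead to disable the net
    andy.hop();          // move onto (3, 2), where the net was
    andy.plant();        // plant the remaining flower
    andy.turn(RIGHT);    // East -> South
}
```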


Understanding Algorithms: The Key to Problem-Solving Mastery


The world of computer science is a fascinating realm, where intricate concepts and technologies continuously shape the way we interact with machines. Among the vast array of ideas and principles, few are as fundamental and essential as algorithms. These powerful tools serve as the building blocks of computation, enabling computers to solve problems, make decisions, and process vast amounts of data efficiently.

An algorithm can be thought of as a step-by-step procedure or a set of instructions designed to solve a specific problem or accomplish a particular task. It represents a systematic approach to finding solutions and provides a structured way to tackle complex computational challenges. Algorithms are at the heart of various applications, from simple calculations to sophisticated machine learning models and complex data analysis.

Understanding algorithms and their inner workings is crucial for anyone interested in computer science. They serve as the backbone of software development, powering the creation of innovative applications across numerous domains. By comprehending the concept of algorithms, aspiring computer science enthusiasts gain a powerful toolset to approach problem-solving and gain insight into the efficiency and performance of different computational methods.

In this article, we aim to provide a clear and accessible introduction to algorithms, focusing on their importance in problem-solving and exploring common types such as searching, sorting, and recursion. By delving into these topics, readers will gain a solid foundation in algorithmic thinking and discover the underlying principles that drive the functioning of modern computing systems. Whether you’re a beginner in the world of computer science or seeking to deepen your understanding, this article will equip you with the knowledge to navigate the fascinating world of algorithms.

What are Algorithms?

At its core, an algorithm is a systematic, step-by-step procedure or set of rules designed to solve a problem or perform a specific task. It provides clear instructions that, when followed meticulously, lead to the desired outcome.

Consider an algorithm to be akin to a recipe for your favorite dish. When you decide to cook, the recipe is your go-to guide. It lists out the ingredients you need, their exact quantities, and a detailed, step-by-step explanation of the process, from how to prepare the ingredients to how to mix them, and finally, the cooking process. It even provides an order for adding the ingredients and specific times for cooking to ensure the dish turns out perfect.

In the same vein, an algorithm, within the realm of computer science, provides an explicit series of instructions to accomplish a goal. This could be a simple goal like sorting a list of numbers in ascending order, a more complex task such as searching for a specific data point in a massive dataset, or even a highly complicated task like determining the shortest path between two points on a map (think Google Maps). No matter the complexity of the problem at hand, there’s always an algorithm working tirelessly behind the scenes to solve it.

Furthermore, algorithms aren’t limited to specific programming languages. They are universal and can be implemented in any language. This is why understanding the fundamental concept of algorithms can empower you to solve problems across various programming languages.

The Importance of Algorithms

Algorithms are indisputably the backbone of all computational operations. They’re a fundamental part of the digital world that we interact with daily. When you search for something on the web, an algorithm is tirelessly working behind the scenes to sift through millions, possibly billions, of web pages to bring you the most relevant results. When you use a GPS to find the fastest route to a location, an algorithm is computing all possible paths, factoring in variables like traffic and road conditions, to provide you the optimal route.

Consider the world of social media, where algorithms curate personalized feeds based on our previous interactions, or in streaming platforms where they recommend shows and movies based on our viewing habits. Every click, every like, every search, and every interaction is processed by algorithms to serve you a seamless digital experience.

In the realm of computer science and beyond, everything revolves around problem-solving, and algorithms are our most reliable problem-solving tools. They provide a structured approach to problem-solving, breaking down complex problems into manageable steps and ensuring that every eventuality is accounted for.

Moreover, an algorithm’s efficiency is not just a matter of preference but a necessity. Given that computers have finite resources — time, memory, and computational power — the algorithms we use need to be optimized to make the best possible use of these resources. Efficient algorithms are the ones that can perform tasks more quickly, using less memory, and provide solutions to complex problems that might be infeasible with less efficient alternatives.

In the context of massive datasets (the likes of which are common in our data-driven world), the difference between a poorly designed algorithm and an efficient one could be the difference between a solution that takes years to compute and one that takes mere seconds. Therefore, understanding, designing, and implementing efficient algorithms is a critical skill for any computer scientist or software engineer.

Hence, as a computer science beginner, you are starting a journey where algorithms will be your best allies — universal keys capable of unlocking solutions to a myriad of problems, big or small.

Common Types of Algorithms: Searching and Sorting

Two of the most ubiquitous types of algorithms that beginners often encounter are searching and sorting algorithms.

Searching algorithms are designed to retrieve specific information from a data structure, like an array or a database. A simple example is the linear search, which works by checking each element in the array until it finds the one it’s looking for. Although easy to understand, this method isn’t efficient for large datasets, which is where more complex algorithms like binary search come in.

Binary search, on the other hand, is like looking up a word in the dictionary. Instead of checking each word from beginning to end, you open the dictionary in the middle and see if the word you’re looking for should be on the left or right side, thereby reducing the search space by half with each step.
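For a concrete picture, here is a small illustrative Python sketch of binary search over a sorted list:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1

    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid              # found it
        elif items[mid] < target:
            low = mid + 1           # discard the left half
        else:
            high = mid - 1          # discard the right half

    return -1


print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```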

Sorting algorithms, meanwhile, are designed to arrange elements in a particular order. A simple sorting algorithm is bubble sort, which works by repeatedly swapping adjacent elements if they’re in the wrong order. Again, while straightforward, it’s not efficient for larger datasets. More advanced sorting algorithms, such as quicksort or mergesort, have been designed to sort large data collections more efficiently.
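Bubble sort is simple enough to show in a few lines; here is an illustrative Python sketch (not optimized, and real code would normally use the language’s built-in sorting functions):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order elements."""
    n = len(items)
    for i in range(n - 1):
        # After each pass, the largest remaining value has "bubbled" to the end
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items


print(bubble_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```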

Diving Deeper: Graph and Dynamic Programming Algorithms

Building upon our understanding of searching and sorting algorithms, let’s delve into two other families of algorithms often encountered in computer science: graph algorithms and dynamic programming algorithms.

A graph is a mathematical structure that models the relationship between pairs of objects. Graphs consist of vertices (or nodes) and edges (where each edge connects a pair of vertices). Graphs are commonly used to represent real-world systems such as social networks, web pages, biological networks, and more.

Graph algorithms are designed to solve problems centered around these structures. Common examples include traversal algorithms such as breadth-first search and depth-first search, and shortest-path algorithms such as Dijkstra's algorithm.

Dynamic programming is a powerful method used in optimization problems, where the main problem is broken down into simpler, overlapping subproblems. The solutions to these subproblems are stored and reused to build up the solution to the main problem, saving computational effort.

Classic dynamic programming problems include computing the Fibonacci sequence and solving the knapsack problem, both of which build their answers out of the stored solutions to smaller subproblems.

Understanding these algorithm families — searching, sorting, graph, and dynamic programming algorithms — not only equips you with powerful tools to solve a variety of complex problems but also serves as a springboard to dive deeper into the rich ocean of algorithms and computer science.

Recursion: A Powerful Technique

While searching and sorting represent specific problem domains, recursion is a broad technique used in a wide range of algorithms. Recursion involves breaking down a problem into smaller, more manageable parts, and a function calling itself to solve these smaller parts.

To visualize recursion, consider the task of calculating factorial of a number. The factorial of a number n (denoted as n! ) is the product of all positive integers less than or equal to n . For instance, the factorial of 5 ( 5! ) is 5 x 4 x 3 x 2 x 1 = 120 . A recursive algorithm for finding factorial of n would involve multiplying n by the factorial of n-1 . The function keeps calling itself with a smaller value of n each time until it reaches a point where n is equal to 1, at which point it starts returning values back up the chain.
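In code, that recursive definition is almost a direct transcription; here is a small Python sketch:

```python
def factorial(n: int) -> int:
    if n <= 1:                        # base case: 1! (and 0!) is 1
        return 1
    return n * factorial(n - 1)       # recursive case: n! = n * (n - 1)!


print(factorial(5))  # 120
```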

Algorithms are truly the heart of computer science, transforming raw data into valuable information and insight. Understanding their functionality and purpose is key to progressing in your computer science journey. As you continue your exploration, remember that each algorithm you encounter, no matter how complex it may seem, is simply a step-by-step procedure to solve a problem.

We’ve just scratched the surface of the fascinating world of algorithms. With time, patience, and practice, you will learn to create your own algorithms and start solving problems with confidence and efficiency.



What Is an Algorithm? | Definition & Examples

Published on August 9, 2023 by Kassiani Nikolopoulou . Revised on August 29, 2023.

An algorithm is a set of steps for accomplishing a task or solving a problem. Typically, algorithms are executed by computers, but we also rely on algorithms in our daily lives. Each time we follow a particular step-by-step process, like making coffee in the morning or tying our shoelaces, we are in fact following an algorithm.

In the context of computer science , an algorithm is a mathematical process for solving a problem using a finite number of steps. Algorithms are a key component of any computer program and are the driving force behind various systems and applications, such as navigation systems, search engines, and music streaming services.


Table of contents

  • What is an algorithm?
  • How do algorithms work?
  • Examples of algorithms
  • Frequently asked questions about algorithms

An algorithm is a sequence of instructions that a computer must perform to solve a well-defined problem. It essentially defines what the computer needs to do and how to do it. Algorithms can instruct a computer how to perform a calculation, process data, or make a decision.

The best way to understand an algorithm is to think of it as a recipe that guides you through a series of well-defined actions to achieve a specific goal. Just like a recipe produces a replicable result, algorithms ensure consistent and reliable outcomes for a wide range of tasks in the digital realm.

And just like there are numerous ways to make, for example, chocolate chip cookies by following different steps or using slightly different ingredients, different algorithms can be designed to solve the same problem, with each taking a distinct approach but achieving the same result.

Algorithms are virtually everywhere around us. Examples include the following:

  • Search engines rely on algorithms to find and present relevant results as quickly as possible
  • Social media platforms use algorithms to prioritize the content that we see in our feeds, taking into account factors like our past behavior, the popularity of posts, and relevance.
  • With the help of algorithms, navigation apps determine the most efficient route for us to reach our destination.
To count as an algorithm, a set of instructions must also have certain properties:

  • It must be correct . In other words, it should take a given problem and provide the right answer or result.
  • It must consist of clear, practical steps that can be completed in a limited time, whether by a person or the machine that must execute the algorithm. For example, the instructions in a cookie recipe might be considered sufficiently concrete for a human cook, but they would not be specific enough for programming an automated cookie-making machine.
  • There should be no confusion about which step comes next , even if choices must be made (e.g., when using “if” statements).
  • It must have a set number of steps (not an infinite number) that can be managed using loops (statements describing repeated actions or iterations).
  • It must eventually reach an endpoint and not get stuck in a never-ending loop.


Algorithms use a set of initial data or input , process it through a series of logical steps or rules, and produce the output (i.e., the outcome, decision, or result).


If you want to make chocolate chip cookies, for instance, the input would be the ingredients and quantities, the process would be the recipe you choose to follow, and the output would be the cookies.

Algorithms are eventually expressed in a programming language that a computer can process. However, when an algorithm is being created, it will be people, not a computer, who will need to understand it. For this reason, as a first step, algorithms are written as plain instructions.

For example, consider a simple algorithm that calculates the square of a number:

  • Input: the input data is a single-digit number (e.g., 5).
  • Transformation/processing: the algorithm takes the input (number 5) and performs the specific operation (i.e., multiplies the number by itself).
  • Output: the result of the calculation is the square of the input number, which, in this case, would be 25 (since 5 * 5 = 25).

We could express this as an algorithm in the following way:

Algorithm: Calculate the square of a number

  • Input the number (N) whose square you want to find.
  • Multiply the number (N) by itself.
  • Store the result of the multiplication in a variable (result).
  • Output the value of the variable (result), which represents the square of the input number.
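Written as code (for example in Python), the same algorithm is only a few lines; this is an illustrative sketch:

```python
def square(n):
    result = n * n      # multiply the number by itself and store the result
    return result       # output the value of result


print(square(5))  # 25
```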

It is important to keep in mind that an algorithm is not the same as a program or code. It is the logic or plan for solving a problem represented as a simple step-by-step description. Code is the implementation of the algorithm in a specific programming language (like C++ or Python), while a program is an implementation of code that instructs a computer on how to execute an algorithm and perform a task.

Instead of telling a computer exactly what to do, some algorithms allow computers to learn on their own and improve their performance on a specific task. These machine learning algorithms use data to identify patterns and make predictions or conduct data mining to uncover hidden insights in data that can inform business decisions.

Broadly speaking, there are three different types of algorithms:

  • Linear sequence algorithms follow a specific set of steps, one after the other. Just like following a recipe, each step depends on the success of the previous one.
  • Conditional algorithms make a decision between two courses of action, depending on whether a given condition is true or false. For example, in the context of a cookie recipe, you would include the step “if the dough is too sticky, you might need to refrigerate it.”
  • Looping algorithms repeat a set of instructions until a condition is met. For example, a looping algorithm could be used to handle the process of making multiple cookies from a single batch of dough. The algorithm would repeat a specific set of instructions to form and bake cookies until all the dough has been used.

Algorithms are fundamental tools for problem-solving in both the digital world and many real-life scenarios. Each time we try to solve a problem by breaking it down into smaller, manageable steps, we are in fact using algorithmic thinking.

For example, deciding what to wear in the morning can be broken down into algorithmic steps:

  • Identify which clothes are clean.
  • Consider the weather forecast for the day.
  • Consider the occasion for which you are getting dressed (e.g., work or school etc.).
  • Consider personal preferences (e.g., style or which items match).

In mathematics, algorithms are standard methods for performing calculations or solving equations because they are efficient, reliable, and applicable to various situations.

Suppose you want to add the numbers 345 and 278. You would follow a set of steps (i.e., the standard algorithm for addition):

  • Write down the numbers so the digits align.
  • Start from the rightmost digits (the ones place) and add them together: 5 + 8 = 13. Write down the 3 and carry over the 1 to the next column.
  • Move to the next column (the tens place) and add the digits along with the carried-over value: 4 + 7 + 1 = 12. Write down the 2 and carry over the 1 to the next column.
  • Move to the leftmost column (the hundreds place) and add the digits along with the carried-over value: 3 + 2 + 1 = 6. Write down the 6.

The final result is 623


Navigation systems are another example of the use of algorithms. Such systems use algorithms to help you find the easiest and fastest route to your destination while avoiding traffic jams and roadblocks.



In computer science, an algorithm is a list of unambiguous instructions that specify successive steps to solve a problem or perform a task. Algorithms help computers execute tasks like playing games or sorting a list of numbers. In other words, computers use algorithms to understand what to do and give you the result you need.

Algorithms and artificial intelligence (AI) are not the same; however, they are closely related.

  • Artificial intelligence is a broad term describing computer systems performing tasks usually associated with human intelligence like decision-making, pattern recognition, or learning from experience.
  • Algorithms are the instructions that AI uses to carry out these tasks, therefore we could say that algorithms are the building blocks of AI—even though AI involves more advanced capabilities beyond just following instructions.

Algorithms and computer programs are sometimes used interchangeably, but they refer to two distinct but interrelated concepts.

  • An algorithm is a step-by-step instruction for solving a problem that is precise yet general.
  • Computer programs are specific implementations of an algorithm in a specific programming language. In other words, the algorithm is the high-level description of an idea, while the program is the actual implementation of that idea.

Algorithms are valuable to us because they:

  • Form the basis of much of the technology we use in our daily lives, from mobile apps to search engines.
  • Power innovations in various industries that augment our abilities (e.g., AI assistants or medical diagnosis).
  • Help analyze large volumes of data, discover patterns and make informed decisions in a fast and efficient way, at a scale humans are simply not able to do.
  • Automate processes. By streamlining tasks, algorithms increase efficiency, reduce errors, and save valuable time.



What Is an Algorithm in Psychology?

Definition, Examples, and Uses

  • How Does an Algorithm Work?
  • Examples of Algorithms
  • Reasons to Use Algorithms
  • Potential Pitfalls
  • Algorithms vs. Heuristics

When solving a problem , choosing the right approach is often the key to arriving at the best solution. In psychology, one of these problem-solving approaches is known as an algorithm. While often thought of purely as a mathematical term, the same type of process can be followed in psychology to find the correct answer when solving a problem or making a decision.

An algorithm is a defined set of step-by-step procedures that provides the correct answer to a particular problem. By following the instructions correctly, you are guaranteed to arrive at the right answer.

At a Glance

Algorithms involve following specific steps in order to reach a solution to a problem. They can be a great tool when you need an accurate solution but tend to be more time-consuming than other methods.

This article discusses how algorithms are used as an approach to problem-solving. It also covers how psychologists compare this approach to other problem-solving methods.

An algorithm is often expressed in the form of a graph, where a square represents each step. Arrows then branch off from each step to point to possible directions that you may take to solve the problem.

In some cases, you must follow a particular set of steps to solve the problem. In other instances, you might be able to follow different paths that will all lead to the same solution.

Algorithms are essential step-by-step approaches to solving a problem. Rather than guessing or using trial-and-error, this approach is more likely to guarantee a specific solution. 

Using an algorithm can help you solve day-to-day problems you face, but it can also help mental health professionals find ways to help people cope with mental health problems.

For example, a therapist might use an algorithm to treat a person experiencing something like anxiety. Because the therapist knows that a particular approach is likely to be effective, they would recommend a series of specific, focused steps as part of their intervention.

There are many different examples of how algorithms can be used in daily life. Some common ones include:

  • A recipe for cooking a particular dish
  • The method a search engine uses to find information on the internet
  • Instructions for how to assemble a bicycle
  • Instructions for how to solve a Rubik's cube
  • A process to determine what type of treatment is most appropriate for certain types of mental health conditions

Doctors and mental health professionals often use algorithms to diagnose mental disorders. For example, they may use a step-by-step approach when they evaluate people.

This might involve asking the individual about their symptoms and their medical history. The doctor may also conduct lab tests, physical exams, or psychological assessments.

Using this information, they then utilize the "Diagnostic and Statistical Manual of Mental Disorders" (DSM-5-TR) to make a diagnosis.

Reasons to Use Algorithms in Psychology

The upside of using an algorithm to solve a problem or make a decision is that it yields the best possible answer every time. There are situations where using an algorithm can be the best approach:

When Accuracy Is Crucial

Algorithms can be particularly useful in situations when accuracy is critical. They are also a good choice when similar problems need to be frequently solved.

Computer programs can often be designed to speed up this process. Data then needs to be placed in the system so that the algorithm can be executed for the correct solution.

Artificial intelligence may also be a tool for making clinical assessments in healthcare situations.

When Each Decision Needs to Follow the Same Process

Such step-by-step approaches can be useful in situations where each decision must be made following the same process. Because the process follows a prescribed procedure, you can be sure that you will reach the correct answer each time.

Potential Pitfalls When Using Algorithms

The downside of using an algorithm to solve the problem is that this process tends to be very time-consuming.

So if you face a situation where a decision must be made very quickly, you might be better off using a different problem-solving strategy.

For example, an emergency room doctor making a decision about how to treat a patient could use an algorithm approach. However, this would be very time-consuming and treatment needs to be implemented quickly.

In this instance, the doctor would instead rely on their expertise and past experiences to very quickly choose what they feel is the right treatment approach.

Algorithms can sometimes be very complex and may only apply to specific situations. This can limit their use and make them less generalizable when working with larger populations.

Algorithms can be a great problem-solving choice when the answer needs to be 100% accurate or when each decision needs to follow the same process. A different approach might be needed if speed is the primary concern.

In psychology, algorithms are frequently contrasted with heuristics. Both can be useful when problem-solving, but it is important to understand the differences between them.

What Is a Heuristic?

A heuristic is a mental shortcut that allows people to quickly make judgments and solve problems.

These mental shortcuts are typically informed by our past experiences and allow us to act quickly. However, heuristics are really more rules of thumb; they don't always guarantee a correct solution.

So how do you determine when to use a heuristic and when to use an algorithm? When problem-solving, deciding which method to use depends on the need for either accuracy or speed.

When to Use an Algorithm

If complete accuracy is required, it is best to use an algorithm. By using an algorithm, accuracy is increased and potential mistakes are minimized.

If you are working in a situation where you absolutely need the correct or best possible answer, your best bet is to use an algorithm. When you are solving problems for your math homework, you don't want to risk your grade on a guess.

By following an algorithm, you can ensure that you will arrive at the correct answer to each problem.

When to Use a Heuristic

On the other hand, if time is an issue, then it may be best to use a heuristic. Mistakes may occur, but this approach allows for speedy decisions when time is of the essence.

Heuristics are more commonly used in everyday situations, such as figuring out the best route to get from point A to point B. While you could use an algorithm to map out every possible route and determine which one would be the fastest, that would be a very time-consuming process. Instead, your best option would be to use a route that you know has worked well in the past.

Psychologists who study problem-solving have described two main processes people utilize to reach conclusions: algorithms and heuristics. Knowing which approach to use is important because these two methods can vary in terms of speed and accuracy.

While each situation is unique, you may want to use an algorithm when being accurate is the primary concern. But if time is of the essence, then an algorithm is likely not the best choice.


Ezra Schwepker

Posted on Apr 10, 2019

Algorithm Problem Solving Strategies

Algorithm puzzles are an unfortunately common way to weed out candidates in the application process. However, when you aren't stressing over the job search, they're actually a lot of fun, like crossword puzzles for coders.

Solving them presents unique challenges that you won't encounter when spinning up Yet Another Crud App and exposes you to concepts that you might not already be familiar with. Practicing algorithm challenges will improve your broader problem solving abilities, as well as cement a problem solving process that is more generically useful.

Much like other types of puzzles, there are strategies that give you an early foothold into the problem and a way to break it down into smaller, more approachable chunks. With an actual puzzle, you might sort the pieces into coherent groups of similar colors or features, look for obvious matches, and grow outwards. With minesweeper, you might start with a random click, and then move around the exposed edges, marking off obvious mines and exposing obvious clear areas, only clicking randomly again once you've exhausted all other possibilities.

While there are strategies for approaching similar kinds of algorithms, I would recommend that you start with a broader problem solving strategy that you embrace as a general habit, and not just when you're grinding LeetCode as interview prep.

Be Strategic, Think First

Rather than diving in, approach the problem in stages:

  • Analyze the problem
  • Restate the problem
  • Write out examples of input and output
  • Break the problem into its component parts
  • Outline a solution in pseudo-code
  • Step through your example data with your pseudo-code
  • Test your solution against your examples

(If you're familiar with this approach, skip down to Algorithm Patterns)

Analyze the Problem

You may have had a flash of insight when you first saw the problem. This is typically your mind connecting to some prior experience. Hold on to that insight! However, you should still spend some time looking at the problem, particularly for ways that your insight differs from the actual question.

Any well-written puzzle has the components needed to answer the problem contained in a sentence or two. However, just because you read the problem doesn't mean you understand it. And if you don't understand the problem, you will stumble about aimlessly, or solve what you assumed the problem was.

Look for key words which determine the shape of the challenge.

Identify the input. Identify the desired output. Identify critical keywords and phrases.

For example, LeetCode #26

Given a [sorted] [array] nums, [remove the duplicates] [in-place] such that each element appear only once and [return the new length]. Do not allocate extra space for another array, you must do this by modifying the input array in-place with [O(1) extra memory].
Input:

  • an array, so we know we're probably going to do some kind of iteration
  • an array of numbers; that's implied rather than specifically stated, and it doesn't matter much, since the same set of conditionals works either way

Output:

  • the length of the altered array
  • side effect: a modified array

Critical Words and Phrases:

  • sorted: repeated elements will be next to each other
  • remove...duplicates
  • in-place: the array itself must be destructively modified. This constraint dictates what array methods we may use.
  • O(1) extra memory: we are limited to O(1) space complexity, allowing us to define variables, but not create a copy of the array.

Restate the Problem

Now, in our own words, rephrase this so it is meaningful to ourselves. If you are in an interviewing situation, you may restate it back to the interviewer, both to set it in your own mind, and to make sure that you heard and understood them correctly.

Given a sorted array of numbers, passed by reference, destructively modify the original array in-place by removing duplicates, so that each value appears only once. Return the length of the modified array.

Write out Example Inputs and Expected Outputs

All we are doing is mapping inputs to outputs. The challenge is figuring out how to get from A to B; first, however, we need to establish what A and B are. Even if you are given test cases, write your own. Looking at something doesn't create nearly as much understanding as doing it for yourself.

This is also a great time to explore your understanding of the problem and look for quirks that might trip up a naive solution. That includes edge cases like empty inputs, an array filled with duplicates of the same value, a massive data set, etc. We don't need to worry about anything outside of the constraints of the problem.

Write out at least 3 examples:
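
For the remove-duplicates problem above, a handful of hand-written examples might look like the following (these particular arrays are my own illustrations, not the post's originals):

// input               -> returned length, array after in-place modification
// [1, 1, 2]           -> 2, nums becomes [1, 2]
// [0, 0, 1, 1, 1, 2]  -> 3, nums becomes [0, 1, 2]
// [1, 1, 1, 1, 1]     -> 1, nums becomes [1]
// []                  -> 0, nums stays []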

Given the inputs, do you have enough information to map to the result? If you don't, take a step back and continue examining the problem before continuing. If you are interviewing, feel free to ask for clarification.

Look for a simple and consistent process that you can apply regardless of the value to reach the outcome. If you end up with a convoluted series of steps and exceptions, you've likely gone too far and passed over a simpler solution.

Break the Problem into Small Parts

Starting with the simplest possible example, simplify the problem down to an essential puzzle and build upon that. In this case, that is an array of three elements, two duplicated, e.g. [1, 1, 2]. Reducing the problem to such a small case makes it more approachable and clarifies the first step you need to take. Your task from there is to develop a procedure which solves that simple case and holds true for all other cases in the problem set.

So we know we need to do a couple things:

  • Iterate through an array
  • Keep track of where in the array we are
  • Check adjacent values for equality
  • Destructively remove any duplicate values after the first occurrence
  • Get the final array length and return it

This is a relatively simple example problem, however there is a gotcha lurking in it: lots of iteration methods don't play nicely with removing elements from the array while you're iterating through the array, because the index values change. You may end up skipping a duplicate because the pointer incremented over it.

This gotcha indicates that we'll want to use an approach that gives us explicit control of iteration.

In a more involved problem, we might extract some or all of these components into helper functions, allowing us to write a clear and succinct solution, as well as test the validity of each of our sub-parts separately.

Pseudocode the Solution

If we've clearly grasped the problem, identified the core tasks and hopefully spotted the flaws in our own assumptions and any gotchas, we write out a human-legible description of what our approach will be. Hopefully once we've done that we can cleanly transform it into working code.

How you write pseudocode is up to you. Your notation doesn't need to be perfectly spelled and grammatically correct. It can be a gestural combination of code and words meant to convey meaning. Your pseudocode will provide a meaningful roadmap that you can refer back to if you find yourself lost deep in the particulars of implementation, so make sure that you record enough to be useful later.

If you're interviewing, this is a great opportunity to walk the interviewer through your intent. And if you're running out of time, you will at least have something on the board that demonstrates your problem solving approach.

Recommendations:

  • start with a function signature: removeDuplicates :: (Array) -> number
  • if whiteboarding, leave plenty of space to write your actual code
  • if using an IDE, write comments and keep them separate from your code so you can reference back later on.
  • write it as a series of steps and use bullet points

Since we're looking for duplicates, that means we need to perform a comparison. We can look ahead of our current position in the array, or we can look behind.

We start by exiting if our array is only 0 or 1 elements in size, partly because these cases already satisfy the conditions of the problem (no duplicates are possible), and partly because they'll break our code if we try to compare the first value with a second that doesn't exist.

We also establish our iteration exit condition, and because we'll be using a look-ahead, we make sure to stop before we reach the last element.

Because we don't move our pointer position until after we've dealt with any duplicates, we should be clear of the shifting indices issue.
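
Pulling those points together, one possible sketch of the plan (my own phrasing, not the only valid one) might read:

  • if the array has 0 or 1 elements, return its length
  • set a pointer at the first element
  • while the pointer is before the last element:
      • while a next value exists and equals the value at the pointer, remove that next value
      • advance the pointer
  • return the array's length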

Step Through the Sample Data

Take a moment and mentally run some of the sample data through our pseudocode:

Are we missing anything?

There may be an issue with the last example: [1, 1, 1, 1, 1]. What happens if we remove all of the duplicates, and then try to move on to the next element in our array without checking to see if there are any?

We'll want to make sure that our end condition catches any changes in the array length.

Time for rubber to meet the road. This is where you have all of the assumptions you didn't even know you made come back to haunt you. The better you were able to plan, the fewer there will be.

Personally I like to put in my return values first. That way I'm clear on what my goal is, and I've also captured the first case of empty or single element arrays.

Yup, we're going with a standard for-loop. I prefer not to use them if there's more appropriate or cleaner syntax, but for this particular problem, we need the ability to control our iteration.

And that works out of the box, except for:

Turns out the existence check I snuck into the while loop resolves to falsy if the array value is 0. Thanks JavaScript! So let's just rework that real quick and do a look-behind instead of a look-ahead, which actually cleans the code up a tad as well:
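
A look-behind version along those lines might look like the sketch below (an illustration of the approach described, not necessarily the post's exact code):

function removeDuplicates(nums) {
  // Empty and single-element arrays are already duplicate-free.
  if (nums.length <= 1) return nums.length;

  for (let i = 1; i < nums.length; i++) {
    // Look behind: while the current value repeats the previous one,
    // remove it in place; the next element slides into position i.
    while (i < nums.length && nums[i] === nums[i - 1]) {
      nums.splice(i, 1);
    }
  }
  return nums.length;
}

removeDuplicates([1, 1, 2]);       // 2, array is now [1, 2]
removeDuplicates([1, 1, 1, 1, 1]); // 1, array is now [1]
removeDuplicates([0, 0, 1]);       // 2, a zero no longer trips a falsy check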

And that passes. It's a memory-efficient solution: we've defined only one variable besides the array reference. And it is of average speed, which we could improve upon.

But mostly it was a simple example of a process:

  • Write examples
  • Break into small problems
  • Outline in pseudocode
  • Step through pseudocode with examples

Algorithm Patterns

Aside from specific data structures and algorithms which have known and fairly standardized approaches, algorithm challenges tend to fall into categories that suggest similar solution approaches. Learning these approaches gives you a foothold into the problem.

Multiple Pointers

When we first learn to iterate through a collection, typically an array, we do so with a single pointer with an index going from the lowest value to the highest. This works for some operations and is simple to consider and code. However, for problems involving comparing multiple elements, particularly ones where their position in the collection is important, finding corresponding values with a single pointer requires iterating through the array at least once for each value, an O(n²) operation.

If instead we use multiple pointers, we can potentially reduce the computation to an O(n) operation.

There are two common strategies: Two Pointer and Sliding Window.

Two Pointer

Why not start at both ends and work your way in? Or start at a value or pair of values and expand outwards. This is a great approach for finding the largest sequence in a collection.

Because you are handling two points, you will need to define a rule to ensure that they do not cross over each other.
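
As an illustration (my own example, not from the original post), a classic two-pointer use is checking whether a sorted array contains a pair that sums to a target value:

// Two pointers moving inward from both ends of a sorted array.
// Returns the matching pair, or null if none exists.
function pairWithSum(sorted, target) {
  let left = 0;
  let right = sorted.length - 1;
  while (left < right) {                // the rule: the pointers must not cross
    const sum = sorted[left] + sorted[right];
    if (sum === target) return [sorted[left], sorted[right]];
    if (sum < target) left++;           // need a bigger sum: move the left pointer up
    else right--;                       // need a smaller sum: move the right pointer down
  }
  return null;
}

pairWithSum([1, 3, 4, 6, 9], 10); // [1, 9]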

Sliding Window

Instead of placing two points at the outer bounds, we can march through our array sequentially moving two pointers in parallel. The width of our window may grow or shrink according to the problem set, but it continues to progress across the collection, capturing a snapshot of whatever sequence best fits the desired outcome.
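
For example (again, an illustrative sketch rather than the post's code), a fixed-width sliding window can find the largest sum of any k consecutive elements in a single pass:

// Fixed-size sliding window: as the window moves one position to the right,
// add the element entering the window and subtract the element leaving it.
function maxSubarraySum(nums, k) {
  if (nums.length < k) return null;

  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += nums[i];   // sum of the first window

  let best = windowSum;
  for (let i = k; i < nums.length; i++) {
    windowSum += nums[i] - nums[i - k];               // slide the window
    best = Math.max(best, windowSum);
  }
  return best;
}

maxSubarraySum([2, 6, 9, 2, 1, 8, 5, 6, 3], 3); // 19 (8 + 5 + 6)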

Divide and Conquer

Divide and Conquer often involves a recursive approach: applying the same rule to divide a collection until you've broken it down into its smallest components and can identify the answer.

Binary Search and Merge Sort are both excellent examples of recursive subdivision leading to a solution.
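
A minimal binary search sketch (illustrative, not taken from the post) shows the pattern: every comparison discards half of the remaining candidates, so the search space shrinks logarithmically:

// Binary search on a sorted array: each step halves the remaining range,
// giving O(log n) running time.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;    // found it
    if (sorted[mid] < target) low = mid + 1;   // answer can only be in the right half
    else high = mid - 1;                       // answer can only be in the left half
  }
  return -1; // not present
}

binarySearch([1, 3, 5, 7, 9, 11], 7); // 3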

O(1) Lookup: Object/Dictionary/Hash

Hashed key:value stores, called Objects, Dictionaries or Hashes depending upon your coding language, are incredibly useful tools for storing information when counting frequency, checking for duplicates, or looking up the complement of an answer, since lookups take constant time on average.

You may store the value, or you may store the value that you are looking for instead. For example, when looking for zero-sum pairs in an array, we can save the complement rather than the value itself.
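
For instance (an illustrative sketch, not from the original post), we can detect a zero-sum pair in a single pass by storing each value's complement in a Set:

// One pass with O(1) average lookups: store the complement we still need,
// and report success when a later value matches one of them.
function hasZeroSumPair(nums) {
  const wanted = new Set();           // complements we are still hoping to see
  for (const n of nums) {
    if (wanted.has(n)) return true;   // n completes an earlier value's pair
    wanted.add(-n);                   // store the complement, not the value itself
  }
  return false;
}

hasZeroSumPair([3, -1, 4, 1]); // true  (-1 + 1 === 0)
hasZeroSumPair([2, 5, 7]);     // false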

Further reading:

  • Fundamentals of Algorithmic Problem Solving (video)
  • Algorithmic Problem Solving for Programmers
  • Top 10 Algorithms in Interview Questions
  • Improving your Algorithms and Data Structure Skills



Chapter 9. Cognition

Problem-Solving: Heuristics and Algorithms

Dinesh Ramoo


Learning Objectives

By the end of this section, you will be able to:

  • Describe the differences between heuristics and algorithms in information processing

When faced with a problem to solve, should you go with intuition or with more measured, logical reasoning? Obviously, we use both of these approaches. Some of the decisions we make are rapid, emotional, and automatic. Daniel Kahneman (2011) calls this “fast” thinking. By definition, fast thinking saves time. For example, you may quickly decide to buy something because it is on sale; your fast brain has perceived a bargain, and you go for it quickly. On the other hand, “slow” thinking requires more effort; applying this in the same scenario might cause us not to buy the item because we have reasoned that we don’t really need it, that it is still too expensive, and so on. Using slow and fast thinking does not guarantee good decision-making if they are employed at the wrong time. Sometimes it is not clear which is called for, because many decisions have a level of uncertainty built into them. In this section, we will explore some of the applications of these tendencies to think fast or slow.

We will look further into our thought processes, more specifically, into some of the problem-solving strategies that we use. Heuristics are information-processing strategies that are useful in many cases but may lead to errors when misapplied. A heuristic is a principle with broad application, essentially an educated guess about something. We use heuristics all the time, for example, when deciding what groceries to buy from the supermarket, when deciding what to wear before going out, when choosing the best route to drive through town to avoid traffic congestion, and so on. Heuristics can be thought of as aids to decision making; they allow us to reach a solution without a lot of cognitive effort or time.

The benefit of heuristics in helping us reach decisions fairly easily is also the potential downfall: the solution provided by the use of heuristics is not necessarily the best one. Let’s consider some of the most frequently applied, and misapplied, heuristics in Table CO.3 below.

Table CO.3. Heuristics that pose threats to accuracy.
  • Representativeness: a judgment that something that is more representative of its category is more likely to occur. Threat to accuracy: we may overestimate the likelihood that a person belongs to a particular category because they resemble our prototype of that category.
  • Availability: a judgment that what comes easily to mind is common. Threat to accuracy: we may overestimate the crime statistics in our own area because these crimes are so easy to recall.
  • Anchoring and adjustment: a tendency to use a given starting point as the basis for a subsequent judgment. Threat to accuracy: we may be swayed towards or away from decisions based on the starting point, which may be inaccurate.

In many cases, we base our judgments on information that seems to represent, or match, what we expect will happen, while ignoring other potentially more relevant statistical information. When we do so, we are using the representativeness heuristic . Consider, for instance, the data presented in the lists below. Let’s say that you went to a hospital, and you checked the records of the babies that were born on that given day. Which pattern of births do you think you are most likely to find?

List A:

  • 6:31 a.m. – Girl
  • 8:15 a.m. – Girl
  • 9:42 a.m. – Girl
  • 1:13 p.m. – Girl
  • 3:39 p.m. – Boy
  • 5:12 p.m. – Boy
  • 7:42 p.m. – Boy
  • 11:44 p.m. – Boy

List B:

  • 6:31 a.m. – Boy
  • 9:42 a.m. – Boy
  • 3:39 p.m. – Girl
  • 7:42 p.m. – Girl

Using the representativeness heuristic may lead us to incorrectly believe that some patterns of observed events are more likely to have occurred than others.

Most people think that list B is more likely, probably because list B looks more random, and matches — or is “representative of” — our ideas about randomness, but statisticians know that any pattern of four girls and four boys is mathematically equally likely. Whether a boy or girl is born first has no bearing on what sex will be born second; these are independent events, each with a 50:50 chance of being a boy or a girl. The problem is that we have a schema of what randomness should be like, which does not always match what is mathematically the case. Similarly, people who see a flipped coin come up “heads” five times in a row will frequently predict, and perhaps even wager money, that “tails” will be next. This behaviour is known as the gambler’s fallacy. Mathematically, the gambler’s fallacy is an error: the likelihood of any single coin flip being “tails” is always 50%, regardless of how many times it has come up “heads” in the past. Think of how people choose lottery numbers. People who follow the lottery often have the impression that numbers that were chosen before are unlikely to reoccur even though the selection of numbers is random and not dependent on previous events. But the gambler’s fallacy is so strong in most people that they follow this logic all the time.

The representativeness heuristic may explain why we judge people on the basis of appearance. Suppose you meet your new next-door neighbour, who drives a loud motorcycle, has many tattoos, wears leather, and has long hair. Later, you try to guess their occupation. What comes to mind most readily? Are they a teacher? Insurance salesman? IT specialist? Librarian? Drug dealer? The representativeness heuristic will lead you to compare your neighbour to the prototypes you have for these occupations and choose the one that they seem to represent the best. Thus, your judgment is affected by how much your neighbour seems to resemble each of these groups. Sometimes these judgments are accurate, but they often fail because they do not account for base rates , which is the actual frequency with which these groups exist. In this case, the group with the lowest base rate is probably drug dealer.

Our judgments can also be influenced by how easy it is to retrieve a memory. The tendency to make judgments of the frequency or likelihood that an event occurs on the basis of the ease with which it can be retrieved from memory is known as the availability heuristic (MacLeod & Campbell, 1992; Tversky & Kahneman, 1973). Imagine, for instance, that I asked you to indicate whether there are more words in the English language that begin with the letter “R” or that have the letter “R” as the third letter. You would probably answer this question by trying to think of words that have each of the characteristics, thinking of all the words you know that begin with “R” and all that have “R” in the third position. Because it is much easier to retrieve words by their first letter than by their third, we may incorrectly guess that there are more words that begin with “R,” even though there are in fact more words that have “R” as the third letter.

The availability heuristic may explain why we tend to overestimate the likelihood of crimes or disasters; those that are reported widely in the news are more readily imaginable, and therefore, we tend to overestimate how often they occur. Things that we find easy to imagine, or to remember from watching the news, are estimated to occur frequently. Anything that gets a lot of news coverage is easy to imagine. Availability bias does not just affect our thinking. It can change behaviour. For example, homicides are usually widely reported in the news, leading people to make inaccurate assumptions about the frequency of murder. In Canada, the murder rate has dropped steadily since the 1970s (Statistics Canada, 2018), but this information tends not to be reported, leading people to overestimate the probability of being affected by violent crime. In another example, doctors who recently treated patients suffering from a particular condition were more likely to diagnose the condition in subsequent patients because they overestimated the prevalence of the condition (Poses & Anthony, 1991). After the attack on 9/11, more people died from car accidents because more people drove instead of flying in a plane. Even though the attack was an isolated incident and even though flying is statistically safer than driving, the mere fact of seeing the attack on television again and again made people think that flying was more dangerous than it is.

The anchoring and adjustment heuristic is another example of how fast thinking can lead to a decision that might not be optimal. Anchoring and adjustment is easily seen when we are faced with buying something that does not have a fixed price. For example, if you are interested in a used car, and the asking price is $10,000, what price do you think you might offer? Using $10,000 as an anchor, you are likely to adjust your offer from there, and perhaps offer $9000 or $9500. Never mind that $10,000 may not be a reasonable anchoring price. Anchoring and adjustment happens not just when we’re buying something. It can also be used in any situation that calls for judgment under uncertainty, such as sentencing decisions in criminal cases (Bennett, 2014), and it applies to groups as well as individuals (Rutledge, 1993).

In contrast to heuristics, which can be thought of as problem-solving strategies based on educated guesses, algorithms are problem-solving strategies that use rules. Algorithms are generally a logical set of steps that, if applied correctly, should be accurate. For example, you could make a cake using heuristics — relying on your previous baking experience and guessing at the number and amount of ingredients, baking time, and so on — or using an algorithm. The latter would require a recipe which would provide step-by-step instructions; the recipe is the algorithm. Unless you are an extremely accomplished baker, the algorithm should provide you with a better cake than using heuristics would. While heuristics offer a solution that might be correct, a correctly applied algorithm is guaranteed to provide a correct solution. Of course, not all problems can be solved by algorithms.

As with heuristics, the use of algorithmic processing interacts with behaviour and emotion. Understanding what strategy might provide the best solution requires knowledge and experience. As we will see in the next section, we are prone to a number of cognitive biases that persist despite knowledge and experience.


Problem-Solving: Heuristics and Algorithms Copyright © 2024 by Dinesh Ramoo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Chapter: Introduction to the Design and Analysis of Algorithms

Fundamentals of Algorithmic Problem Solving

Let us start by reiterating an important point made in the introduction to this chapter:

We can consider algorithms to be procedural solutions to problems.

These solutions are not answers but specific instructions for getting answers. It is this emphasis on precisely defined constructive procedures that makes computer science distinct from other disciplines. In particular, this distinguishes it from theoretical mathematics, whose practitioners are typically satisfied with just proving the existence of a solution to a problem and, possibly, investigating the solution’s properties.

We now list and briefly discuss a sequence of steps one typically goes through in designing and analyzing an algorithm (Figure 1.2).

Understanding the Problem

From a practical perspective, the first thing you need to do before designing an algorithm is to understand completely the problem given. Read the problem’s description carefully and ask questions if you have any doubts about the problem, do a few small examples by hand, think about special cases, and ask questions again if needed.

There are a few types of problems that arise in computing applications quite often. We review them in the next section. If the problem in question is one of them, you might be able to use a known algorithm for solving it. Of course, it helps to understand how such an algorithm works and to know its strengths and weaknesses, especially if you have to choose among several available algorithms. But often you will not find a readily available algorithm and will have to design your own. The sequence of steps outlined in this section should help you in this exciting but not always easy task.

An input to an algorithm specifies an instance of the problem the algorithm solves. It is very important to specify exactly the set of instances the algorithm needs to handle. (As an example, recall the variations in the set of instances for the three greatest common divisor algorithms discussed in the previous section.) If you fail to do this, your algorithm may work correctly for a majority of inputs but crash on some “boundary” value. Remember that a correct algorithm is not one that works most of the time, but one that works correctly for all legitimate inputs.

Do not skimp on this first step of the algorithmic problem-solving process; otherwise, you will run the risk of unnecessary rework.

Ascertaining the Capabilities of the Computational Device

Once you completely understand a problem, you need to ascertain the capabilities of the computational device the algorithm is intended for. The vast majority of algorithms in use today are still destined to be programmed for a computer closely resembling the von Neumann machine—a computer architecture outlined by the prominent Hungarian-American mathematician John von Neumann (1903–1957), in collaboration with A. Burks and H. Goldstine, in 1946. The essence of this architecture is captured by the so-called random-access machine (RAM). Its central assumption is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms.

The central assumption of the RAM model does not hold for some newer computers that can execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called parallel algorithms . Still, studying the classic techniques for design and analysis of algorithms under the RAM model remains the cornerstone of algorithmics for the foreseeable future.

Should you worry about the speed and amount of memory of a computer at your disposal? If you are designing an algorithm as a scientific exercise, the answer is a qualified no. As you will see in Section 2.1, most computer scientists prefer to study algorithms in terms independent of specification parameters for a particular computer. If you are designing an algorithm as a practical tool, the answer may depend on a problem you need to solve. Even the “slow” computers of today are almost unimaginably fast. Consequently, in many situations you need not worry about a computer being too slow for the task. There are important problems, however, that are very complex by their nature, or have to process huge volumes of data, or deal with applications where the time is critical. In such situations, it is imperative to be aware of the speed and memory available on a particular computer system.

Choosing between Exact and Approximate Problem Solving

The next principal decision is to choose between solving the problem exactly or solving it approximately. In the former case, an algorithm is called an exact algorithm; in the latter case, an algorithm is called an approximation algorithm. Why would one opt for an approximation algorithm? First, there are important problems that simply cannot be solved exactly for most of their instances; examples include extracting square roots, solving nonlinear equations, and evaluating definite integrals. Second, available algorithms for solving a problem exactly can be unacceptably slow because of the problem’s intrinsic complexity. This happens, in particular, for many problems involving a very large number of choices; you will see examples of such difficult problems in Chapters 3, 11, and 12. Third, an approximation algorithm can be a part of a more sophisticated algorithm that solves a problem exactly.

Algorithm Design Techniques

Now, with all the components of the algorithmic problem solving in place, how do you design an algorithm to solve a given problem? This is the main question this book seeks to answer by teaching you several general design techniques.

What is an algorithm design technique?

An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

Check this book’s table of contents and you will see that a majority of its chapters are devoted to individual design techniques. They distill a few key ideas that have proven to be useful in designing algorithms. Learning these techniques is of utmost importance for the following reasons.

First, they provide guidance for designing algorithms for new problems, i.e., problems for which there is no known satisfactory algorithm. Therefore—to use the language of a famous proverb—learning such techniques is akin to learning to fish as opposed to being given a fish caught by somebody else. It is not true, of course, that each of these general techniques will be necessarily applicable to every problem you may encounter. But taken together, they do constitute a powerful collection of tools that you will find quite handy in your studies and work.

Second, algorithms are the cornerstone of computer science. Every science is interested in classifying its principal subject, and computer science is no exception. Algorithm design techniques make it possible to classify algorithms according to an underlying design idea; therefore, they can serve as a natural way to both categorize and study algorithms.

Designing an Algorithm and Data Structures

While the algorithm design techniques do provide a powerful set of general approaches to algorithmic problem solving, designing an algorithm for a particular problem may still be a challenging task. Some design techniques can be simply inapplicable to the problem in question. Sometimes, several techniques need to be combined, and there are algorithms that are hard to pinpoint as applications of the known design techniques. Even when a particular design technique is applicable, getting an algorithm often requires a nontrivial ingenuity on the part of the algorithm designer. With practice, both tasks—choosing among the general techniques and applying them—get easier, but they are rarely easy.

Of course, one should pay close attention to choosing data structures appropriate for the operations performed by the algorithm. For example, the sieve of Eratosthenes introduced in Section 1.1 would run longer if we used a linked list instead of an array in its implementation (why?). Also note that some of the algorithm design techniques discussed in Chapters 6 and 7 depend intimately on structuring or restructuring data specifying a problem’s instance. Many years ago, an influential textbook proclaimed the fundamental importance of both algorithms and data structures for computer programming by its very title: Algorithms + Data Structures = Programs [Wir76]. In the new world of object-oriented programming, data structures remain crucially important for both design and analysis of algorithms. We review basic data structures in Section 1.4.
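
To see why the array matters for the sieve (a hedged illustration, not the book's own code), note that marking the multiples of each prime relies on jumping straight to an index, which an array supports in constant time while a linked list would have to be walked element by element:

// Sieve of Eratosthenes over a plain array: isPrime[j] is an O(1) random access.
function sieve(n) {
  const isPrime = new Array(n + 1).fill(true);
  isPrime[0] = isPrime[1] = false;
  for (let i = 2; i * i <= n; i++) {
    if (isPrime[i]) {
      for (let j = i * i; j <= n; j += i) isPrime[j] = false; // mark multiples of i
    }
  }
  const primes = [];
  for (let k = 2; k <= n; k++) if (isPrime[k]) primes.push(k);
  return primes;
}

sieve(20); // [2, 3, 5, 7, 11, 13, 17, 19]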

Methods of Specifying an Algorithm

Once you have designed an algorithm, you need to specify it in some fashion. In Section 1.1, to give you an example, Euclid’s algorithm is described in words (in a free and also a step-by-step form) and in pseudocode. These are the two options that are most widely used nowadays for specifying algorithms.

Using a natural language has an obvious appeal; however, the inherent ambiguity of any natural language makes a succinct and clear description of algorithms surprisingly difficult. Nevertheless, being able to do this is an important skill that you should strive to develop in the process of learning algorithms.

Pseudocode is a mixture of a natural language and programming language-like constructs. Pseudocode is usually more precise than natural language, and its usage often yields more succinct algorithm descriptions. Surprisingly, computer scientists have never agreed on a single form of pseudocode, leaving textbook authors with a need to design their own “dialects.” Fortunately, these dialects are so close to each other that anyone familiar with a modern programming language should be able to understand them all.

This book’s dialect was selected to cause minimal difficulty for a reader. For the sake of simplicity, we omit declarations of variables and use indentation to show the scope of such statements as for, if, and while. As you saw in the previous section, we use an arrow “←” for the assignment operation and two slashes “//” for comments.

In the earlier days of computing, the dominant vehicle for specifying algorithms was a flowchart, a method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm’s steps. This representation technique has proved to be inconvenient for all but very simple algorithms; nowadays, it can be found only in old algorithm books.

The state of the art of computing has not yet reached a point where an algorithm’s description—be it in a natural language or pseudocode—can be fed into an electronic computer directly. Instead, it needs to be converted into a computer program written in a particular computer language. We can look at such a program as yet another way of specifying the algorithm, although it is preferable to consider it as the algorithm’s implementation.

Proving an Algorithm’s Correctness

Once an algorithm has been specified, you have to prove its correctness. That is, you have to prove that the algorithm yields a required result for every legitimate input in a finite amount of time. For example, the correctness of Euclid’s algorithm for computing the greatest common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n) (which, in turn, needs a proof; see Problem 7 in Exercises 1.1), the simple observation that the second integer gets smaller on every iteration of the algorithm, and the fact that the algorithm stops when the second integer becomes 0.
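
To connect that argument to code, here is a small illustrative implementation (my own sketch, not the book's pseudocode); the comments restate the three facts the correctness argument rests on:

// Euclid's algorithm for gcd(m, n).
//   1. gcd(m, n) === gcd(n, m % n)       -> each iteration preserves the answer
//   2. the second number strictly decreases -> the loop must terminate
//   3. gcd(m, 0) === m                    -> the value returned when the loop stops
function gcd(m, n) {
  while (n !== 0) {
    [m, n] = [n, m % n];
  }
  return m;
}

gcd(60, 24); // 12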

For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex. A common technique for proving correctness is to use mathematical induction because an algorithm’s iterations provide a natural sequence of steps needed for such proofs. It might be worth mentioning that although tracing the algorithm’s performance for a few specific inputs can be a very worthwhile activity, it cannot prove the algorithm’s correctness conclusively. But in order to show that an algorithm is incorrect, you need just one instance of its input for which the algorithm fails.

The notion of correctness for approximation algorithms is less straightforward than it is for exact algorithms. For an approximation algorithm, we usually would like to be able to show that the error produced by the algorithm does not exceed a predefined limit. You can find examples of such investigations in Chapter 12.

Analyzing an Algorithm

We usually want our algorithms to possess several qualities. After correctness, by far the most important is efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs, and space efficiency, indicating how much extra memory it uses. A general framework and specific techniques for analyzing an algorithm’s efficiency appear in Chapter 2.

Another desirable characteristic of an algorithm is simplicity. Unlike efficiency, which can be precisely defined and investigated with mathematical rigor, simplicity, like beauty, is to a considerable degree in the eye of the beholder. For example, most people would agree that Euclid’s algorithm is simpler than the middle-school procedure for computing gcd(m, n), but it is not clear whether Euclid’s algorithm is simpler than the consecutive integer checking algorithm. Still, simplicity is an important algorithm characteristic to strive for. Why? Because simpler algorithms are easier to understand and easier to program; consequently, the resulting programs usually contain fewer bugs. There is also the undeniable aesthetic appeal of simplicity. Sometimes simpler algorithms are also more efficient than more complicated alternatives. Unfortunately, it is not always true, in which case a judicious compromise needs to be made.

Yet another desirable characteristic of an algorithm is generality. There are, in fact, two issues here: generality of the problem the algorithm solves and the set of inputs it accepts. On the first issue, note that it is sometimes easier to design an algorithm for a problem posed in more general terms. Consider, for example, the problem of determining whether two integers are relatively prime, i.e., whether their only common divisor is equal to 1. It is easier to design an algorithm for a more general problem of computing the greatest common divisor of two integers and, to solve the former problem, check whether the gcd is 1 or not. There are situations, however, where designing a more general algorithm is unnecessary or difficult or even impossible. For example, it is unnecessary to sort a list of n numbers to find its median, which is its n/2-th smallest element. To give another example, the standard formula for roots of a quadratic equation cannot be generalized to handle polynomials of arbitrary degrees.

As to the set of inputs, your main concern should be designing an algorithm that can handle a set of inputs that is natural for the problem at hand. For example, excluding integers equal to 1 as possible inputs for a greatest common divisor algorithm would be quite unnatural. On the other hand, although the standard formula for the roots of a quadratic equation holds for complex coefficients, we would normally not implement it on this level of generality unless this capability is explicitly required.

If you are not satisfied with the algorithm’s efficiency, simplicity, or generality, you must return to the drawing board and redesign the algorithm. In fact, even if your evaluation is positive, it is still worth searching for other algorithmic solutions. Recall the three different algorithms in the previous section for computing the greatest common divisor: generally, you should not expect to get the best algorithm on the first try. At the very least, you should try to fine-tune the algorithm you already have. For example, we made several improvements in our implementation of the sieve of Eratosthenes compared with its initial outline in Section 1.1. (Can you identify them?) You will do well if you keep in mind the following observation of Antoine de Saint-Exupéry, the French writer, pilot, and aircraft designer: “A designer knows he has arrived at perfection not when there is no longer anything to add, but when there is no longer anything to take away.”

Coding an Algorithm

Most algorithms are destined to be ultimately implemented as computer programs. Programming an algorithm presents both a peril and an opportunity. The peril lies in the possibility of making the transition from an algorithm to a program either incorrectly or very inefficiently. Some influential computer scientists strongly believe that unless the correctness of a computer program is proven with full mathematical rigor, the program cannot be considered correct. They have developed special techniques for doing such proofs (see [Gri81]), but the power of these techniques of formal verification is limited so far to very small programs.

As a practical matter, the validity of programs is still established by testing. Testing of computer programs is an art rather than a science, but that does not mean that there is nothing in it to learn. Look up books devoted to testing and debugging; even more important, test and debug your program thoroughly whenever you implement an algorithm.

Also note that throughout the book, we assume that inputs to algorithms belong to the specified sets and hence require no verification. When implementing algorithms as programs to be used in actual applications, you should provide such verifications.

Of course, implementing an algorithm correctly is necessary but not sufficient: you would not like to diminish your algorithm’s power by an inefficient implementation. Modern compilers do provide a certain safety net in this regard, especially when they are used in their code optimization mode. Still, you need to be aware of such standard tricks as computing a loop’s invariant (an expression that does not change its value) outside the loop, collecting common subexpressions, replacing expensive operations by cheap ones, and so on. (See [Ker99] and [Ben00] for a good discussion of code tuning and other issues related to algorithm programming.) Typically, such improvements can speed up a program only by a constant factor, whereas a better algorithm can make a difference in running time by orders of magnitude. But once an algorithm is selected, a 10–50% speedup may be worth an effort.

A working program provides an additional opportunity in allowing an empirical analysis of the underlying algorithm. Such an analysis is based on timing the program on several inputs and then analyzing the results obtained. We discuss the advantages and disadvantages of this approach to analyzing algorithms in Section 2.6.

In conclusion, let us emphasize again the main lesson of the process depicted in Figure 1.2:

As a rule, a good algorithm is a result of repeated effort and rework.

Even if you have been fortunate enough to get an algorithmic idea that seems perfect, you should still try to see whether it can be improved.

Actually, this is good news since it makes the ultimate result so much more enjoyable. (Yes, I did think of naming this book The Joy of Algorithms .) On the other hand, how does one know when to stop? In the real world, more often than not a project’s schedule or the impatience of your boss will stop you. And so it should be: perfection is expensive and in fact not always called for. Designing an algorithm is an engineering-like activity that calls for compromises among competing goals under the constraints of available resources, with the designer’s time being one of the resources.

In the academic world, the question leads to an interesting but usually difficult investigation of an algorithm’s optimality. Actually, this question is not about the efficiency of an algorithm but about the complexity of the problem it solves: What is the minimum amount of effort any algorithm will need to exert to solve the problem? For some problems, the answer to this question is known. For example, any algorithm that sorts an array by comparing values of its elements needs about n log₂ n comparisons for some arrays of size n (see Section 11.2). But for many seemingly easy problems such as integer multiplication, computer scientists do not yet have a final answer.

Another important issue of algorithmic problem solving is the question of whether or not every problem can be solved by an algorithm. We are not talking here about problems that do not have a solution, such as finding real roots of a quadratic equation with a negative discriminant. For such cases, an output indicating that the problem does not have a solution is all we can and should expect from an algorithm. Nor are we talking about ambiguously stated problems. Even some unambiguous problems that must have a simple yes or no answer are “undecidable,” i.e., unsolvable by any algorithm. An important example of such a problem appears in Section 11.3. Fortunately, a vast majority of problems in practical computing can be solved by an algorithm.

Before leaving this section, let us be sure that you do not have the misconception—possibly caused by the somewhat mechanical nature of the diagram of Figure 1.2—that designing an algorithm is a dull activity. There is nothing further from the truth: inventing (or discovering?) algorithms is a very creative and rewarding process. This book is designed to convince you that this is the case.

Exercises 1.2

             Old World puzzle A peasant finds himself on a riverbank with a wolf, a goat, and a head of cabbage. He needs to transport all three to the other side of the river in his boat. However, the boat has room for only the peasant himself and one other item (either the wolf, the goat, or the cabbage). In his absence, the wolf would eat the goat, and the goat would eat the cabbage. Solve this problem for the peasant or prove it has no solution. (Note: The peasant is a vegetarian but does not like cabbage and hence can eat neither the goat nor the cabbage to help him solve the problem. And it goes without saying that the wolf is a protected species.)

            New World puzzle There are four people who want to cross a rickety bridge; they all begin on the same side. You have 17 minutes to get them all across to the other side. It is night, and they have one flashlight. A maximum of two people can cross the bridge at one time. Any party that crosses, either one or two people, must have the flashlight with them. The flashlight must be walked back and forth; it cannot be thrown, for example. Person 1 takes 1 minute to cross the bridge, person 2 takes 2 minutes, person 3 takes 5 minutes, and person 4 takes 10 minutes. A pair must walk together at the rate of the slower person’s pace. (Note: According to a rumor on the Internet, interviewers at a well-known software company located near Seattle have given this problem to interviewees.)

3. Which of the following formulas can be considered an algorithm for computing the area of a triangle whose side lengths are given positive numbers a, b, and c?


4. Write pseudocode for an algorithm for finding real roots of the equation ax² + bx + c = 0 for arbitrary real coefficients a, b, and c. (You may assume the availability of the square root function sqrt(x).)

5. Describe the standard algorithm for finding the binary representation of a positive decimal integer:

a. in English.

b. in pseudocode.

6. Describe the algorithm used by your favorite ATM machine in dispensing cash. (You may give your description in either English or pseudocode, whichever you find more convenient.)

7. a. Can the problem of computing the number π be solved exactly?

b. How many instances does this problem have?

c. Look up an algorithm for this problem on the Internet.

8. Give an example of a problem other than computing the greatest common divisor for which you know more than one algorithm. Which of them is simpler? Which is more efficient?

9. Consider the following algorithm for finding the distance between the two closest elements in an array of numbers.

ALGORITHM MinDistance(A[0..n − 1])
//Input: Array A[0..n − 1] of numbers
//Output: Minimum distance between two of its elements
dmin ← ∞
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        if i ≠ j and |A[i] − A[j]| < dmin
            dmin ← |A[i] − A[j]|
return dmin

Make as many improvements as you can in this algorithmic solution to the problem. If you need to, you may change the algorithm altogether; if not, improve the implementation given.
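
As a hint at the kind of improvement the exercise is after, here is one possible rewrite sketched in Python rather than pseudocode: sorting first means only adjacent elements need to be compared. This is a sketch of one possible answer, not the book's official solution.

```python
def min_distance(a):
    """Minimum distance between two elements of a (improved version).

    Sorting first means only adjacent elements need to be compared,
    reducing the work from O(n^2) comparisons to O(n log n) overall.
    """
    b = sorted(a)
    dmin = float("inf")
    for i in range(len(b) - 1):
        d = b[i + 1] - b[i]
        if d < dmin:
            dmin = d
    return dmin

print(min_distance([9, 1, 7, 4, 12]))  # 2 (the gap between 7 and 9)
```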

10. One of the most influential books on problem solving, titled How To Solve It [Pol57], was written by the Hungarian-American mathematician George Pólya (1887–1985). Pólya summarized his ideas in a four-point summary. Find this summary on the Internet or, better yet, in his book, and compare it with the plan outlined in Section 1.2. What do they have in common? How are they different?


Examples of Algorithmic Thinking

by Lcom Team | Aug 23, 2022 | Blogs


“Effective algorithms make assumptions, show a bias toward simple solutions, trade off the costs of error against the cost of delay, and take chances.” – Brian Christian and Tom Griffiths, Algorithms to Live By (2016)

An algorithm is a process or formula for calculating answers, sorting data and automating tasks. Algorithmic thinking, then, is the construction of an algorithm, or step-by-step process for solving a problem and similar problems. This solution, by definition, should be replicable by humans or computers.

Algorithmic thinking is a derivative of computer science and the process used to develop code and program applications. This approach automates the problem-solving process by creating a series of systematic, logical steps that take in a defined set of inputs and produce a defined set of outputs based on them.

In other words, algorithmic thinking is not about solving for a specific answer; it is about building a sequential, complete, and replicable process with an end point – an algorithm. Designing an algorithm helps students to both communicate and interpret clear instructions for a predictable, reliable output. This is the crux of computational thinking.

And like computational thinking and its other elements we’ve discussed, algorithms are something we experience regularly in our lives.

Examples of Algorithmic Thinking in Everyday Life

Example of algorithmic thinking: a flowchart detailing the steps for making pancakes using pancake mix.

If you’re an amateur chef or a frozen meal aficionado, you follow recipes and directions for preparing food, and that’s an algorithm.

When you’re feeling groovy and bust out in a dance routine – maybe the Cha Cha Slide, the Macarena, or Flossing – you are also following a routine that emulates an algorithm while simultaneously being really cool.

Outlining a process for checking out books in a school library or instructions for cleaning up at the end of the day are examples of algorithmic thinking and letting your inner computer scientist shine in everyday life. 

Examples of Algorithms in Curriculum

Beginning to develop students’ algorithmic prowess, however, does not require formal practice with coding or even access to technology. Have students map directions for a peer to navigate a maze, create visual flowcharts for tasks, or develop a coded language.

To get started, here are ideas for incorporating algorithmic thinking in curriculum for multiple different subjects.

  • English Language Arts: Students map a flow chart detailing steps for determining whether to use a colon or dash in a sentence.
  • Mathematics: In a word problem, students develop a step-by-step process for how they answered a question that can then be applied to similar problems.
  • Science: Students articulate how to classify elements in the periodic table.
  • Social Studies: Students describe a sequence of smaller events in history that precipitated a much larger event.
  • Languages: Students apply new vocabulary and practice speaking skills to direct another student to perform a task, whether it’s ordering coffee at a café or navigating from one point in a classroom to another.
  • Arts: Students create instructions for drawing a picture that another student then has to use to recreate the image.

Examples of Algorithms in Computer Science

These are obviously more elementary examples; algorithms – especially those used in coding – are often far more intricate and complex. To contextualize algorithms in computer science and programming , below are two examples.

Standardized Testing and Algorithms

Coding enables the adaptive technology often leveraged in classrooms today.

For example, the shift to computer-based standardized tests has led to the advent of adaptive assessments that pick questions based on student ability as determined by correct and incorrect answers given.

If students select the correct answer to a question, then the next question is moderately more difficult. But if they answer wrong, then the assessment offers a moderately easier question. This occurs through an iterative algorithm that starts with a pool of questions. After an answer, the pool is adjusted accordingly. This repeats continuously.
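
As a rough sketch only (real assessment engines are considerably more sophisticated), the selection loop might look something like this in Python; the question pool, the 1–10 difficulty scale, and the answer_fn callback are all assumptions for illustration:

```python
def run_adaptive_quiz(questions_by_difficulty, answer_fn, start=5):
    """questions_by_difficulty maps a difficulty level (1-10) to a list of questions.
    answer_fn(question) returns True when the student answers correctly."""
    difficulty = start
    asked = []
    while questions_by_difficulty.get(difficulty):
        question = questions_by_difficulty[difficulty].pop(0)
        asked.append(question)
        if answer_fn(question):                   # correct -> slightly harder
            difficulty = min(10, difficulty + 1)
        else:                                     # wrong -> slightly easier
            difficulty = max(1, difficulty - 1)
    return asked
```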

The Omnipotent Google and Algorithms

Google’s search results are determined (in part) by the PageRank algorithm, which assigns a webpage’s importance based on the number of sites linking to it. So, if we google “What are examples of algorithmic thinking?” we can bet that the chosen pages have some of the most links to them for the topic “What are examples of algorithmic thinking?” It’s still more complicated than this, of course; if you are interested, this article goes into the intricacies of the PageRank algorithm.
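
A toy version of the idea (not Google's production system) can be written as a short power iteration in Python, where links maps each page to the pages it links to:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: each page spreads its rank evenly across the pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:          # dangling page: share its rank with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```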

There are over 1.5 billion websites with billions more pages to count, but thanks to algorithmic thinking we can type just about anything into Google and expect to be delivered a curated list of resources in under a second. This right here is the power of algorithmic thinking.

“The Google algorithm was a significant development. I’ve had thank-you emails from people whose lives have been saved by information on a medical website or who have found the love of their life on a dating website.” – Tim Berners-Lee

In whatever way it’s approached in the classroom, algorithmic thinking encourages students to communicate clearly and logically. Students learn to persevere throughout its multiple iterations, challenges, and solutions.



What is a Greedy Algorithm? Examples of Greedy Algorithms

Tantoluwa Heritage Alabi

According to the Oxford English Dictionary, "greedy" means having excessive desire for something without considering the effect or damage done.

In computer science, a greedy algorithm is an algorithm that builds a solution quickly by always picking the path that seems optimal at the moment, without regard for the overall optimality of the final solution.

Edsger Dijkstra, a computer scientist and mathematician, introduced the term "greedy algorithm" while working on the minimum spanning tree problem. Prim and Kruskal later devised greedy optimization techniques for minimizing the cost of graphs.

Many Greedy algorithms were developed to solve graph problems. A graph is a structure made up of edges and vertices.


Greedy vs Not Greedy Algorithms

An algorithm is greedy when the path picked is regarded as the best option based on a specific criterion without considering future consequences. But it typically evaluates feasibility before making a final decision. The correctness of the solution depends on the problem and criteria used.

Example: A graph has various weights and you are to determine the maximum value in the tree. You'd start by searching each node and checking its weight to see if it is the largest value.

There are two approaches to solving this problem: greedy approach or not greedy.


This graph consists of different weights and we need to find the maximum value. We'll apply the two approaches on the graph to get the solution.

Greedy Approach

In the images below, a graph has different numbers in its vertices and the algorithm is meant to select the vertex with the largest number.

Starting from vertex 6, the algorithm is faced with two choices: which is bigger, 3 or 4? It picks 4, and is then faced with another decision: which is bigger, 14 or 11? It selects 14, and the algorithm ends.

On the other hand, there is a vertex labeled 20, but it is attached to vertex 3, which the greedy algorithm never considered the best choice. This shows how important it is to select an appropriate criterion for each immediate decision.


Not Greedy Approach

The “not greedy” approach checks all options before arriving at a final solution, unlike the "greedy approach" which stops once it gets its results.

Starting from vertex 6, the algorithm is again faced with two choices: which is bigger, 3 or 4? It picks 4, then decides between 14 and 11, selects 14, and keeps it aside.

Then it runs the process again, starting from vertex 6. This time it selects the vertex with 3 and checks it. 20 is attached to vertex 3, so that path stops there. Now it compares the two results – 20 and 14. 20 is bigger, so it selects the path through vertex 3 that carries the largest number, and the process ends.

This approach considers many possibilities in finding the better solution.

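To make the contrast concrete, here is a small Python sketch of both approaches on a tree shaped like the example described above (root 6, children 3 and 4, with 20 below 3 and 14 and 11 below 4); the exact data structure is an assumption for illustration:

```python
# Each node is a pair: (value, list_of_children).
tree = (6, [(3, [(20, [])]),
            (4, [(14, []), (11, [])])])

def greedy_max(node):
    """Always descend into the child with the larger value."""
    value, children = node
    best = value
    while children:
        value, children = max(children, key=lambda child: child[0])
        best = max(best, value)
    return best

def exhaustive_max(node):
    """Check every vertex before answering."""
    value, children = node
    return max([value] + [exhaustive_max(c) for c in children])

print(greedy_max(tree))      # 14 - the greedy path 6 -> 4 -> 14
print(exhaustive_max(tree))  # 20 - found only by looking at every vertex
```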

Characteristics of a Greedy Algorithm

  • The algorithm solves its problem by finding an optimal solution. This solution can be a maximum or minimum value. It makes choices based on the best option available.
  • The algorithm is fast and efficient, typically with a time complexity of O(n log n) or O(n), and is therefore often applied to large-scale problems.
  • The search for an optimal solution is done without repetition – the algorithm runs only once.
  • It is straightforward and easy to implement.

How to Use Greedy Algorithms

Before applying a greedy algorithm to a problem, you need to ask two questions:

  • Do you need the best option at the moment from the problem?
  • Do you need an optimal solution (either minimum or maximum value)?

If your answer to these questions is "Yes", then a greedy algorithm is a good choice to solve your problem.

Let’s assume you have a problem with a set of numbers and you need to find the minimum value.

You start off by defining the constraint, which in this case is finding the minimum value. Each number is then scanned and checked against the constraint, which serves as the condition to be fulfilled. If the condition holds, the number is selected and returned as the final solution.

Here's a flowchart representation of this process:


Greedy Algorithm Examples

Problem 1: Activity Selection Problem

This problem contains a set of activities or tasks that need to be completed. Each one has a start and finish time. The algorithm finds the maximum number of activities that can be done in a given time without them overlapping.

Approach to the Problem

  • We have a list of activities. Each has a start time and finish time.
  • First, we sort the activities (and their start times) in ascending order of their finish times.
  • Then we start by picking the first activity. We create a new list to store the selected activity.
  • To choose the next activity, we compare the finish time of the last selected activity to the start time of the next activity. If the start time of the next activity is greater than or equal to the finish time of the last selected activity, it can be selected. If not, we skip it and check the next one.
  • This process is repeated until all activities are checked. The final solution is a list containing the activities that can be done.

The table below shows a list of activities as well as starting and finish times.

(Table: the activities with their start and finish times.)

The first step is to sort the finish time in ascending order and arrange the activities with respect to the result.

(Table: the activities sorted by finish time.)

After sorting the activities, we select the first activity and store it in the selected activity list. In our example, the first activity is “Homework”.

Moving to the next activity, we check the finish time of “Homework” (5)  which was the last activity selected and the starting time of “Term paper” (4). To pick an activity, the starting time of the next activity must be greater than or equal to the finish time. (4) is less than (5), so we skip the activity and move to the next.

The next activity “Presentation” has a starting time of (6) and it is greater than the finish time (5) of “Homework”. So we select it and add it to our list of selected activities.

For the next activity, we do the same checking. The finish time of “Presentation” is (10) and the starting time of “Volleyball practice” is (10). The starting time is equal to the finish time, which satisfies one of the conditions, so we select it and add it to our list of selected activities.

Continuing to the next activity, the finish time of “Volleyball practice” is (12) and the starting time of “Biology lecture” is (13). The starting time is greater than the finish time, so we select it.

For our last activity, the starting time of “Hangout” is (7) and the finish time of our last selected activity, “Biology lecture”, is (14). 7 is less than 14, so we cannot select this activity. Since we are at the end of our activity list, the process ends.

Our final result is the list of selected activities that we can do without time overlapping: {Homework, Presentation, Volleyball practice, Biology lecture}.

Code Implementation of the Example

The variable data stores the starting times of each activity, the finish time of each activity, and the list of tasks (or activities) to be performed.

The variable selected_activity is an empty list that will store the selected activities that can be performed.

start_position holds the position of the first activity, which is index 0. This will be our starting point.

Here's a dataframe table showing the original data:

Then we sort the finish time in ascending order and rearrange the start time and activity in respect to it. We target the variables by using the keys in the dictionary.

In the code above, we initialized tem to zero. Rather than using the built-in sort method, we use two loops to arrange the finish times in ascending order. i and j are the indexes, and we check whether data['finish_time'][i] is less than data['finish_time'][j].

If the condition is true, tem temporarily stores the element at position i and the corresponding elements are swapped.

Now we print the final result, here's what we get:

And here's a dataframe table showing the sorted data:

After sorting the activities, we start by selecting the first activity, which is “Homework”. It has a starting index of 0, so we use start_position to target the activity and append it to the empty list.

The condition for selecting an activity is that the start time of the next activity is greater than or equal to the finish time of the previously selected activity. If the condition is true, the activity is added to the selected_activity list.

Here's what it looks like all together:
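
The original post's code blocks are not reproduced in this copy of the article, so here is a minimal Python sketch that follows the steps described above. The variable names data, selected_activity, and start_position come from the description; the data values marked "assumed" do not appear in the walkthrough and are only illustrative.

```python
data = {
    "activity":    ["Homework", "Term paper", "Presentation",
                    "Volleyball practice", "Biology lecture", "Hangout"],
    "start_time":  [1, 4, 6, 10, 13, 7],        # 1 is assumed
    "finish_time": [5, 8, 10, 12, 14, 15],      # 8 and 15 are assumed
}

# Sort all three lists by finish time (the article describes two nested loops
# and a temporary variable `tem`; Python's sorted() does the same job here).
order = sorted(range(len(data["finish_time"])),
               key=lambda i: data["finish_time"][i])
for key in data:
    data[key] = [data[key][i] for i in order]

start_position = 0
selected_activity = [data["activity"][start_position]]
last_finish = data["finish_time"][start_position]

for i in range(1, len(data["activity"])):
    # Select an activity only if it starts at or after the last finish time.
    if data["start_time"][i] >= last_finish:
        selected_activity.append(data["activity"][i])
        last_finish = data["finish_time"][i]

print(selected_activity)
# ['Homework', 'Presentation', 'Volleyball practice', 'Biology lecture']
```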

Problem 2: Fractional Knapsack Problem

A knapsack has a maximum weight, and it can only accommodate a certain set of items. These items have a weight and a value.

The aim is to fill the knapsack with the items that have the highest total values and do not exceed the maximum weight capacity.

There are two elements to consider: the knapsack and the items. The knapsack has a maximum weight and carries some items with a high value.

Scenario: In a jewelry store, there are items made of gold, silver, and wood. The costliest item is gold, followed by silver and then wood. If a jewelry thief comes to the store, they take gold because they will make the most profit.

The thief has a bag (knapsack) that they can put those items in. But there is a limit to what the thief can carry because these items can get heavy. The idea is to pick the items that make the highest profit and fit in the bag (knapsack) without exceeding its maximum weight.

  • The first step is to find the value-to-weight ratio of all items to know what fraction each one occupies.
  • We then sort these ratios in descending order (from highest to lowest). This way we can pick the ratios with the highest number first, knowing that we will make a profit.
  • When we pick the highest ratio, we find the corresponding weight and add it to the knapsack. There are conditions to be checked.

Condition 1: If the item added has a lesser weight than the maximum weight of the knapsack, more items are added until the sum of all the items in the bag equals the maximum weight of the knapsack.

Condition 2: If the sum of the items’ weight in the bag is more than the maximum capacity of the knapsack, we find the fraction of the last item added. To find the fraction, we do the following:

  • We find the sum of the remaining weights of the items in the knapsack. They must be less than the maximum capacity.
  • We find the difference between the maximum capacity of the knapsack and the sum of the remaining weights of the items and divide by the weight of the last item to be added.

To add the weight of the last item to the knapsack, we multiply the fraction by the weight.

When we sum up the weights of all the items it will be equal to the knapsack's maximum weight.

Practical Example:

Let's say that the maximum capacity of the knapsack is 17, and there are three items available. The first item is gold, the second item is silver, and third item is wood.

  • The weight of gold is 10, the weight of silver is 6, and the weight of wood is 2.
  • The value (profit) of gold is 40, the value (profit) of silver is 30, and the value (profit) of wood is 6.
  • Ratio of gold = value/weight = 40/10 = 4
  • Ratio of silver = value/weight = 30/6 = 5
  • Ratio of wood = value/weight = 6/2 = 3
  • Arranging the ratios in descending order: 5, 4, 3.
  • The largest ratio is 5 and we match it to the corresponding weight “6”. It points to silver.
  • We put silver in the knapsack first and compare it to the maximum weight which is 17. 6 is less than 17 so we have to add another item. Going back to the ratios, the second largest is “4” and it corresponds to the weight of “10” which points to gold.
  • Now, we put gold in the knapsack, add the weight of the silver and gold, and compare it with the knapsack weight. (6 + 10 = 16). Checking it against the maximum weight, we see that it is less. So we can take another item. We go back to the list of ratios and take the 3rd largest which is “3” and it corresponds to “2” which points to wood.
  • When we add wood in the knapsack, the total weight is (6 +10+2 = 18) but that is greater than our maximum weight which is 17. We take out the wood from the knapsack and we are left with gold and silver. The total sum of the two is 16 and the maximum capacity is 17. So we need a weight of 1 to make it equal. Now we apply condition 2 discussed above to find the fraction of wood to fit in the knapsack.

Fraction of wood = (maximum capacity − weight already in the knapsack) / weight of wood = (17 − 16) / 2 = 1/2. So we add 1/2 × 2 = 1 unit of wood's weight, bringing the total weight to exactly 17, and 1/2 × 6 = 3 of wood's value to the total profit, for a final profit of 40 + 30 + 3 = 73.

Now the knapsack is filled.

The variable data stores the weights of each item and the profits. The variable maximum_capacity stores the maximum weight of the knapsack. selected_wt is initialized to 0 and will store the selected weights to be put in the knapsack. Lastly, max_profit is initialized to 0 and will store the values of the selected weights.

Then we calculate the ratio of the profit to the weight. Ratio = profit/weight:

Now that we have the ratio, we arrange the elements in descending order, from biggest to smallest. Then the items in weight and profit are arranged according to the positions of the sorted ratio.

After the weight and profit are sorted, we start choosing the items and checking the condition. We loop through the length of ratio to target the index of each item in the list. Note: all items in ratio are arranged from biggest to smallest, so the first item is the maximum value and the last item is the minimum value.

The first item we select has the highest ratio and sits at index 0. Once the first weight is selected, we check whether it is less than the maximum weight. If it is, we keep adding items until the total weight equals the capacity of the knapsack. The second item we select has the second-highest ratio and sits at index 1, and selection continues in that order.

For every selected weight, we add it to the selected_wt variable and its corresponding profit to the max_profit variable.

When the sum of the selected weights in the knapsack exceeds the maximum weight, we find the fraction of the weight of the last item added so that the total of the selected weights equals the maximum weight. We do this by dividing the difference between the maximum capacity and the sum of the already selected weights by the weight of the last item added.

The final profit made from the fraction carried is added to the max_profit variable. Then we return the max_profit as the final result.

Bringing it all together:
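
The original snippets are likewise not reproduced here; the following Python sketch follows the same steps, using the gold/silver/wood data from the worked example and the variable names mentioned in the description:

```python
data = {
    "weight": [10, 6, 2],    # gold, silver, wood
    "profit": [40, 30, 6],
}
maximum_capacity = 17

# Ratio of profit to weight for each item.
ratio = [p / w for p, w in zip(data["profit"], data["weight"])]

# Indices of the items, sorted by ratio in descending order.
order = sorted(range(len(ratio)), key=lambda i: ratio[i], reverse=True)

selected_wt = 0
max_profit = 0.0
for i in order:
    weight, profit = data["weight"][i], data["profit"][i]
    if selected_wt + weight <= maximum_capacity:
        # The whole item fits, so take all of it.
        selected_wt += weight
        max_profit += profit
    else:
        # Only a fraction of this item fits; take that fraction and stop.
        fraction = (maximum_capacity - selected_wt) / weight
        selected_wt += fraction * weight
        max_profit += fraction * profit
        break

print(max_profit)   # 73.0 for this data (40 + 30 + 3)
```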

Applications of Greedy Algorithms

There are various applications of greedy algorithms. Some of them are:

  • Minimum spanning tree: a spanning tree of a connected, undirected, weighted graph that has no cycles and the minimum possible total edge weight.
  • Dijkstra’s shortest path: a search algorithm that finds the shortest path between a vertex and the other vertices in a weighted graph.
  • Travelling salesman problem: finding the shortest route that visits each place exactly once and returns to the starting point.
  • Huffman coding: assigns shorter codes to frequently occurring symbols and longer codes to less frequent symbols. It is used to encode data efficiently.

Advantages of Using a Greedy Algorithm

Greedy algorithms are quite straightforward to implement and easy to understand. They are also very efficient, often with a time complexity of O(N log N).

They're useful in solving optimization problems, returning a maximum or minimum value.

Disadvantages/Limitations of Using a Greedy Algorithm

Even though greedy algorithms are straightforward and helpful in optimization problems, they don't offer the best solutions at all times.

Also, greedy algorithms only run once, so they never revisit or verify the choices they have already made.

Greedy algorithms are a straightforward approach to solving optimization problems, returning a minimum or maximum value.

This article explained some examples of greedy algorithms and the approach to tackling each problem. By understanding how greedy algorithm problems work, you can better understand dynamic programming. If you have any questions, feel free to reach out to me on Twitter 💙.




6 Examples of Algorithms In Everyday Life

Algorithms are all around us in the real world, from traffic lights to recipes. Here's how they work.

We teach our kids from a very early age how to do everyday tasks. We show them how to tie their shoes, walk up the stairs, snap their fingers, whistle, and even how to set the table using a step-by-step procedure. 

From following a recipe to driving a car, everything we do has a process to follow. As kids grow and face more complex problems to solve, they need a strong foundation in algorithmic design and computational thinking to accomplish their goals.

Algorithms and computational thinking are so important for young learners because they teach kids how to solve problems and develop step-by-step processes, both at school and outside the classroom. 

We asked two STEM education experts, Laurie Guyon and David Czechowski, how they best explain real-world algorithm examples to their students and compiled them here. 

What is an algorithm?

An algorithm is a set of rules or instructions used to solve complex problems. In many STEM fields, algorithms are used by computer programs to streamline processes. However, algorithms aren’t limited to STEM; they’re found everywhere.

Why Algorithms are Important in Early Education

Algorithms are in everything we do. They’re a crucial part of computational thinking and problem-solving in many areas of life, as we use algorithms to accurately and efficiently execute tasks. 

Laurie Guyon, Sphero Hero and Coordinator for Model Schools in New York, shared, “As a former sixth-grade English teacher, I used algorithmic design in essay writing to talk about sequencing, storyboarding, plot diagrams, and following steps to write a quality essay. We even called the editing process ‘debugging,’ because I found students were more likely to edit their papers if they saw the purpose of making sure their writing made sense to the reader. If we discussed their sentences as lines of code that a human had to read, they understood the importance of the editing process.”

The same can be said for just about anything we do. If you have ever watched a cooking show, the chef walks you through the step-by-step process of creating a delicious meal. Someone who investigates crime has a procedure to discern what happened in an incident and who could be at fault. Algorithms are in everything we do, so it’s important that young learners understand how to recognize and utilize them, Guyon adds.

6 Examples of Real-World Algorithms

Whether algorithms are used in places that aren’t at all surprising, like Google, or in a manual activity that is more unexpected, like brushing your teeth, algorithms play a role in the human experience every single day, Guyon goes on to explain.

1. Sorting Papers


Imagine a teacher sorting their students’ papers according to the alphabetical order of their first names. This type of task is similar to the function of a sorting algorithm, like a bucket sort . By looking at only the first letter of the first name, you can remove a lot of unnecessary information. This is an automated process that makes sorting more efficient.

2. Facial Recognition

Every day we see someone we know: a loved one, a coworker, or even an eccentric neighbor. When we recognize somebody’s face, we’re drawing upon data we’ve previously collected on the size and position of that person’s facial features. That information is then analyzed internally to automatically recognize others.


Algorithms can automate this process for computers; however, facial recognition is not perfect. In the Netflix series “ Coded Bias ,” Joy Buolamwini, an activist and computer scientist based at MIT, discusses how algorithms used for facial recognition can be biased. This investigative series finds that facial recognition algorithms often do not recognize dark-skinned faces accurately, uncovering the need for additional work when creating algorithms based on human design.

3. Google Search

Even an action as seemingly simple as a Google search is only possible with the help of algorithms. Say, for example, you want to know if an elephant can swim. How you phrase the question to Google is the input you are asking the computer to determine. 


Google doesn’t even need all the words of the question “can an elephant swim?” For example, try searching for “swimming elephant” and see what you get. You will find that immediately, the output or the results show videos of elephants swimming, followed by more on the subject. Google uses an algorithm to generate these answers without needing the entirety of the question.

4. Duplicating Outcomes  


David Czechowski, Computer Science and Technology Teacher at Hyde Park Central Schools , explains this example. “If we want to do well at a given task, it can be extremely helpful to look at previous successful examples from other people. A great daily example of this is using a recipe while cooking. Sure, you might be able to figure out how to make delicious pasta on your own through trial and error, but following a step-by-step recipe from a well-known chef helps ensure success.”

5. Traffic Lights

Czechowski adds, “Here’s an algorithm we frequently experience; the next time you're in your car stuck at a red light, consider the algorithm the traffic light is executing.”


Most traffic lights don’t automatically cycle through green, yellow, and red. Rather, there are sensory inputs that determine the signals’ timing based on the flow of traffic. The algorithm is a well-constructed, step-by-step order that directs the traffic appropriately (although it may not feel like it when you’re sitting at a red light). 

6. Bus Schedules

Every weekday morning, thousands of buses criss-cross neighborhoods picking up students. Mapping out efficient bus routes is an overwhelming manual task to execute without an algorithm to automate the calculations and schedule the right students for the right addresses at the right time. This routing issue is classically referred to as “ The Traveling Salesman Problem " and is even used as an exercise for theoretical computer science, according to Czechowski.


Algorithms are all around us, so close and common that we don’t even recognize them as algorithms. From cooking to looking up directions to something simple like tying your shoes, finding algorithms in your day-to-day life may not be as hard as you think.

How Administrators Can Build Confidence with Algorithms

Algorithmic thinking should be a crucial facet in modern education programs, but many schools overlook this topic or are unsure how to approach it. Administrators can take steps to ensure that algorithmic thinking and design are present in their schools.

Administrator’s Corner: Algorithmic thinking can also improve the onboarding or upskilling of teachers. 

When designing professional learning opportunities for current or future teachers, I like to follow an outline or an algorithmic process. I start with a question or activity to get participants engaged, build community, and think about the topic. From there, I like to do something to get them excited about the subject. We dive into the meat of the matter, with lots of time for questions and reflection. This algorithm allows participants to be actively engaged in the learning process and keeps a natural flow for success.

Show Real-World Examples of Algorithms with Sphero

Algorithm design doesn’t have to be complex or daunting. It already plays an important role in our daily lives, even in the simplest tasks we do like Googling questions or organizing papers. Algorithmic thinking is an integral part of computational thinking and is now a necessary life skill.

Young learners can develop algorithmic thinking and design skills both in and out of the classroom with the help of Sphero’s programmable robots and STEM kits .


Written by:

Sphero Team


Algorithm Interview Questions

What is an Algorithm?

Algorithms and Data Structures are a crucial component of any technical coding interview . It does not matter if you are a C++ programmer, a Java programmer, or a Web developer using JavaScript, Angular, React, JQuery, or any other programming language. 


A programmer should have a thorough understanding of both basic data structures, such as arrays, linked lists, trees, hash tables, stacks, and queues, and conventional algorithms, such as Binary Search, Dynamic Programming, and so on. Therefore, this article focuses mostly on algorithms: an introduction to algorithms, a collection of algorithm interview questions asked in the coding interviews of various companies, and a few algorithm MCQs that everyone should practice in order to build a better understanding of algorithms.

Now, the very first question popping into your head must be "What is an algorithm?" Well, the answer to this question is: An algorithm is a finite sequence of well-defined instructions used to solve a class of problems or conduct a computation in mathematics and computer science.

Algorithms are used to specify how calculations, data processing, automated reasoning, automated decision making, and other tasks should be done. An algorithm is a method for calculating a function that can be represented in a finite amount of space and time and in a well defined formal language. The instructions describe a computation that, when run, continues through a finite number of well defined subsequent stages, finally creating "output" and terminating at a final ending state, starting from an initial state and initial input (possibly empty). The shift from one state to the next is not always predictable; some algorithms, known as randomised algorithms, take random input into account.

The Need For Algorithms (Advantages of Algorithms):

Before diving deep into algorithm interview questions, let us first understand the need for Algorithms in today's world. The following are some of the benefits of using algorithms in real-world problems.

  • Algorithms boost the effectiveness of an existing method.
  • It is easy to compare an algorithm's performance to those of other approaches using various methods (Time Complexity, Space Complexity, etc.).
  • Algorithms provide the designers with a detailed description of the criteria and goals of the problems.
  • They also enable a reasonable comprehension of the program's flow.
  • Algorithms evaluate how well the approaches work in various scenarios (Best cases, worst cases, average cases).
  • An algorithm also determines which resources (input/output, memory cycles) are necessary.
  • We can quantify and assess the problem's complexity in terms of time and space using an algorithm.
  • The cost of design is also reduced if proper algorithms are used.


Algorithm Interview Questions for Freshers

1. How can we compare two algorithms written for the same problem?

Complexity analysis is a technique used to categorise how efficient an algorithm is in comparison to other algorithms. It focuses on how the execution time grows with the size of the data set to be processed. The computational complexity of an algorithm is critical in computing. It is a good idea to categorise algorithms according to how much time or space they take up and to describe how much time or space they take up as a function of the input size.

  • Complexity of Time: The running time of a program as a function of the size of the input is known as time complexity.
  • Complexity of Space: Space complexity examines algorithms based on how much space they require to fulfil their tasks. In the early days of computers, space complexity analysis was crucial (when storage space on the computer was limited).
Note: Nowadays, a lack of space is rarely an issue because computer storage is plentiful. Therefore, it is mostly the Time Complexity that is given more importance while evaluating an Algorithm.

2. What do you understand about the DFS (Depth First Search) algorithm?

Depth First Search or DFS is a technique for traversing or exploring data structures such as trees and graphs. The algorithm starts at the root node (in the case of a graph, any random node can be used as the root node) and examines each branch as far as feasible before retracing. So the basic idea is to start at the root or any arbitrary node and mark it, then advance to the next unmarked node and repeat until there are no more unmarked nodes. After that, go back and check for any more unmarked nodes to cross. Finally, print the path's nodes. The DFS algorithm is given below:

  • Step 1: Create a recursive function that takes the node's index and a visited array as input.
  • Step 2: Make the current node a visited node and print it.
  • Step 3: Call the recursive function with the index of the adjacent node after traversing all nearby and unmarked nodes.
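
A minimal Python sketch of these steps, assuming the graph is given as an adjacency list:

```python
def dfs(graph, node, visited=None):
    """Recursive depth-first traversal; prints each node as it is visited."""
    if visited is None:
        visited = set()
    visited.add(node)                      # Step 2: mark the current node and print it
    print(node)
    for neighbour in graph[node]:
        if neighbour not in visited:       # Step 3: recurse into unmarked neighbours
            dfs(graph, neighbour, visited)

graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
dfs(graph, 0)   # visits 0, 1, 3, 2
```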

3. What do you understand about the BFS (Breadth First Search) algorithm?

BFS or Breadth-First Search is a graph traversal technique. It begins by traversing the graph from the root node and explores all of the nodes in the immediate vicinity. It chooses the closest node and then visits all of the nodes that have yet to be visited. Until it reaches the objective node, the algorithm repeats the same method for each of the closest nodes. 

The BFS Algorithm is given below:

  • Step 1: Set status = 1 for all the nodes (ready state).
  • Step 2: Set the status of the initial node A to 2, that is, waiting state.
  • Step 3: Repeat steps 4 and 5 until the queue is not empty.
  • Step 4: Dequeue and process node N from the queue, setting its status to 3, that is, the processed state.
  • Step 5: Put all of N's neighbours in the ready state (status = 1) in the queue and set their status to 2 (waiting state)
  • Step 6: Exit.
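
A minimal Python sketch of the same idea, using a queue in place of the explicit status codes:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal; returns nodes in the order they are visited."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()                 # dequeue and process (status 3 above)
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:       # enqueue ready neighbours (status 2)
                visited.add(neighbour)
                queue.append(neighbour)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']
```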

4. Write down a string reversal algorithm. If the given string is "kitiR," for example, the output should be "Ritik".

An algorithm for string reversal is as follows:

  • Step 1: Start.
  • Step 2: We take two variables l and r.
  • Step 3: We set the values of l as 0 and r as (length of the string  - 1).
  • Step 4: We interchange the values of the characters at positions l and r in the string.
  • Step 5: We increment the value of l by one.
  • Step 6: We decrement the value of r by one.
  • Step 7: If the value of r is greater than the value of l, we go to step 4
  • Step 8: Stop.
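
A direct Python translation of the two-pointer steps above (Python strings are immutable, so the characters are swapped in a list first):

```python
def reverse_string(s):
    chars = list(s)
    l, r = 0, len(chars) - 1                      # Steps 2-3: pointers at both ends
    while r > l:                                  # Step 7: continue while r is past l
        chars[l], chars[r] = chars[r], chars[l]   # Step 4: swap the two characters
        l += 1                                    # Step 5
        r -= 1                                    # Step 6
    return "".join(chars)

print(reverse_string("kitiR"))   # Ritik
```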

5. What do you understand about the Dynamic Programming (DP) Algorithmic Paradigm? List a few problems which can be solved using the same.

Dynamic Programming is primarily a recursion optimization. We can use Dynamic Programming to optimise any recursive solution that involves repeated calls for the same inputs. The goal is to simply save the results of subproblems so that we do not have to recalculate them later. The time complexity of this simple optimization is reduced from exponential to polynomial. For example, if we create a simple recursive solution for Fibonacci Numbers, the time complexity is exponential, but if we optimise it by storing subproblem answers using Dynamic Programming, the time complexity is linear. 


The following codes illustrate the same:

With Recursion (no DP): The time complexity of the given code will be exponential.
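
The code snippets referred to here are not reproduced in this copy of the article; a plain recursive version would look roughly like this in Python:

```python
def fib(n):
    # Recomputes the same subproblems again and again: O(2^n) time.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```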

With DP: The time complexity of the given code will be linear because of Dynamic Programming.
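
And a memoized (top-down Dynamic Programming) sketch of the same function:

```python
def fib_dp(n, memo=None):
    # Each subproblem is solved once and cached: O(n) time.
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_dp(n - 1, memo) + fib_dp(n - 2, memo)
    return memo[n]

print(fib_dp(50))   # 12586269025
```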

A few problems which can be solved using the Dynamic Programming (DP) Algorithmic Paradigm are as follows:

  • Finding the nth Fibonacci number
  • Finding the Longest Common Subsequence between two strings.
  • Finding the Longest Palindromic Substring in a string.
  • The discrete (or 0-1) Knapsack Problem.
  • Shortest Path between any two nodes in a graph (Floyd Warshall Algorithm)

6. Write an algorithm for counting the number of leaf nodes in a binary tree.

An algorithm for counting the number of leaf nodes in a binary tree is given below:

  • Step 1: If the current node is null, return a value 0.
  • Step 2: If a leaf node is encountered, that is, if the current node's left and right nodes are both null, then return 1.
  • Step 3: Calculate the number of leaf nodes recursively by adding the number of leaf nodes in the left subtree by the number of leaf nodes in the right subtree.
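
A minimal Python sketch of this recursion, assuming each node exposes left and right children:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def count_leaves(node):
    if node is None:                                # Step 1: empty subtree
        return 0
    if node.left is None and node.right is None:    # Step 2: a leaf node
        return 1
    return count_leaves(node.left) + count_leaves(node.right)   # Step 3

root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(count_leaves(root))   # 3 (nodes 4, 5 and 3)
```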

7. Write down an algorithm for adding a node to a linked list sorted in ascending order (maintaining the sorting property).

An algorithm for adding a node to a linked list sorted in ascending order (maintaining the sorting property) is given below:

  • Step 1: Check if the linked list has no value (or is empty). If yes, then set the new node as the head and return it.
  • Step 2: Check if the value of the node to be inserted is smaller than the value of the head node. If yes, place it at the beginning and make it the head node.
  • Step 3: Find the suitable node after which the input node should be added in a loop. To discover the required node, begin at the head and work your way forward until you reach a node whose value exceeds the input node. The preceding node is the correct node.
  • Step 4: After the correct node is found in step 3, insert the node.
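
A Python sketch of these steps for a singly linked list:

```python
class ListNode:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def sorted_insert(head, value):
    new_node = ListNode(value)
    if head is None:                       # Step 1: empty list -> new node is the head
        return new_node
    if value < head.value:                 # Step 2: new smallest value -> new head
        new_node.next = head
        return new_node
    current = head                         # Step 3: find the node to insert after
    while current.next is not None and current.next.value < value:
        current = current.next
    new_node.next = current.next           # Step 4: splice the new node in
    current.next = new_node
    return head

head = None
for v in [5, 1, 3]:
    head = sorted_insert(head, v)
# the list is now 1 -> 3 -> 5
```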

8. Describe the Binary Search Algorithm.

To apply binary search on a list of elements, the prerequisite is that the list of elements should be sorted. It is based on the Divide and Conquer algorithmic paradigm. In the Binary Search Algorithm, we repeatedly divide the search interval in half to search the sorted list. We begin with an interval that spans the entire list. If the search key's value is less than the item at the interval's midpoint, we narrow the interval to the lower half. Otherwise, we narrow it to the upper half. We keep checking until the value is found or the interval is empty. Given below is an algorithm describing Binary Search (let us assume that the element to be searched is x and the array of elements is sorted in ascending order):

  • Step 1: x should be firstly compared to the middle element.
  • Step 2: We return the middle element's index if x matches the middle element.
  • Step 3: Else If x is greater than the middle element, x can only be found after the middle element in the right half subarray since the array is sorted in the ascending order. As a result, we repeat the process for the right half.
  • Step 4: Otherwise, we repeat for the left half (x is smaller).
  • Step 5: If the interval is empty, we terminate the binary search.

The time complexity of the Binary Search Algorithm is O(log(n)) where n is the size of the list of elements and its space complexity is constant, that is, O(1). 
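
An iterative Python sketch of the steps above:

```python
def binary_search(arr, x):
    """Return the index of x in the sorted list arr, or -1 if it is absent."""
    low, high = 0, len(arr) - 1
    while low <= high:                 # Step 5: stop when the interval is empty
        mid = (low + high) // 2
        if arr[mid] == x:              # Step 2: found it
            return mid
        elif arr[mid] < x:             # Step 3: search the right half
            low = mid + 1
        else:                          # Step 4: search the left half
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 16, 23], 16))   # 4
```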

9. Describe the Linear Search Algorithm.

To find an element in a group of elements, the linear search can be used. It works by traversing the list of elements from the beginning to the end and inspecting the properties of all the elements encountered along the way. Let us consider the case of an array containing some integer elements. We want to find out and print all of the elements' positions that match a particular value (also known as the "key" for the linear search). The linear search works in a flow here, matching each element with the number from the beginning to the end of the list, and then printing the element's location if the element at that position is equal to the key. 

Given below is an algorithm describing Linear Search:

  • Step 1: Using a loop, traverse the list of elements given.
  • Step 2: In each iteration, compare the target value (or key-value) to the list's current value.
  • Step 3: If the values match, print the array's current index.
  • Step 4: Move on to the next array element if the values do not match.
  • Step 5: Repeat Steps 1 to 4 till the end of the list of elements is reached.


The time complexity of the Linear Search Algorithm is O(n) where n is the size of the list of elements and its space complexity is constant, that is, O(1). 
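
The same steps in Python, collecting every index whose element matches the key:

```python
def linear_search(items, key):
    positions = []
    for index, value in enumerate(items):   # Step 1: traverse the list
        if value == key:                    # Steps 2-3: compare and record matches
            positions.append(index)
    return positions                        # empty list if the key never appears

print(linear_search([4, 7, 1, 7, 9], 7))    # [1, 3]
```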

10. What do you understand by a searching algorithm? List a few types of searching algorithms.

Searching Algorithms are used to look for an element or get it from a data structure (usually a list of elements). These algorithms are divided into two categories based on the type of search operation:

  • Sequential Search: This method traverses the list of elements consecutively, checking each element and reporting if the element to be searched is found. Linear Search is an example of a Sequential Search Algorithm.
  • Interval Search: These algorithms were created specifically for searching sorted data structures. Because they continually target the centre of the search structure and divide the search space in half, these types of search algorithms are far more efficient than Sequential Search algorithms. Binary Search is an example of an Interval Search Algorithm.

11. What do you understand about greedy algorithms? List a few examples of greedy algorithms.

A greedy algorithm is an algorithmic method that aims to choose the best optimal decision at each sub-step, eventually leading to a globally optimal solution. This means that the algorithm chooses the best answer available at the time, regardless of the consequences. In other words, when looking for an answer, an algorithm always selects the best immediate, or local, option. Greedy algorithms may identify less than perfect answers for some cases of other problems while finding the overall, ideal solution for some idealistic problems.

The Greedy algorithm is used in the following algorithms to find their solutions:

  • Prim's Minimal Spanning Tree Algorithm
  • Kruskal's Minimal Spanning Tree Algorithm
  • Travelling Salesman Problem
  • Fractional Knapsack Problem
  • Dijkstra's Algorithm
  • Job Scheduling Problem
  • Graph Map Coloring
  • Graph Vertex Cover.

12. Explain the Divide and Conquer Algorithmic Paradigm. Also list a few algorithms which use this paradigm.

Divide and Conquer is an algorithm paradigm, not an algorithm itself. It is set up in such a way that it can handle a large amount of data, break it down into smaller chunks, and determine the solution to the problem for each of the smaller chunks. It then combines all of the piecewise solutions of the smaller chunks to form a single global solution. This is known as the divide and conquer technique. The Divide and Conquer algorithmic paradigm employs the steps given below:

  • Divide: The algorithm separates the original problem into a set of subproblems in this step.
  • Conquer: The algorithm solves each subproblem individually in this step.
  • Combine: In this step, the algorithm combines the solutions to the subproblems to obtain the overall solution.


Some of the algorithms which use the Divide and Conquer Algorithmic paradigm are as follows:

  • Binary Search
  • Strassen's Matrix Multiplication
  • Closest pair of points.

13. Write an algorithm to swap two given numbers in Java without using a temporary variable.

It is a trick question that is frequently asked in the interviews of various companies. This problem can be solved in a variety of ways. However, while solving the problem, we must solve it without using a temporary variable, which is an essential condition. For this problem, if we can consider the possibility of integer overflow in our solution while coming up with an approach to solving it, we can make a great impression on interviewers.

Let us say that we have two integers a and b, with a's value equal to 5 and b's value equal to 6, and we want to swap them without needing a third variable. We will need to use Java programming constructs to solve this problem. Mathematical procedures such as addition, subtraction, multiplication, and division can be used to swap numbers. However, it is possible that it will cause an integer overflow problem. 

Let us take a look at two approaches to solve this problem:

Using Addition and subtraction:

It is a clever trick. However, if the addition exceeds the maximum value of the int primitive type as defined by Integer.MAX_VALUE in Java, or if the subtraction is less than the minimum value of the int primitive type as defined by Integer.MIN_VALUE in Java, there will be an integer overflow.

Using the XOR method:

Another way to swap two integers without needing a third variable (temporary variable) is using the XOR method. This is often regarded as the best approach because it works in languages that do not handle integer overflows, such as Java, C, and C++. Java has a number of bitwise operators. XOR (denoted by ^) is one of them.
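
The question asks for Java, but both techniques are language-independent; here they are sketched in Python for consistency with the other examples in this article (note that Python integers never overflow, so the Integer.MAX_VALUE caveat applies only to fixed-width types such as Java's int):

```python
a, b = 5, 6

# Approach 1: addition and subtraction (risks overflow with fixed-width ints).
a = a + b      # a = 11
b = a - b      # b = 5  (the original a)
a = a - b      # a = 6  (the original b)

# Approach 2: XOR (no overflow, operates on the bit patterns directly).
a = a ^ b
b = a ^ b      # b now holds what a held before this approach ran
a = a ^ b      # a now holds what b held before this approach ran

print(a, b)    # 5 6 - two swaps in a row restore the original order
```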

14. What do you understand by the Asymptotic Notations?

Asymptotic analysis is a technique for determining the efficiency of an algorithm that does not rely on machine-specific constants and does not require the algorithms to be implemented and timed against one another. Asymptotic notation is the mathematical notation used in asymptotic analysis to indicate the time complexity of algorithms.

The following are the three most common asymptotic notations.

  • Big Theta Notation: (θ Notation) The exact asymptotic behaviour is defined using the theta (θ) Notation. It binds functions from above and below to define behaviour. Dropping low order terms and ignoring leading constants is a convenient approach to get Theta notation for an expression.


  • Big O Notation: The Big O notation defines an upper bound for an algorithm by bounding a function from above. Consider the case of insertion sort: in the best case it takes linear time, and in the worst case it takes quadratic time, so its worst-case time complexity is O(n^2). Big O notation is useful when we only have an upper bound on an algorithm's time complexity.


  • Big Omega (Ω) Notation: The Ω Notation provides an asymptotic lower bound on a function, just like Big O notation does. It is useful when we have a lower bound on an algorithm's time complexity.


15. What do you understand by the best case, worst case and average case scenario of an algorithm?

The mathematical foundation/framing of an algorithm's run time performance is defined by asymptotic analysis. We can easily determine the best case, average case, and worst-case scenarios of an algorithm using asymptotic analysis.

  • Best Case Scenario of an Algorithm: The best-case scenario for an algorithm is defined as the data arrangement in which the algorithm performs the best. Take a binary search, for example, where the best-case scenario is if the target value is in the very centre of the data we are looking for. The best-case scenario for binary search would have a time complexity of O(1) or constant time complexity.
  • Worst Case Scenario of an Algorithm: The worst collection of input for a given algorithm is referred to as the worst-case scenario of an Algorithm. For example, quicksort can perform poorly if the pivot value is set to the largest or smallest element of a sublist. Quicksort will degenerate into an algorithm with a time complexity of O(n^2), where n is the size of the list to be sorted.
  • Average Case Scenario of an Algorithm: The average-case complexity of an algorithm is the amount of some computational resource (usually time) used by the process, averaged over all possible inputs, according to computational complexity theory. For example, the average-case complexity of the randomised quicksort algorithm is O(n*log(n)), where n is the size of the list to be sorted.

Algorithm Interview Questions for Experienced

1. How do encryption algorithms work?

The process of transforming plaintext into a secret code format known as "ciphertext" is called encryption. For calculations, this technique uses a string of bits known as "keys" to transform the text. The larger the key, the greater the number of potential patterns for producing ciphertext. Most encryption algorithms use fixed blocks of input with lengths ranging from 64 to 128 bits, while others use the stream technique.

2. What is the space complexity of the selection sort algorithm?

Selection sort is an in-place sorting method, which means it requires no more than a constant amount of additional storage. Therefore, the selection sort algorithm has constant, or O(1), space complexity.

In conclusion, algorithm interviews are usually among the most crucial and challenging interviews in the recruitment process of many software companies, and a sound understanding of algorithms usually signals that a candidate is a strong logical thinker who can think outside the box. Algorithm interview questions can be solved comfortably by anyone with a sound understanding of algorithms who has worked through plenty of algorithm examples and algorithm MCQs (which we cover in the next section of this article). We therefore encourage today's budding coders to develop a strong grasp of the various algorithms discovered to date so that they can ace their next technical interviews.

Useful Resources:

  • Data Structures and Algorithms
  • Data Structures Interview Questions
  • Best Data Structures and Algorithms Books
  • Best Courses for Data Structures and Algorithms
  • Data Structure MCQ With Answers
  • Technical Interview Questions
  • Coding Interview Questions

3. What is the space complexity of the insertion sort algorithm?

Insertion sort is an in-place sorting method, which means it requires no more than a constant amount of additional storage. In insertion sort, only a single list element needs to be held outside the original data at any time, resulting in constant, or O(1), space complexity.

4. Describe the heap sort algorithm.

Heap sort is a comparison-based sorting algorithm. Heapsort is similar to selection sort in that it separates its input into a sorted and an unsorted region, then successively decreases the unsorted part by taking the largest element from it and putting it into the sorted region. Unlike selection sort, heapsort does not waste time scanning the unsorted region in linear time; instead, heap sort keeps the unsorted region in a heap data structure to identify the largest element in each step more rapidly. Let us take a look at the heap sort algorithm:

The Heapsort algorithm starts by converting the list to a max heap. The algorithm then swaps the first and last values in the list, reduces the range of values considered in the heap operation by one, and sifts the new first value down into its correct place in the heap. This process is repeated until the range of values considered is only one value long.

  • Call the buildMaxHeap() function on the list. This function, also known as heapify(), builds a heap from the list in O(n) operations.
  • Swap the first and last elements of the list, and reduce the considered range of the list by one.
  • Call the siftDown() function on the list to sift the new first element down to its appropriate index in the heap.
  • Go back to step 2 unless the considered range of the list is a single element.
Note: The buildMaxHeap() operation runs only one time with a linear time complexity or O(n) time complexity. The siftDown() function works in O(log n) time complexity, and is called n times. Therefore, the overall time complexity of the heap sort algorithm is O(n + n log (n)) = O(n log n).
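
A minimal in-place Java sketch following the buildMaxHeap()/siftDown() steps above (our own illustration; the method names simply mirror the steps, they are not from any particular library):

```java
import java.util.Arrays;

public class HeapSortDemo {
    static void heapSort(int[] a) {
        int n = a.length;
        // buildMaxHeap: sift down every internal node, O(n) overall.
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);
        // Repeatedly move the current maximum to the end and shrink the heap.
        for (int end = n - 1; end > 0; end--) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp; // swap first and last
            siftDown(a, 0, end);                          // restore the heap on a[0..end)
        }
    }

    // Sift a[i] down within the heap a[0..size).
    static void siftDown(int[] a, int i, int size) {
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, largest = i;
            if (left < size && a[left] > a[largest]) largest = left;
            if (right < size && a[right] > a[largest]) largest = right;
            if (largest == i) return;
            int tmp = a[i]; a[i] = a[largest]; a[largest] = tmp;
            i = largest;
        }
    }

    public static void main(String[] args) {
        int[] data = {50, 10, 40, 20, 80};
        heapSort(data);
        System.out.println(Arrays.toString(data)); // [10, 20, 40, 50, 80]
    }
}
```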

5. Define tree traversal and list some of the algorithms to traverse a binary tree.

The process of visiting all the nodes of a tree is known as tree traversal.

Some of the algorithms to traverse a binary tree are as follows (the first three are sketched in the code below):

  • Pre-order traversal
  • In-order traversal
  • Post-order traversal
  • Breadth-first search (level-order traversal)
  • ZigZag traversal
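
Here is a small Java sketch of the three depth-first traversals on a toy tree; the Node class and the values used are our own illustration:

```java
public class TreeTraversalDemo {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Pre-order: node, then left subtree, then right subtree.
    static void preOrder(Node n) {
        if (n == null) return;
        System.out.print(n.value + " ");
        preOrder(n.left);
        preOrder(n.right);
    }

    // In-order: left subtree, then node, then right subtree.
    static void inOrder(Node n) {
        if (n == null) return;
        inOrder(n.left);
        System.out.print(n.value + " ");
        inOrder(n.right);
    }

    // Post-order: left subtree, then right subtree, then node.
    static void postOrder(Node n) {
        if (n == null) return;
        postOrder(n.left);
        postOrder(n.right);
        System.out.print(n.value + " ");
    }

    public static void main(String[] args) {
        Node root = new Node(2);
        root.left = new Node(1);
        root.right = new Node(3);
        preOrder(root);  System.out.println(); // 2 1 3
        inOrder(root);   System.out.println(); // 1 2 3
        postOrder(root); System.out.println(); // 1 3 2
    }
}
```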

6. Define insertion sort and selection sort.

  • Insertion sort: Insertion sort separates the list into sorted and unsorted sub-lists. It inserts one element at a time into the proper spot in the sorted sub-list. After insertion, the output is a sorted sub-list. It iteratively works on all the elements of an unsorted sub-list and inserts them into a sorted sub-list in order.
  • Selection sort: Selection sort is an in-place sorting technique. It separates the data collection into sorted and unsorted sub-lists. The minimum element from the unsorted sub-list is then selected and placed in the sorted list. This loops until all of the elements in the unsorted sub-list have been consumed by the sorted sub-list.
Note: Both sorting strategies keep two sub-lists, sorted and unsorted, and place one element at a time into the sorted sub-list. Insertion sort takes the current element and inserts it at the right position in the sorted sub-list, shifting larger elements as needed. Selection sort, on the other hand, searches for the smallest element in the unsorted sub-list and swaps it into place at the end of the sorted sub-list. Both are sketched in the code below.
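
The following Java sketch of both strategies is our own illustration, not taken from the original article:

```java
import java.util.Arrays;

public class SimpleSorts {
    // Insertion sort: grow a sorted prefix, inserting one element at a time.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {   // shift larger elements one position right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                  // insert the element into its proper spot
        }
    }

    // Selection sort: repeatedly select the minimum of the unsorted suffix.
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;
            }
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;  // swap the minimum into the sorted prefix
        }
    }

    public static void main(String[] args) {
        int[] x = {50, 10, 40, 20, 80};
        int[] y = x.clone();
        insertionSort(x);
        selectionSort(y);
        System.out.println(Arrays.toString(x)); // [10, 20, 40, 50, 80]
        System.out.println(Arrays.toString(y)); // [10, 20, 40, 50, 80]
    }
}
```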

7. Devise an algorithm to insert a node in a Binary Search Tree.

An algorithm to insert a node in a Binary Search Tree is given below:

  • Start at the root node and make it the current node (see the sketch after this list).
  • If the value to be inserted is less than the current node's value, move to the left child; if the current node has no left child, insert the new node there.
  • If the value to be inserted is greater than or equal to the current node's value, move to the right child; if the current node has no right child, insert the new node there.
  • Repeat the comparison with the child as the new current node until the new node has been inserted.
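
A compact recursive Java sketch of the same idea (our own illustration; duplicates are sent to the right subtree here, which is one common convention):

```java
public class BstInsertDemo {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Returns the (possibly new) root after inserting value.
    static Node insert(Node root, int value) {
        if (root == null) return new Node(value);        // empty spot found: insert here
        if (value < root.value) root.left = insert(root.left, value);
        else root.right = insert(root.right, value);      // ties go to the right subtree
        return root;
    }

    public static void main(String[] args) {
        Node root = null;
        for (int v : new int[]{8, 3, 10, 1, 6}) root = insert(root, v);
        System.out.println(root.value + " " + root.left.value + " " + root.right.value); // 8 3 10
    }
}
```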

8. What are recursive algorithms? State the important rules which every recursive algorithm must follow.

A recursive algorithm is a way of tackling a difficult problem by breaking it down into smaller and smaller subproblems until a subproblem is small enough to be solved directly. It usually involves a function that calls itself (the defining property of recursive functions).

The three laws which must be followed by all recursive algorithms are as follows (the factorial sketch after the list marks each one):

  • There should be a base case.
  • It is necessary for a recursive algorithm to call itself.
  • The state of a recursive algorithm must be changed in order for it to return to the base case.
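
A factorial function is the classic illustration; in the following Java sketch (our own example) the comments mark where each of the three laws shows up:

```java
public class FactorialDemo {
    // Recursive factorial illustrating the three laws of recursion.
    static long factorial(int n) {
        if (n <= 1) return 1;          // 1) base case
        return n * factorial(n - 1);   // 2) calls itself, 3) with changed state (n - 1) moving toward the base case
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```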

9. Can we use the binary search algorithm for linked lists? Justify your answer.

No, we cannot use the binary search algorithm on linked lists. Explanation: linked lists do not allow random access, so the middle element cannot be reached in constant, or O(1), time. As a result, binary search is not practical on a linked list.

10. Explain the Dijkstra's Algorithm to find the shortest path between a given node in a graph to any other node in the graph.

Dijkstra's algorithm is a method for determining the shortest pathways between nodes in a graph, which might be used to depict road networks. Edsger W. Dijkstra, a computer scientist, conceived it in 1956 and published it three years later. There are numerous variations of the algorithm. The original Dijkstra algorithm discovered the shortest path between two nodes, but a more frequent form fixes a single node as the "source" node and finds the shortest pathways from the source to all other nodes in the network, resulting in a shortest-path tree. Let us take a look at Dijkstra's Algorithm to find the shortest path between a given node in a graph to any other node in the graph:

Let us call the node where we start the process the initial node, and let the distance of a node Y be the distance from the initial node to Y. Dijkstra's algorithm begins with infinite tentative distances and tries to improve them step by step.

  • Step 1: Mark all nodes as unvisited. The unvisited set is the collection of all nodes that have not been visited yet.
  • Step 2: Assign a tentative distance value to every node: zero for the initial node and infinity for all others. The tentative distance of a node v is the length of the shortest path discovered so far between v and the initial node. Since at the start no path is known to any vertex other than the source itself (which is a path of length zero), all other tentative distances are set to infinity. Set the initial node as the current node.
  • Step 3: Consider all of the current node's unvisited neighbours and compute their tentative distances through the current node. Compare each newly calculated tentative distance with the currently assigned value and keep the smaller one. For example, if the current node A has a distance of 5 and the edge linking it to a neighbour B has length 3, the distance to B through A is 5 + 3 = 8. If B was previously marked with a distance greater than 8, change it to 8; otherwise, keep the current value.
  • Step 4: Once all of the current node's unvisited neighbours have been considered, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again.
  • Step 5: Stop if the destination node has been marked visited (when planning a route between two specific nodes) or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal; this occurs when there is no connection between the initial node and the remaining unvisited nodes). The algorithm has finished.
  • Step 6: Otherwise, select the unvisited node with the smallest tentative distance as the new current node and return to step 3.


It is not required to wait until the target node is "visited" as described above when constructing a route: the algorithm can stop once the destination node has the smallest tentative distance among all "unvisited" nodes (and could therefore be selected as the next "current" node). For arbitrary directed graphs with unbounded non-negative weights, Dijkstra's algorithm is asymptotically the fastest known single-source shortest path algorithm, with a time complexity of O(|E| + |V| log(|V|)), where |V| is the number of nodes and |E| is the number of edges in the graph.
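
The following Java sketch (our own illustration) implements the steps above with a binary-heap priority queue. Note that the O(|E| + |V| log(|V|)) bound quoted above assumes a Fibonacci heap; this version runs in O((|E| + |V|) log |V|):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class DijkstraDemo {
    // Returns dist[v] = shortest distance from source to v; graph.get(u) holds {v, weight} pairs.
    static int[] dijkstra(List<List<int[]>> graph, int source) {
        int n = graph.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;

        // Priority queue of {node, tentative distance}, smallest distance first.
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
        pq.add(new int[]{source, 0});

        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0], d = top[1];
            if (d > dist[u]) continue;                 // stale entry: u was already finalized
            for (int[] edge : graph.get(u)) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {           // relax edge (u, v)
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }

    static void addEdge(List<List<int[]>> g, int u, int v, int w) {
        g.get(u).add(new int[]{v, w});
        g.get(v).add(new int[]{u, w});
    }

    public static void main(String[] args) {
        // Small undirected example: 0-1 (4), 0-2 (1), 2-1 (2), 1-3 (5)
        List<List<int[]>> g = new ArrayList<>();
        for (int i = 0; i < 4; i++) g.add(new ArrayList<>());
        addEdge(g, 0, 1, 4); addEdge(g, 0, 2, 1); addEdge(g, 2, 1, 2); addEdge(g, 1, 3, 5);
        System.out.println(Arrays.toString(dijkstra(g, 0))); // [0, 3, 1, 8]
    }
}
```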

11. Write an algorithm to find the maximum subarray sum for a given array. In other words, find the maximum sum that can be achieved by taking contiguous elements from a given array of integers.

Kadane's algorithm can be used to find the maximum subarray sum of a given array. Kadane's algorithm scans the array from left to right. In the j-th step, it computes the largest sum of a subarray ending at position j and stores it in the variable "currentSum". It also keeps track of the largest subarray sum found anywhere in A[1...j] so far, stored in the variable "bestSum"; this is done by taking the maximum of "bestSum" and "currentSum" at every step. At the end, the value of "bestSum" is returned as the final answer to our problem.

Formally, Kadane's algorithm can be stated as follows:

  • Step 1: Initialize bestSum = INT_MIN and currentSum = 0 (for an empty subarray, currentSum is initialized to 0).
  • Step 2: For each element A[i] of the array, scanned from left to right:
  •   (a) currentSum = currentSum + A[i]
  •   (b) if (bestSum < currentSum) then bestSum = currentSum
  •   (c) if (currentSum < 0) then currentSum = 0
  • Step 3: Return bestSum (a runnable sketch of these steps follows).
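
A runnable Java version of these steps might look as follows (our own sketch; it assumes the array has at least one element and returns the best non-empty subarray sum):

```java
public class KadaneDemo {
    // Returns the maximum sum over all non-empty contiguous subarrays.
    static int maxSubarraySum(int[] a) {
        int bestSum = Integer.MIN_VALUE;
        int currentSum = 0;
        for (int value : a) {
            currentSum += value;                              // (a) extend the subarray ending here
            if (bestSum < currentSum) bestSum = currentSum;   // (b) record a new best
            if (currentSum < 0) currentSum = 0;               // (c) a negative prefix never helps: drop it
        }
        return bestSum;
    }

    public static void main(String[] args) {
        int[] a = {-2, 1, -3, 4, -1, 2, 1, -5, 4};
        System.out.println(maxSubarraySum(a)); // 6 (subarray 4, -1, 2, 1)
    }
}
```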

12. Describe the bubble sort algorithm with the help of an example.

Bubble sort, also known as sinking sort, is a basic sorting algorithm that repeatedly steps through a list, comparing neighbouring elements and swapping them if they are out of order. Passes through the list are repeated until it is sorted. This comparison sort gets its name from the way smaller or larger elements "bubble" to the top of the list. The simple method performs badly in real-world situations and is used mostly as a teaching aid. Let us take an example to understand how bubble sort works:

Let us assume that the array to be sorted is (50 10 40 20 80). The various passes or rounds of bubble sort are given below:

  • (50 10 40 20 80) –> (10 50 40 20 80), Since 50 > 10, the algorithm compares the first two elements and swaps them.
  • (10 50 40 20 80) –> (10 40 50 20 80), Since 50 > 40, the algorithm swaps the values at the second and third positions.
  • (10 40 50 20 80) –> (10 40 20 50 80), Since 50 > 20, the algorithm swaps the third and fourth elements.
  • (10 40 20 50 80) –> (10 40 20 50 80), The algorithm does not swap the fourth and fifth elements because they are already in order (80 > 50). This completes the first pass.
  • (10 40 20 50 80) –> (10 40 20 50 80), The elements at the first and second positions are already in order, so no swap is made.
  • (10 40 20 50 80) –> (10 20 40 50 80), Since 40 > 20, the algorithm swaps the values at the second and third positions.
  • (10 20 40 50 80) –> (10 20 40 50 80), The elements at the third and fourth positions are already in order, so no swap is made.
  • (10 20 40 50 80) –> (10 20 40 50 80), The elements at the fourth and fifth positions are already in order, so no swap is made. This completes the second pass.

The array is now sorted, but the algorithm does not yet know that it is complete. To confirm that the list is sorted, the algorithm must finish one whole pass without performing any swaps.

  • (10 20 40 50 80) –> (10 20 40 50 80), The elements at the first and second positions are in order, so no swap is made.
  • (10 20 40 50 80) –> (10 20 40 50 80), The elements at the second and third positions are in order, so no swap is made.
  • (10 20 40 50 80) –> (10 20 40 50 80), The elements at the third and fourth positions are in order, so no swap is made.
  • (10 20 40 50 80) –> (10 20 40 50 80), The elements at the fourth and fifth positions are in order, so no swap is made. Since this pass performed no swaps, the algorithm knows the array is sorted (a sketch with this early-exit check follows).
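
A minimal Java sketch of bubble sort with the early-exit check described above (our own illustration):

```java
import java.util.Arrays;

public class BubbleSortDemo {
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        int n = a.length;
        while (swapped) {
            swapped = false;                       // a full pass with no swaps means the list is sorted
            for (int i = 1; i < n; i++) {
                if (a[i - 1] > a[i]) {             // neighbours out of order: swap them
                    int tmp = a[i - 1]; a[i - 1] = a[i]; a[i] = tmp;
                    swapped = true;
                }
            }
            n--;                                   // the largest element has bubbled to the end
        }
    }

    public static void main(String[] args) {
        int[] data = {50, 10, 40, 20, 80};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [10, 20, 40, 50, 80]
    }
}
```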

13. Describe the quick sort algorithm.

Quicksort is an in-place sorting algorithm (an in-place algorithm transforms its input using no auxiliary data structure). It was created by the British computer scientist Tony Hoare in 1959 and published in 1961, and it is still a popular sorting algorithm. When implemented well, it can be somewhat faster than merge sort and about two or three times faster than heapsort.

Quicksort is based on the divide-and-conquer algorithmic paradigm. It operates by picking a 'pivot' element from the array and separating the other elements into two subarrays according to whether they are greater or less than the pivot. As a result, it is also known as partition-exchange sort. The subarrays are then sorted recursively. This can be done in place, with only a small amount of additional memory required for sorting.

Quicksort is a comparison sorting algorithm, which means it can sort objects of any type that have a "less-than" relation (technically, a total order) declared for them. Quicksort is not a stable sort, which means that the relative order of equal sort items is not retained in efficient implementations. Quicksort (like the partition method) must be written in such a way that it can be called for a range within a bigger array, even if the end purpose is to sort the entire array, due to its recursive nature. 

The following are the steps for in-place quicksort:

  • If there are less than two elements in the range, return immediately because there is nothing else to do. A special-purpose sorting algorithm may be used for other very small lengths, and the rest of these stages may be avoided.
  • Otherwise, choose a pivot value, which is a value that occurs in the range (the precise manner of choice depends on the partition routine, and can involve randomness).
  • Partition the range by reordering its elements while determining a point of division so that all elements with values less than the pivot appear before the division and all elements with values greater than the pivot appear after it; elements with values equal to the pivot can appear in either direction. Most partition procedures ensure that the value that ends up at the point of division is equal to the pivot, and is now in its ultimate location because at least one instance of the pivot is present (but termination of quicksort does not depend on this, as long as sub-ranges strictly smaller than the original are produced).
  • Apply the quicksort recursively to the sub-range up to the point of division and the sub-range after it, optionally removing the element equal to the pivot at the point of division from both ranges. (If the partition creates a potentially bigger sub-range near the boundary with all elements known to be equal to the pivot, these can also be omitted.)

Quicksort's mathematical analysis reveals that, on average, it takes O(n log(n)) time to sort n items. In the worst-case scenario, it runs in O(n^2) time.

Note: The algorithm's performance can be influenced by the partition routine (including the pivot selection) and other details not fully defined above, possibly to a large extent for specific input arrays. It is therefore crucial to define these alternatives before discussing quicksort's efficiency. 
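
For illustration, here is a small Java sketch of in-place quicksort using the Lomuto partition scheme with the last element as the pivot (our own example; it does not implement the randomized or special-case pivot choices discussed above):

```java
import java.util.Arrays;

public class QuickSortDemo {
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;                      // ranges of length 0 or 1 are already sorted
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);                   // sort the sub-range before the pivot
        quickSort(a, p + 1, hi);                   // sort the sub-range after the pivot
    }

    // Lomuto partition: the last element is the pivot; returns its final index.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp; // place the pivot at its final position
        return i;
    }

    public static void main(String[] args) {
        int[] data = {50, 10, 40, 20, 80};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [10, 20, 40, 50, 80]
    }
}
```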

14. Describe the merge sort algorithm.

Merge sort (also known as mergesort) is a general-purpose, comparison-based sorting algorithm developed in computer science. The majority of its implementations result in a stable sort, which indicates that the order of equal elements in the input and output is the same. In 1945, John von Neumann devised the merge sort method, which is a divide and conquer algorithm. The following is how a merge sort works conceptually:

  • Separate the unsorted list into n sublists, each with one element (a list of one element is considered sorted).
  • Merge sublists repeatedly to create new sorted sublists until only one sublist remains; that remaining sublist is the sorted list.

The time complexity of the Merge Sort Algorithm is O(nlog(n)) where n is the size of the list of the elements to be sorted while the space complexity of the Merge Sort Algorithm is O(n), that is, linear space complexity.
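
A top-down Java sketch of merge sort (our own illustration; it merges into new arrays rather than sorting in place, which is where the O(n) space comes from):

```java
import java.util.Arrays;

public class MergeSortDemo {
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;                      // a list of one element is sorted
        int mid = a.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    // Merge two sorted arrays into one sorted array (stable: ties keep the left element first).
    static int[] merge(int[] left, int[] right) {
        int[] out = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{50, 10, 40, 20, 80}))); // [10, 20, 40, 50, 80]
    }
}
```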


15. What are few of the most widely used cryptographic algorithms?

A few of the most widely used cryptographic algorithms are as follows:

  • DES and Triple DES.

Algorithm MCQ

Which of the following algorithms can be used to solve the Maximum Subarray Problem?

Which of the following algorithmic paradigms is used to implement sorting in the merge sort algorithm?

Bubble Sort's average case time complexity is: (Assume that the list which is being sorted is of size n)

A C ++ code snippet is given to you. Find out the time complexity of the given code snippet:

Which of the following is a correct implementation of recursive linear search?

When can we say that linear searching is better than the other searching algorithms?

A C ++ code snippet is given to you. Find out the time and space complexity of the given code snippet:

The recurrence relation for the code of bubble sort implemented recursively is?

Which of the following statements regarding merge sort is not true?

What is the worst case time complexity for linear search? (Assume the size of the list on which linear search is done is n)

Given an array of elements {15, 20, 47, 35} which is being sorted using the bubble sort algorithm. In how many iterations will the array be sorted?

What is the best case time complexity for linear search? (Assume the size of the list on which linear search is done is n)


  • Open access
  • Published: 19 August 2024

A many-objective evolutionary algorithm based on three states for solving many-objective optimization problem

  • Jiale Zhao 1,4,
  • Huijie Zhang 3,
  • Huanhuan Yu 2,4,
  • Hansheng Fei 2,4,
  • Xiangdang Huang 2,4 &
  • Qiuling Yang 2,4

Scientific Reports, volume 14, Article number: 19140 (2024)


Subjects: Computational models, Computational neuroscience, Computational science, Computer science, Machine learning

In recent years, researchers have taken up the many-objective optimization algorithm, which can optimize 5, 8, 10, 15 or 20 objective functions simultaneously, as a new research topic. However, current research on many-objective optimization technology still faces some challenges, for example the Pareto-resistance phenomenon and the difficulty of maintaining diversity. Based on the above problems, this paper proposes a many-objective evolutionary algorithm based on three states (MOEA/TS). Firstly, a feature extraction operator is proposed. It can extract the features of a high-quality solution set and then assist the evolution of the current individual. Secondly, based on the Pareto front layer, the concept of "individual importance degree" is proposed. The importance degree of an individual reflects the importance of the individual within the same Pareto front layer, so as to further distinguish the advantages and disadvantages of different individuals in the same front layer. Then, a repulsion field method is proposed. The diversity of the population in the objective space is maintained by the repulsion field, so that the population can be evenly distributed on the real Pareto front. Finally, a new concurrent algorithm framework is designed. In this framework, the algorithm is divided into three states, and each state focuses on a specific task. The population can switch freely among these three states according to its own evolution. The MOEA/TS algorithm is compared with 7 advanced many-objective optimization algorithms. The experimental results show that MOEA/TS is more competitive on many-objective optimization problems.


Introduction.

In reality, many optimization problems involve multiple conflicting objectives, such as the design of urban public transport routes 1 , production scheduling 2 , securities portfolio management 3 and so on. These types of optimization problems are called multi-objective optimization problems (MOPs). This means that no single solution makes all the objectives reach their optimum simultaneously; that is, improving one objective may lead to the deterioration of other objectives 4 , 5 . Consequently, the solutions of MOPs are usually a set of compromise solutions that trade off all objectives. The definition of MOPs is as follows:

\(\min f\left(x\right)={\left({f}_{1}\left(x\right),{f}_{2}\left(x\right),...,{f}_{m}\left(x\right)\right)}^{T},\quad \text{s.t.}\; x\in \Omega ,\; f:\Omega \to {R}^{m}\)

Among them, \(f\left(x\right)\) is the m-dimensional objective vector, which contains m conflicting objective functions; \({f}_{i}\left(x\right)\) represents the i-th objective function; x represents the n-dimensional decision variable; \(\Omega \) represents the decision space; \({R}^{m}\) represents the objective space.

In the field of multi-objective optimization, problems with 2 or 3 optimization objectives are called general multi-objective optimization problems (GMOPs). Problems with more than 3 optimization objectives are called many-objective optimization problems (MaOPs) 6 , 7 , 8 . GMOPs aren’t the focus of our attention, as there have been many reports about GMOPs 9 , 10 . On the contrary, MaOPs are the focus of our attention, as there are still some challenges to be solved. The fundamental difference between GMOPs and MaOPs is the number of optimization objectives. Assuming that the number of optimization objectives is m, the probability that one individual dominates another is \(1/{2}^{m-1}\) in theory 11 , 12 . This means that with the increase of the number of optimization objectives, traditional Pareto dominance will fail, Pareto resistance will occur, and most multi-objective optimization algorithms will lose selection pressure in terms of convergence.

In recent years, with the research and exploration of MaOPs, many-objective optimization technology has been developed to a certain extent, and basically 4 mainstream many-objective optimization algorithms have been formed 13 . The first is the many-objective optimization algorithm based on dominance. The algorithm modifies the definition of traditional Pareto domination by a domination relaxation technique to enhance the selection pressure of the algorithm in terms of convergence. α-dominance, ε-dominance and cone ε-dominance are all common domination relaxation techniques. Compared with traditional Pareto dominance, the effectiveness of dominance relaxation technology has been reported in many works. Therefore, dominance relaxation technology has been widely used to solve MaOPs. However, the current domination relaxation technique also faces two problems: (1) with the increase of the number of optimization objectives, the effect of the domination relaxation technique gets worse and worse; (2) the domination relaxation technique tends to make the population converge to a certain sub-region of the real Pareto front (PF).

The second is many-objective optimization algorithm based on index. The algorithm guides the selection and evolution of the population by integrating convergence and diversity into one index (such as IGD, HV). Its representative work includes: HypE, MaOEA/IGD, SMS-EMOA. However, the algorithm faces some problems when it is used to solve MaOPs, such as complex index calculation, difficult selection of reference point or reference PF.

The third is many-objective optimization algorithm based on decomposition. The algorithm transforms MaOPs into several single-objective optimization sub-problems through an aggregation function, and then drives the individuals in the neighborhood to update by neighborhood strategy, finally realizes the evolution of the whole population. Its representative work includes: MOEA/D, MOEA/D-D, MOEA/D-DU. However, many-objective optimization algorithm based on decomposition is only suitable for MaOPs with regular PF (such as the DTLZ1 problem). When dealing with MaOPs with irregular PF, many-objective optimization algorithm based on decomposition often performs poorly.

The fourth is many-objective optimization algorithm based on hybrid strategy. The algorithm adopts different search strategies in different environments (different stages or different sub-populations), and uses the advantages of their respective search strategies to deal with complex MaOPs. Its representative work includes: AHM, eMOFEOA, CPSO. In many reports, many-objective optimization algorithm based on hybrid strategy is more suitable for solving MaOPs.

According to the above analysis, this paper considers using the many-objective optimization algorithm based on hybrid strategy, and further proposes the many-objective evolutionary algorithm based on three states (MOEA/TS). The innovations and contributions of this paper are as follows: (1) A feature extraction operator is proposed. The feature extraction operator is a feature extractor, which can extract the features of the high-quality solution set, and then assist the evolution of the current individual. (2) Based on the Pareto front layer, the concept of “individual importance degree” is proposed. The importance degree of an individual can reflect the importance of the individual in the same Pareto front layer, so as to further distinguish the advantages and disadvantages of different individuals in the same front layer, and effectively solve the phenomenon of Pareto resistance. (3) A repulsion field method is proposed. The repulsion field is used to maintain the diversity of the population in the objective space, so that the population can be evenly distributed on the real PF. (4) Design a new concurrent algorithm framework. In the framework, the algorithm is divided into three states, and each state focuses on a specific task. The population can freely switch among these three states according to its own evolution.

The remainder of this paper is organized as follows: the "Preparatory work" section introduces the basic definitions, related work and research motivation; the "MOEA/TS algorithm" section introduces each part of the MOEA/TS algorithm in detail; the experiments section presents the test results of the MOEA/TS algorithm and 7 advanced many-objective optimization algorithms on various test problems, and then analyzes and summarizes them; the final section summarizes this article and looks forward to future work.

Preparatory work

Basic definition.

In this section, we will introduce some basic definitions related to many-objective optimization technology.

Definition of dominance: if solution x isn't worse than solution y in all objectives and solution x is better than solution y in at least one objective, it is said that x dominates y . That is, if \(\forall i\in \left\{1,2,3,...,m\right\}\) satisfies \({f}_{i}\left(x\right)\le {f}_{i}\left(y\right)\) and \(\exists j\in \left\{1,2,3,...,m\right\}\) satisfies \({f}_{j}\left(x\right)<{f}_{j}\left(y\right)\) , it is said that x dominates y .

Definition of non-dominated solution: if there are no solutions that can dominate x in the decision space, then x is called a Pareto optimal solution or a non-dominated solution. That is, if \(\nexists {x}^{*}\in \Omega \) makes x* dominate x , then x is called a Pareto optimal solution or a non-dominated solution.

Definition of Pareto optimal solution set: the set composed of all Pareto optimal solutions is called the Pareto optimal solution set (PS). The mathematical description of PS is as follows:

\(PS=\left\{x\in \Omega \mid \nexists \;{x}^{*}\in \Omega \;\text{such that}\; {x}^{*}\;\text{dominates}\; x\right\}\)

Definition of Pareto front: the mapping of PS in the objective space is called the Pareto front (PF). The mathematical description of PF is as follows:

\(PF=\left\{f\left(x\right)\mid x\in PS\right\}\)

The goal of the many-objective optimization technology is to find a set of non-dominated solutions that are close to the real PF (convergence) and make them well distributed on the real PF (diversity).

Related work

In recent years, many scholars have conducted in-depth research and exploration in the many-objective optimization technology.

Many-objective optimization algorithm based on dominance

Considering the limitations of Pareto dominance relationship in high-dimensional objective space, Zhou et al 14 proposed a many-objective optimization algorithm based on dominance relation selection. Firstly, they introduced an angle domination relationship with higher selection pressure based on the traditional Pareto domination relationship, and designed a new dominance selection strategy. Additionally, they proposed an angle-based individual distribution method to ensure even population distribution in the objective space. The algorithm shows strong competitiveness in solving MaOPs. Wang et al 15 believed that as the number of objectives increased, the traditional dominance relationship would become invalid. Therefore, they proposed a modified dominance relation. That is, they used penalty-based adaptive matrix regions to assist the traditional dominance relationship. Further, for MaOPs with irregular Pareto fronts, they introduced a population-based adaptive adjustment method to replace the predefined weight vector. On this basis, for MaOPs, they developed a many-objective optimization algorithm based on modified dominance relation and adaptive adjustment method. Zhang et al 16 believed that the current many-objective optimization algorithms focused too much on convergence, which would cause the population to converge to a certain sub-region of the real Pareto front. In order to solve this problem, they proposed a many-objective optimization algorithm based on double distance domination. In this algorithm, double distance can not only measure the convergence of the algorithm to adapt to different Pareto fronts, but also combine angle-based niche technology to emphasize the diversity of the algorithm. In addition, they also designed a special mutation operator. This operator can generate high-quality individuals in sparse areas to improve the diversity of the algorithm.

Many-objective optimization algorithm based on index

Aiming at the high complexity of hypervolume computation, Shang et al 17 proposed a new multi-objective evolutionary algorithm (MOEA) based on the R2 index, namely the R2HCA-EMOA algorithm. The core idea of this algorithm is to use R2 index variables to approximate the contribution of hypervolume. The basic framework of the proposed algorithm is similar to that of SMS-EMOA. In order to improve the calculation efficiency of the algorithm, a utility tensor structure is introduced to calculate the R2 index variables. In addition, a normalization mechanism is incorporated into the R2HCA-EMOA algorithm to improve its performance. Zhang et al 18 believed that the loss of selection pressure was the core reason for the poor performance of such algorithms. In order to solve this problem, they proposed a many-objective optimization algorithm based on fitness evaluation and hierarchical grouping. The fitness evaluation method combines a convergence measure based on the cos function and a diversity measure based on angle to create selection pressure for both convergence and diversity. In order to further strengthen the selection pressure, they proposed a hierarchical grouping strategy: firstly, individuals are divided into different layers by front index, and then individuals in the same layer are divided into different groups by R2 index. Although some indexes can approximate the contribution of HV, Nan et al 19 believed that the key to performance evaluation is to find the worst solution rather than accurately approximating the HV value of each solution. In order to improve the ability to identify the worst solution, they proposed a two-stage R2 index evaluation method. In the first stage, the R2 indexes of all individuals are roughly evaluated to select some candidate solutions. In the second stage, these candidate solutions are accurately evaluated. Finally, they proposed a many-objective optimization algorithm based on the two-stage R2 index.

Many-objective optimization algorithm based on decomposition

In order to balance the convergence and diversity of the decomposition-based algorithm and reduce its dependence on the real PF direction, Wu et al 20 developed a many-objective optimization algorithm based on antagonistic decomposition method. This method utilizes the complementary characteristics of different sub-problems in a single example. Specifically, two populations are co-evolved by two sub-problems with different contours and opposite search directions. In order to avoid allocating redundant computing resources to the same area of PF, two populations are matched into one-to-one pairing according to their working areas on PF. In mating selection, each solution pair can only contribute one parent at most. In order to improve the performance of decomposition-based algorithms, Fan et al 21 proposed a differential multi-objective optimization algorithm based on decomposition. Firstly, they designed a neighborhood intimacy factor to improve the diversity of the algorithm based on the characteristics of neighborhood search. Then, they introduced a Gaussian mutation operator with dynamic step size to enhance the algorithm’s ability to escape from local optimal regions and improve convergence. Finally, they combined a difference strategy with the decomposition-based multi-objective optimization algorithm to further strengthen its evolutionary ability. Peng et al 22 believed that data dimensionality reduction could be applied to the objective space. Based on this consideration, they proposed a many-objective optimization algorithm based on projection. Firstly, they used the idea of data dimensionality reduction and spatial decomposition to divide the objective space into projection plane and free dimension. Then, a double elite strategy was used to maintain the balance between convergence and diversity of the algorithm. Finally, the algorithm based on decomposition was used as the algorithm of free dimension to solve MaOPs.

Many-objective optimization algorithm based on hybrid strategy

Aiming at convergence problem and diversity problem of the algorithm, Sun et al 23 proposed a many-objective optimization algorithm based on two independent stages. The algorithm deals with convergence and diversity problems in two independent and successive stages. Firstly, they introduced a non-dominated dynamic weight aggregation method, which is capable of identifying the Pareto optimal solutions of MaOPs. Then, they used these solutions to learn the Pareto optimal subspace in order to solve the convergence problem. Finally, the diversity problem was solved by using reference lines in the Pareto optimal subspace. Considering the advantages of the multi-objective and multi-population (MPMO) framework in solving MaOPs, Yang et al 24 proposed an algorithm based on the MPMO framework. The algorithm adopts the deviation sorting (BS) method to solve MaOPs, so as to obtain good convergence and diversity. In terms of convergence, the BS method is applied to each population in the MPMO framework, and the effect of non-dominant sorting is enhanced by the optimization objectives of the corresponding population. In terms of diversity, the maintenance method based on reference vector is used to save the diversity solutions. Aiming at the five-objective job shop scheduling problem (JSSP), Liu et al 25 proposed a new genetic algorithm based on the MPMO framework. Firstly, five populations are used to optimize five objectives, respectively. Secondly, in order to prevent each population from focusing only on its corresponding single objective, an archive sharing technology (AST) is proposed to store the elite solutions collected from five populations, so that the population can obtain the optimization information of other objectives from the archive. Thirdly, the archive updating strategy (AUS) is proposed to further improve the quality of the solutions in the archive.

Research motivation

Based on the related work, we believe that there are still the following problems in the current many-objective optimization technology:

(1) The diversity and convergence of the algorithm are difficult to balance. Most algorithms can’t coordinate the balance between them well, and they either emphasize convergence or diversity too much, which leads to poor quality of the non-dominated solution set.

(2) It is difficult to maintain the convergence of the algorithm. When the number of optimization objectives is large, the algorithm will produce Pareto resistance, and the traditional Pareto dominance may fail.

(3) It is difficult to maintain the diversity of the algorithm. Especially when the real PF is complex or the dimensionality of the objective space is high, individuals may cluster together, and the population may not be evenly distributed on the real PF.

(4) The evolution efficiency of the algorithm is low. The traditional evolution operators have strong randomness and low evolution efficiency, and aren’t suitable for dealing with MaOPs.

Therefore, solving these problems and providing a good many-objective optimization algorithm constitute the research motivation of this paper.

For problem 1, some work attempts to separate the convergence optimization and diversity optimization of the algorithm, thus designing a concurrent algorithm architecture. Concurrent algorithm architecture means that only one of convergence or diversity is considered in one iteration instead of considering both convergence and diversity simultaneously. In order to solve GMOPs, Professor Ye Tian 26 tried to design a concurrent algorithm architecture and proposed the MSEA algorithm, and the experimental results were satisfactory. Therefore, it seems to be a feasible path to solve MaOPs by using concurrent algorithm architecture. However, recent research 23 shows that in MaOPs, the concurrent algorithm architecture seems to be unstable, and the experimental results fluctuate greatly (such as MaOEA/IT algorithm). Because when the algorithm only considers the convergence of the population, it often affects the diversity of the population; Similarly, when the algorithm only considers the diversity of the population, it often affects the convergence of the population. If a coordination intermediary can be added to the concurrent algorithm architecture to alleviate the contradiction between diversity and convergence, the concurrent algorithm architecture will become stable and its superiority will be truly reflected. Based on this motivation, this paper proposes a new concurrent algorithm framework. In the new algorithm framework, the algorithm is divided into three states, namely, convergence maintenance state, diversity maintenance state and coordination state. Each state focuses on a specific task. That is, the convergence maintenance state is responsible for improving the population convergence; Diversity maintenance state is responsible for improving population diversity; the coordination state is responsible for coordinating the contradiction between diversity and convergence. The population can freely switch among these three states according to its own evolution.

For problem 2, some scholars try to modify the definition of traditional Pareto dominance by using dominance relaxation technology to enhance the selection pressure of the algorithm in terms of convergence. However, with the increase of the number of optimization objectives, the effect of dominance relaxation technology is getting worse and worse. They only focus on the modification of Pareto domination definition, but ignore the difference between objective values. If we can distinguish the importance of different individuals by using the difference between the objective values, we can further create the selection pressure of the algorithm in terms of convergence, and finally Pareto resistance will be eliminated. Therefore, based on Pareto front layer, this paper proposes the concept of “individual importance degree”. The importance degree of an individual can reflect the importance of the individual in the same Pareto front layer, so as to further distinguish the advantages and disadvantages of different individuals in the same front layer, and effectively solve the phenomenon of Pareto resistance. Obviously, compared with domination relaxation technique, individual importance degree has greater advantages.

For problem 3, the traditional diversity maintenance technology isn’t suitable for high-dimensional objective space. For instance: the niche method, the density evaluation method, and the weight vector method. In the field of microphysics, when the distance between particles is too close, repulsion will push the particles away from their neighbors. On the contrary, when the distance between particles is too great, the repulsion will decrease and the particles tend to be close to the neighboring particles. This way makes the distribution of particles present a state of mutual coordination. Based on the characteristics of particle distribution, a repulsion field method is proposed in this paper. The repulsion field is used to maintain the diversity of the population in the objective space, so that the population can be evenly distributed on the real PF.

For problem 4, traditional evolution operators aren’t suitable for dealing with MaOPs. Because traditional evolution operators have strong randomness and low evolution efficiency. For instance: the binary crossover operator, the polynomial mutation operator, and the differential evolution operator. In principal component analysis, the decomposition of the covariance matrix and correlation matrix is a very important step. By decomposing the covariance matrix or the correlation matrix, we can obtain a set of orthogonal bases. These orthogonal bases are the most important features of the original data 27 . Therefore, this paper designs a feature extraction operator based on Cholesky decomposition 28 . The feature extraction operator can be understood as a feature extractor. It can extract the features of the high-quality solution set, and then assist the evolution of the current individual. Obviously, compared with traditional evolution operators, the feature extraction operator has higher evolution efficiency.

MOEA/TS algorithm

Feature extraction operator.

The feature extraction operator is a feature extractor, which can extract the features of the high-quality solution set, and then assist the evolution of the current individual. The workflow of the feature extraction operator is shown in Fig.  1 .

Figure 1: The workflow of the feature extraction operator.

In the first step, W high-quality solutions \({x}^{1},{x}^{2},{x}^{3},...,{x}^{W}\) are selected from the population. These W solutions will form the high-quality solution set S.

In the second step, calculate the mean \(\overline{x}\) and covariance matrix A of the high-quality solution set S:

Among them, \({x}^{i}={\left({x}_{1}^{i},{x}_{2}^{i},{x}_{3}^{i},...,{x}_{n}^{i}\right)}^{T}, i\in (1,...,W); {x}_{j}={\left({x}_{j}^{1},{x}_{j}^{2},{x}_{j}^{3},...,{x}_{j}^{W}\right)}^{T}, j\in (1,...,n)\)

In the third step, Cholesky decomposition is performed on the covariance matrix A. That is, the covariance matrix A is decomposed into the product of the lower triangular matrix and the transposition of the lower triangular matrix. Assuming that the lower triangular matrix is L, there is

Through formula \(A=L*{L}^{T}\) , we can calculate \({a}_{11}={l}_{11}^{2}\) , that is, \({l}_{11}=\sqrt{{a}_{11}}\) . Then, according to \({a}_{i1}={l}_{i1}*{l}_{11}\) , we can get \({l}_{i1}={a}_{i1}/{l}_{11}\) , so we can get the first column element of matrix L .

Assuming that we have calculated the first k-1 columns of the matrix L, the k-th column can be obtained through

\({l}_{kk}=\sqrt{{a}_{kk}-\sum_{j=1}^{k-1}{l}_{kj}^{2}},\qquad {l}_{ik}=\left({a}_{ik}-\sum_{j=1}^{k-1}{l}_{ij}{l}_{kj}\right)/{l}_{kk},\; i=k+1,...,n\)

In this way, we can solve the k-th column of matrix L from the first k-1 columns of matrix L. Then, we can solve the whole matrix L by recursion.
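
To make the recursion concrete, here is a small Java sketch of the Cholesky decomposition described above (our own illustration, independent of the paper's implementation; it assumes the covariance matrix is symmetric positive definite):

```java
public class CholeskyDemo {
    // Returns the lower triangular L with A = L * L^T.
    static double[][] cholesky(double[][] A) {
        int n = A.length;
        double[][] L = new double[n][n];
        for (int k = 0; k < n; k++) {
            double sum = 0.0;
            for (int j = 0; j < k; j++) sum += L[k][j] * L[k][j];
            L[k][k] = Math.sqrt(A[k][k] - sum);          // diagonal entry of column k
            for (int i = k + 1; i < n; i++) {
                double s = 0.0;
                for (int j = 0; j < k; j++) s += L[i][j] * L[k][j];
                L[i][k] = (A[i][k] - s) / L[k][k];       // below-diagonal entries of column k
            }
        }
        return L;
    }

    public static void main(String[] args) {
        double[][] A = {{4, 2}, {2, 3}};
        double[][] L = cholesky(A);
        System.out.println(L[0][0] + " " + L[1][0] + " " + L[1][1]); // 2.0 1.0 1.4142...
    }
}
```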

In the fourth step, the sampling vector \(s={\left({s}_{1},...,{s}_{n}\right)}^{T}\) is generated by Gaussian distribution \(N\left(\text{0,0.7}\right)\) . Then, a feature solution is generated.

Among them, \({x}^{feature}={\left({x}_{1}^{feature},...,{x}_{n}^{feature}\right)}^{T}\)

It should be noted that the standard deviation std is an important parameter of the Gaussian distribution. In this paper, the standard deviation std is set to 0.7. The parameter analysis verifies that 0.7 is a reasonable standard deviation. For more details on parameter analysis, please browse the experiment chapter (Parameter sensitivity analysis section).

In the fifth step, assuming that the selected individual is \({x}^{i}({x}_{1}^{i},...,{x}_{n}^{i})\) . Based on binary crossover operator 29 and feature solution, the formula of generating offspring individual is as follows:

Among them, \({c({c}_{1},...,{c}_{n})}^{T}\) is the offspring individual. \({\beta }_{k}\) is dynamically determined by the feature factor \(\mu \) :

Among them, \(rand\) is used to generate a random number between 0 and 1; \(randi(0,1)\) is used to randomly generate 0 or 1.

For the design principle of formula ( 15 ), please browse the Supplementary Information Document.

In the sixth step, the individual \(c\) is detected and repaired. When some components in individual \(c\) exceed the upper bound or lower bound, these components need to be repaired. The repair formula is as follows:

Among them, \({c}_{i}^{u}\) and \({c}_{i}^{l}\) represent the upper bound and lower bound of the i-th component of individual \(c\), respectively. \({c}{\prime}={\left({c}_{1}{\prime},...,{c}_{n}{\prime}\right)}^{T}\) represents the repaired individual.

Individual importance degree based on the Pareto front layer

When the number of optimization objectives is large, the algorithm will produce Pareto resistance, and the traditional Pareto dominance may fail. Some scholars try to modify the definition of traditional Pareto dominance by using dominance relaxation technology to enhance the selection pressure of the algorithm in terms of convergence. However, with the increase of the number of optimization objectives, the effect of dominance relaxation technology is getting worse and worse. They only focus on the modification of Pareto domination definition, but ignore the difference between objective values. Figure  2 shows 4 non-dominant individuals. Among them, individual B is the closest to Origin O, individual C is second, individual A is third, and individual D is the farthest from Origin O. This means that in the population, individual B is the most important, individual C is the second most important, individual A is the third most important, and individual D is the least important. In addition, we can also find from Fig.  2 that there is a significant difference between the objective values of individual B and the objective values of other individuals, that is, \(\sum_{X\in \left\{A,C,D\right\}}\sum_{i=1}^{2}{f}_{i}(B)-{f}_{i}(X)\) is the smallest. This shows that there is a special relationship between the importance of individuals and the difference of the objective values. Based on this discovery, if we can distinguish the importance of different individuals by using the difference between the objective values, we can further create the selection pressure of the algorithm in terms of convergence, and finally Pareto resistance will be eliminated. Therefore, based on Pareto front layer, we propose the concept of “individual importance degree”.

Figure 2: Schematic diagram of individual importance.

Assuming that there are n solutions in a certain Pareto front layer, the objective function values of these solutions are normalized to [0,1] based on the maximum and minimum values of each objective function. \({f}_{k}{\prime}({x}^{i})\) represents the k-th normalized objective function value of individual \({x}^{i}\) .

Define the Pareto dominance function

The trend of the Pareto dominance function is shown in Fig.  3 .

Figure 3: The trend of the Pareto dominance function.

Pareto dominance function can be used to reflect the dominance degree among different individuals. For example, \(PDF({f}_{k}{\prime}({x}^{i})-{f}_{k}{\prime}({x}^{j}))\) represents the dominance degree of individual \({x}^{i}\) to individual \({x}^{j}\) on the k-th objective function; \(PDF({f}_{k}{\prime}({x}^{j})-{f}_{k}{\prime}({x}^{i}))\) represents the dominance degree of individual \({x}^{j}\) to individual \({x}^{i}\) on the k-th objective function; Obviously, the greater the dominance degree, the better one individual is than another on one objective function. Therefore, on one objective function, the dominance degree of one individual to another can be expressed as:

On this basis, the dominance degree of one individual to another can be expressed as:

Further, the importance degree of one individual to another can be expressed as:

Importance degree can indicate the importance of one individual to another. The greater the importance degree, the more important one individual is than another.

Since a certain Pareto front layer has n solutions, each solution needs to be compared with other n-1 solutions, so as to construct n-1 competing pairs. Assuming that an individual is \({x}^{i}\) , then the n-1 competing pairs are \(\left({x}^{i},{x}^{1}\right),\left({x}^{i},{x}^{2}\right),...,\left({x}^{i},{x}^{j}\right),...,\left({x}^{i},{x}^{n}\right)\) , respectively (note: \(i\ne j\) ). Thus, the importance degree of individual \({x}^{i}\) to the other n-1 individuals is \(Imp\left({x}^{i},{x}^{1}\right),Imp\left({x}^{i},{x}^{2}\right),...,Imp\left({x}^{i},{x}^{j}\right),...,Imp\left({x}^{i},{x}^{n}\right)\) , respectively (note: \(i\ne j\) ).

Finally, the importance degree of the individual \({x}^{i}\) can be expressed as:

The importance degree of one individual can reflect the importance of the individual in the same Pareto front layer, so as to further distinguish the advantages and disadvantages of different individuals in the same front layer. The greater the importance degree of one individual, the more important it is in the same Pareto front layer.

Figure 4 shows the use of individual importance degree. Firstly, based on a certain Pareto front layer, the competition pools and competition pairs are constructed. Then, the individual importance degree of different individuals is calculated by Formula ( 24 ). Finally, the importance of different individuals in the same Pareto front layer is obtained.

Figure 4: The use of individual importance degree.

Taking the two-objective optimization problem as an example, it is assumed that there are 4 non-dominant individuals. They are \(A\left(\text{17,5}\right)\) , \(B\left(\text{9,7}\right)\) , \(C\left(\text{7,15}\right)\) and \(D\left(\text{5,25}\right)\) , respectively. It means that these 4 individuals belong to the first non-dominant layer, and their advantages and disadvantages can’t be compared by the non-dominant rank. The distribution of 4 individuals in the objective space is shown in Fig.  5 (a).

Figure 5: The distribution of 4 individuals.

In order to better compare the advantages and disadvantages of these 4 individuals, we use the individual importance degree to deal with these 4 individuals. Firstly, the objective function values of these 4 individuals are normalized to [0,1]. After normalization, the coordinates of these 4 individuals are \(A\left(\text{1,0}\right)\) , \(B\left(\text{0.333,0.1}\right)\) , \(C\left(\text{0.167,0.5}\right)\) and \(D\left(\text{0,1}\right)\) , respectively. The distribution of 4 individuals in the normalized objective space is shown in Fig.  5 (b). Next, according to Fig.  4 , the competition pools and competition pairs are constructed. Then, according to formula ( 22 ) and formula ( 23 ), the \(P\left({x}^{i},{x}^{j}\right)\) and \(Imp\left({x}^{i},{x}^{j}\right)\) of each competition pair are calculated. The calculation results are shown in Table 1 . Finally, according to the formula ( 24 ), the importance degree of these 4 individuals is 0.1918, 0.9488, 0.6673 and 0.1921, respectively. The results show that individual B is the most important, individual C is the second most important, individual D is the third most important, and individual A is the least important. This result is consistent with the intuitive perception that we get from Fig.  5 (b). Based on the above example, we believe that the concept of individual importance degree and related process are effective and can achieve the desired goals.
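
As a side illustration of the normalization step used in this example, the following Java sketch (our own code, not from the paper) performs per-objective min-max normalization and reproduces the coordinates quoted above:

```java
import java.util.Arrays;

public class NormalizationDemo {
    // Normalize each objective (column) of the points to [0, 1] using its min and max.
    static double[][] normalize(double[][] points) {
        int n = points.length, m = points[0].length;
        double[][] out = new double[n][m];
        for (int k = 0; k < m; k++) {
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double[] p : points) { min = Math.min(min, p[k]); max = Math.max(max, p[k]); }
            for (int i = 0; i < n; i++) out[i][k] = (points[i][k] - min) / (max - min);
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] front = {{17, 5}, {9, 7}, {7, 15}, {5, 25}}; // individuals A, B, C, D from the example
        for (double[] p : normalize(front)) System.out.println(Arrays.toString(p));
        // [1.0, 0.0], [0.333..., 0.1], [0.166..., 0.5], [0.0, 1.0]
    }
}
```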

Repulsion field method

In the field of microphysics, when the distance between particles is too close, repulsion will push the particles away from their neighbors. On the contrary, when the distance between particles is too great, the repulsion will decrease and the particles tend to be close to the neighboring particles. This way makes the distribution of particles present a state of mutual coordination (As shown in Fig.  6 ). Based on the characteristics of particle distribution, a repulsion field method is proposed in this paper. The repulsion field is used to maintain the diversity of the population in the objective space, so that the population can be evenly distributed on the real PF.

figure 6

The uniform distribution of microscopic particles.

Firstly, according to the maximum and minimum values of each objective function, the objective function values of all solutions in the population are normalized to [0,1]. \({f}_{k}{\prime}({x}^{i})\) represents the k-th normalized objective function value of individual \({x}^{i}\) .

Then, a repulsion potential field with repulsion radius r is constructed around each individual. Suppose a repulsion potential field has been constructed for individual \({x}^{i}\); all individuals within this field are then subject to the repulsion potential of individual \({x}^{i}\). The magnitude of the repulsion potential depends on the distance between the other individuals and individual \({x}^{i}\). When another individual is outside the repulsion potential field of individual \({x}^{i}\), the repulsion potential is 0. When another individual is within the repulsion potential field of individual \({x}^{i}\), the closer it is to individual \({x}^{i}\), the greater the repulsion potential it receives. For an individual \({x}^{j}\), the repulsion potential it receives is

Among them, \(\rho \) is the gain coefficient of the repulsion potential field, usually set to 1; \(r\) is the radius of the repulsion potential field; \(dis\left({x}^{j},{x}^{i}\right)\) represents the Euclidean distance between individual \({x}^{j}\) and individual \({x}^{i}\) in the objective space. The formula is as follows:

Further, the repulsion \(Rep\left({x}^{j},{x}^{i}\right)\) that individual \({x}^{j}\) obtains is the negative gradient of the repulsion potential \(Repfield\left({x}^{j},{x}^{i}\right)\) . The formula is as follows:

It means that when \(dis\left({x}^{j},{x}^{i}\right)\le r\), the smaller \(dis\left({x}^{j},{x}^{i}\right)\) is, the larger \(Rep\left({x}^{j},{x}^{i}\right)\) is; when \(dis\left({x}^{j},{x}^{i}\right)>r\), \(Rep\left({x}^{j},{x}^{i}\right)=0\).

Based on the repulsion potential field, the total repulsion potential that individual \({x}^{j}\) obtains is

Finally, the total repulsion that individual \({x}^{j}\) obtains is

It should be noted that the repulsion potential and the repulsion proposed in this paper are both vectors; that is, they have both magnitude and direction. The addition of different repulsions is therefore a vector synthesis rather than a purely numerical addition. This is an obvious feature distinguishing the repulsion field method from other scalar-function methods (such as the niche method). Figure 7 shows the vector synthesis of repulsion in a two-dimensional objective space. Among them, \(F_{SUM}\) is the total repulsion that individual A receives; \(F_{BA}\) is the repulsion generated by individual B on individual A; \(F_{CA}\) is the repulsion generated by individual C on individual A.

figure 7

The vector synthesis process of repulsion.
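The exact repulsion formulas are given by the equations referenced above, which are not reproduced in this excerpt. Purely to illustrate the vector synthesis of Fig. 7, the sketch below assumes the classical artificial-potential-field form \(0.5\rho(1/d-1/r)^{2}\) for the potential and uses its negative gradient as the repulsion; the function and variable names are ours.

```python
import numpy as np

def repulsion(fj, fi, r=0.1, rho=1.0):
    """Repulsion exerted on individual j by individual i in normalized objective space.

    A hedged sketch assuming the classical artificial-potential-field form
    U = 0.5 * rho * (1/d - 1/r)**2 for d <= r (0 otherwise); the repulsion is the
    negative gradient of U, directed from f_i towards f_j. The paper's exact
    formulas may differ.
    """
    diff = fj - fi
    d = np.linalg.norm(diff)            # Euclidean distance in the objective space
    if d >= r or d == 0.0:
        return np.zeros_like(fj)        # outside the repulsion field: no repulsion
    magnitude = rho * (1.0 / d - 1.0 / r) / d**2
    return magnitude * (diff / d)       # vector pointing away from f_i

def total_repulsion(fj, others, r=0.1, rho=1.0):
    """Vector sum (not scalar sum) of the repulsions exerted by all other individuals."""
    return sum((repulsion(fj, fi, r, rho) for fi in others), np.zeros_like(fj))

# Toy example with three individuals A, B, C in a two-objective normalized space.
A = np.array([0.50, 0.50])
B = np.array([0.46, 0.52])
C = np.array([0.52, 0.45])
F_sum = total_repulsion(A, [B, C], r=0.1)
print(F_sum)   # total repulsion acting on A (analogue of F_SUM in Fig. 7)
```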

In the repulsion field method, an individual with a large repulsion usually lies within multiple repulsion potential fields constructed by other individuals. It indicates that the individual is located in a dense area of the objective space, close to other individuals. Therefore, individuals with large repulsion are not conducive to maintaining population diversity. Naturally, we hope that such individuals can move away from dense areas of the objective space along the direction of the repulsion. Based on this idea, we first find the individuals closest to the direction of the repulsion to construct a high-quality solution set. Then, the feature extraction operator is used to extract the location features of the high-quality solution set. Finally, based on these features, individuals with large repulsion evolve along the direction of the repulsion. As shown in Fig. 8, individuals D and E are the individuals closest to the direction of the repulsion. The feature extraction operator extracts the position features of these two individuals, and based on these features, individual A evolves into individual A*, which is far away from the previous dense area.

figure 8

Individual A is far away from the dense area.
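The paper's exact definition of "closest to the direction of the repulsion" (the distance \({dis}_{j}\) used later in Algorithm 3) is not reproduced in this excerpt. One reasonable reading, used in the hedged sketch below, is to rank the remaining individuals by the cosine of the angle between the total-repulsion direction and the vector from the crowded individual to each candidate, and keep the W best-aligned ones as the high-quality set S.

```python
import numpy as np

def select_along_repulsion(f_crowded, F_others, rep_dir, W=2):
    """Pick the W individuals whose direction (seen from the crowded individual)
    is closest to the total-repulsion direction.

    This is only one plausible interpretation of the dis_j step in the diversity
    maintenance procedure; the paper's exact definition may differ.
    """
    rep_unit = rep_dir / (np.linalg.norm(rep_dir) + 1e-12)
    vecs = F_others - f_crowded
    cos = vecs @ rep_unit / (np.linalg.norm(vecs, axis=1) + 1e-12)
    order = np.argsort(-cos)            # largest cosine = best aligned with the repulsion
    return order[:W]

# Toy example: individual A with a repulsion direction pointing "up-right".
A = np.array([0.5, 0.5])
others = np.array([[0.6, 0.62],    # D: roughly along the repulsion direction
                   [0.62, 0.58],   # E: roughly along the repulsion direction
                   [0.45, 0.48]])  # B: behind A, opposite direction
idx = select_along_repulsion(A, others, rep_dir=np.array([1.0, 1.0]), W=2)
print(idx)   # indices of D and E, i.e. [0 1]
```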

It should be noted that the feature extraction operator has the randomness caused by Gaussian sampling. Therefore, the evolution of individuals also has a certain degree of randomness.

Framework of MOEA/TS algorithm

The framework of the MOEA/TS algorithm is shown in Fig. 9. Firstly, the relevant parameters of the algorithm are initialized. Secondly, the current state of the algorithm is determined.

If the algorithm is in the convergence maintenance state, the following steps are adopted to improve convergence: (1) Randomly select a parent individual. (2) Use the feature extraction operator to generate an offspring individual. (3) If the offspring individual is superior to the individual with the worst convergence in the population, replace that worst individual with the offspring.

If the algorithm is in the diversity maintenance state, the following steps are adopted to improve diversity: (1) Select the individual with the worst diversity in the population. (2) Use the feature extraction operator to generate an offspring individual. (3) If the offspring individual is superior to the individual with the worst diversity in the population, replace that worst individual with the offspring.

If the algorithm is in the coordination state, the following steps are adopted to coordinate convergence and diversity: (1) Randomly select a parent individual. (2) Use the Gaussian mutation operator to generate an offspring individual. (3) If the offspring individual is superior to the parent individual in both convergence and diversity, replace the parent with the offspring.

Then, it is checked whether the algorithm has completed the i-th iteration. If not, the corresponding maintenance or coordination step is re-executed. If the i-th iteration is complete, the current state of the algorithm is updated. Finally, it is checked whether the algorithm has terminated. If not, the corresponding maintenance or coordination step is performed according to the current state of the algorithm; if the algorithm has terminated, the population is output.

figure 9

The framework of MOEA/TS algorithm.

Description of MOEA/TS algorithm

Main framework.

This section describes the main framework of the MOEA/TS algorithm. The pseudo-code of the main framework is shown in Algorithm 1. The main steps are as follows. In line (1), population P, the repulsion field radius r, and the state value are initialized (state=1 means the algorithm is in the convergence maintenance state, state=2 the diversity maintenance state, and state=3 the coordination state). In line (2), the Front value, Imp value and Rep value of each solution in population P are calculated (the Front value is calculated by the fast non-dominated sorting method). In line (3), it is judged whether the algorithm meets the termination condition (usually a maximum number of iterations). In line (4), the count value is initialized; it records the number of updated solutions in the i-th iteration. In lines (5)-(11), the population update mode is selected according to the current state of the algorithm: when state=1, the convergence of the population is updated; when state=2, the diversity of the population is updated; when state=3, the convergence and diversity of the population are coordinated. In line (12), the state value of the algorithm is updated according to the current state and the count value.

figure a

Main framework.
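To make the control flow of Algorithm 1 concrete, the following structural sketch implements the three-state main loop in Python. The step functions are stubs standing in for Algorithms 2-4, the Front/Imp/Rep bookkeeping is omitted, one "iteration" is assumed to consist of N single-solution updates, and all names are ours rather than the paper's.

```python
import numpy as np

def convergence_step(P, rng):   # placeholder for Algorithm 2
    return P, False

def diversity_step(P, rng):     # placeholder for Algorithm 3
    return P, False

def coordination_step(P, rng):  # placeholder for Algorithm 4
    return P, False

def moea_ts_skeleton(n_var=10, N=220, max_iter=300, T=0.05, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.random((N, n_var))              # line (1): initialize population and parameters
    state = 1                               # 1: convergence, 2: diversity, 3: coordination
    steps = {1: convergence_step, 2: diversity_step, 3: coordination_step}
    for _ in range(max_iter):               # line (3): termination condition (max iterations)
        count = 0                           # line (4): number of updated solutions this iteration
        for _ in range(N):                  # lines (5)-(11): updates according to the current state
            P, updated = steps[state](P, rng)
            count += int(updated)
        if count <= T * N:                  # line (12): state update (Algorithm 6, sketched later)
            state = state % 3 + 1
    return P

moea_ts_skeleton(max_iter=2)
```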

Convergence maintenance

This section describes the convergence maintenance of the population. The pseudo-code of convergence maintenance is shown in Algorithm 2. The main steps are as follows. In line (1), the algorithm enters the i-th iteration. In line (2), one parent individual is randomly selected from population P. In lines (3)-(4), the high-quality solution set S is constructed based on the Front and Imp values. In line (5), the feature extraction operator extracts the features of the high-quality solution set S and assists the evolution of the parent individual. In line (6), the individual with the worst convergence in the population is found. In lines (7)-(13), if the offspring individual is superior to the individual with the worst convergence, the worst individual is replaced by the offspring and flag is set to 1; otherwise flag is set to 0. The flag value indicates whether population P has changed (flag=0 means P is unchanged, flag=1 means P has changed). In lines (14)-(16), if flag equals 1, the count value is updated, and the Front value, Imp value and Rep value of each solution in population P are updated.

figure b

Convergence maintenance.
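A hedged sketch of one convergence-maintenance update is given below. Reading "better convergence" as a lower Front rank followed by a higher Imp value is our assumption, as are the helper names and signatures (make_child stands in for the feature extraction operator, eval_front_imp for re-evaluating the offspring against the population).

```python
import numpy as np

def convergence_step(P, front, imp, make_child, eval_front_imp, rng, W=9):
    """One convergence-maintenance update (a hedged sketch of Algorithm 2).

    'Better convergence' is read as (lower Front rank, then higher Imp);
    helper names and signatures are ours, not the paper's.
    """
    parent = P[rng.integers(len(P))]                  # line (2): random parent
    order = np.lexsort((-imp, front))                 # best-converged individuals first
    S = P[order[:W]]                                  # lines (3)-(4): high-quality set S
    child = make_child(S, parent, rng)                # line (5): feature extraction operator
    worst = order[-1]                                 # line (6): worst-converged individual
    c_front, c_imp = eval_front_imp(child, P)
    flag = (c_front, -c_imp) < (front[worst], -imp[worst])   # lines (7)-(13): replacement test
    if flag:
        P[worst], front[worst], imp[worst] = child, c_front, c_imp   # lines (14)-(16)
    return P, front, imp, flag
```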

Diversity maintenance

This section describes the diversity maintenance of the population. The pseudo-code of diversity maintenance is shown in Algorithm 3. The main steps are as follows. In line (1), the algorithm enters the i-th iteration. In line (2), the individual with the worst diversity in the population is found. In line (3), the distance \({dis}_{j}\) is calculated according to the direction of the total repulsion acting on this worst individual. In line (4), the high-quality solution set S is constructed based on \({dis}_{j}\). In line (5), the feature extraction operator extracts the features of the high-quality solution set S and assists the evolution of the worst individual. In lines (6)-(12), if the offspring individual is superior to the individual with the worst diversity, the worst individual is replaced by the offspring and flag is set to 1; otherwise flag is set to 0. In lines (13)-(15), if flag equals 1, the count value is updated, and the Front value, Imp value and Rep value of each solution in population P are updated.

figure c

Diversity maintenance.

Coordination of convergence and diversity

This section describes the coordination of convergence and diversity of the population. The pseudo-code is shown in Algorithm 4. The main steps are as follows. In line (1), the algorithm enters the i-th iteration. In line (2), one parent individual is randomly selected from population P. In line (3), the Gaussian mutation operator is applied to the parent individual to generate the offspring solution. In lines (4)-(10), if the offspring individual is superior to the parent individual in both convergence and diversity, the parent is replaced by the offspring and flag is set to 1; otherwise flag is set to 0. In lines (11)-(13), if flag equals 1, the count value is updated, and the Front value, Imp value and Rep value of each solution in population P are updated.

figure d

Coordination.
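A minimal sketch of one coordination update follows. The convergence and diversity measures are passed in as callables because the paper's Front/Imp and Rep quantities are not reproduced here; the mutation scale sigma and all names are our assumptions.

```python
import numpy as np

def coordination_step(P, conv, div, rng, sigma=0.1, lower=0.0, upper=1.0):
    """One coordination update (hedged sketch of Algorithm 4).

    conv(x) and div(x) are placeholder scalar measures (smaller = better
    convergence, larger = better diversity); in the paper these roles are
    played by the Front/Imp and Rep quantities.
    """
    i = rng.integers(len(P))                                                   # line (2): random parent
    child = np.clip(P[i] + rng.normal(0.0, sigma, P.shape[1]), lower, upper)   # line (3): Gaussian mutation
    better = conv(child) <= conv(P[i]) and div(child) >= div(P[i])             # lines (4)-(10)
    if better:
        P[i] = child
    return P, better

# Toy usage with illustrative measures (not the paper's): convergence = sum of
# decision variables, diversity = distance to the population mean.
rng = np.random.default_rng(0)
P = rng.random((20, 3))
mean = P.mean(axis=0)
P, changed = coordination_step(P, conv=lambda x: float(x.sum()),
                               div=lambda x: float(np.linalg.norm(x - mean)),
                               rng=rng)
print(changed)
```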

Feature extraction operator

This section describes the feature extraction operator. The pseudo-code of the feature extraction operator is shown in Algorithm 5. The main steps are as follows. In line (1), the features \(\overline{x},L\) of the high-quality solution set S are extracted by formula (4) and formula (13). In line (2), the sampling vector \(s={\left({s}_{1},...,{s}_{n}\right)}^{T}\) is generated from the Gaussian distribution \(N\left(\text{0,0.7}\right)\). In line (3), based on \(s,\overline{x},L\), the feature solution \({x}^{feature}\) is generated by formula (14). In line (4), based on the parent and \({x}^{feature}\), the offspring solution O is generated by formula (15).

figure e

Feature extraction.
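Since formulas (4) and (13)-(15) are not reproduced in this excerpt, the sketch below fills them in with plausible choices: the extracted features are the mean of S and a Cholesky factor of its covariance matrix, the feature solution is mean + L·s with s ~ N(0, 0.7), and the offspring is a simple blend of parent and feature solution with a factor eta of our choosing.

```python
import numpy as np

def feature_extraction(S, parent, rng, std=0.7, eta=0.5):
    """Hedged sketch of the feature extraction operator (Algorithm 5).

    Assumed stand-ins for formulas (4), (13)-(15): mean vector, Cholesky factor
    of the covariance of S, feature solution mean + L @ s with s ~ N(0, std),
    and an eta-blend of parent and feature solution as the offspring.
    """
    mean = S.mean(axis=0)                                      # stand-in for formula (4): location feature
    cov = np.cov(S, rowvar=False) + 1e-9 * np.eye(S.shape[1])  # regularized covariance of S
    L = np.linalg.cholesky(cov)                                # stand-in for formula (13): shape feature
    s = rng.normal(0.0, std, size=S.shape[1])                  # line (2): sampling vector
    x_feature = mean + L @ s                                   # stand-in for formula (14): feature solution
    return parent + eta * (x_feature - parent)                 # stand-in for formula (15): offspring

# Tiny usage example with a random high-quality set of W = 9 solutions.
rng = np.random.default_rng(0)
S = rng.random((9, 5))
child = feature_extraction(S, parent=rng.random(5), rng=rng)
print(child.shape)   # (5,)
```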

Update of algorithm state

In this paper, the algorithm state is updated according to the current state of the algorithm and the stability of the population. The pseudo-code of the state update is shown in Algorithm 6. When the algorithm is in the convergence maintenance state and the number of updated solutions in the i-th iteration is less than or equal to 5%*N, the population is considered stable in terms of convergence, and the algorithm switches to the diversity maintenance state. When the algorithm is in the diversity maintenance state and the number of updated solutions in the i-th iteration is less than or equal to 5%*N, the population is considered stable in terms of diversity, and the algorithm switches to the coordination state. When the algorithm is in the coordination state and the number of updated solutions in the i-th iteration is less than or equal to 5%*N, the population is considered stable in terms of coordination, and the algorithm switches back to the convergence maintenance state. It should be noted that the threshold value T is a key parameter for judging whether the population has become stable. In this paper, T is set to 5%; the parameter analysis verifies that 5% is a reasonable threshold (see the Parameter sensitivity analysis section in the experiment chapter).

figure f

Determination state.
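The cyclic state transition described above reduces to a few lines; the sketch below shows it together with the 5% threshold, using our own function name.

```python
def update_state(state, count, N, T=0.05):
    """Algorithm 6 (sketch): once at most T*N solutions were updated in the last
    iteration, the population is considered stable in the current state and the
    algorithm moves on: convergence (1) -> diversity (2) -> coordination (3) -> convergence (1)."""
    return state % 3 + 1 if count <= T * N else state

# Example: with N = 220 and T = 5%, 11 or fewer updates trigger a state change.
print(update_state(1, count=10, N=220))   # 2  (10 <= 11, switch to diversity maintenance)
print(update_state(2, count=30, N=220))   # 2  (still unstable, stay in diversity maintenance)
```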

Computational complexity of one iteration of MOEA/TS algorithm

Assuming that the population size is \(N\), the number of objective functions is \(m\), the dimension of the decision variable is \(n\), and the size of the high-quality solution set is \(W\), the computational complexity of computing Rep is \(O(mN^{2})\), that of Front is \(O(mN^{2})\), and that of Imp is \(O(mN^{2})\). The core steps of the feature extraction operator (Algorithm 5) are the construction of the covariance matrix and the Cholesky decomposition, whose complexities are \(O(Wn^{2})\) and \(O(n^{3}/6)\), respectively. Therefore, the computational complexity of the feature extraction operator (Algorithm 5) is \(O(Wn^{2}+n^{3}/6)\). The core steps of convergence maintenance (Algorithm 2) are population ranking, the feature extraction operator, selection of the worst individual, and the updates of Front, Imp and Rep, whose complexities are \(O(N^{2})\), \(O(Wn^{2}+n^{3}/6)\), \(O(N)\), \(O(mN^{2})\), \(O(mN^{2})\) and \(O(mN^{2})\), respectively. Therefore, the computational complexity of convergence maintenance (Algorithm 2) is \(O(N(N^{2}+Wn^{2}+n^{3}/6+N+3mN^{2}))\). The core steps of diversity maintenance (Algorithm 3) are selection of the worst individual, distance calculation, population ranking, the feature extraction operator, and the updates of Front, Imp and Rep, whose complexities are \(O(N)\), \(O(nN)\), \(O(N^{2})\), \(O(Wn^{2}+n^{3}/6)\), \(O(mN^{2})\), \(O(mN^{2})\) and \(O(mN^{2})\), respectively. Therefore, the computational complexity of diversity maintenance (Algorithm 3) is \(O(N(N+nN+N^{2}+Wn^{2}+n^{3}/6+3mN^{2}))\). The core steps of the coordination of convergence and diversity (Algorithm 4) are the Gaussian mutation operator and the updates of Front, Imp and Rep, whose complexities are \(O(n)\), \(O(mN^{2})\), \(O(mN^{2})\) and \(O(mN^{2})\), respectively. Therefore, the computational complexity of the coordination of convergence and diversity (Algorithm 4) is \(O(N(n+3mN^{2}))\). The computational complexity of the state determination (Algorithm 6) is \(O(1)\). Based on the above analysis, the computational complexity of one iteration of the MOEA/TS algorithm is \(\max\{O(N(N^{2}+Wn^{2}+n^{3}/6+N+3mN^{2})), O(N(N+nN+N^{2}+Wn^{2}+n^{3}/6+3mN^{2})), O(N(n+3mN^{2}))\}+O(1)\approx \max\{O(NWn^{2}+Nn^{3}+mN^{3}), O(NWn^{2}+Nn^{3}+mN^{3}), O(mN^{3})\}=O(NWn^{2}+Nn^{3}+mN^{3})\). In this paper, \(N\gg \max\{W, n, m\}\). Therefore, the computational complexity of the MOEA/TS algorithm is \(O(mN^{3})\). As a reference, the computational complexity of the NSGA-III algorithm is \(O(mN^{2})\). The computational complexity of the MOEA/TS algorithm is thus an order of magnitude higher than that of the NSGA-III algorithm, which shows that MOEA/TS is a computationally expensive many-objective optimization algorithm.

It should be noted that although the MOEA/TS algorithm has a higher computational complexity, it also has clear advantages over the NSGA-III algorithm. In terms of convergence optimization, the NSGA-III algorithm adopts the traditional definition of Pareto domination, which cannot solve the problem of Pareto resistance; the MOEA/TS algorithm uses the concept of "individual importance degree", which can. In terms of diversity optimization, the NSGA-III algorithm uses predefined reference points, which cannot ensure that the population is evenly distributed on the real PF in a high-dimensional objective space; the MOEA/TS algorithm uses the repulsion field method, which can. In terms of algorithm architecture, the NSGA-III algorithm adopts a serial architecture, which makes it difficult to coordinate convergence optimization and diversity optimization; the MOEA/TS algorithm adopts a concurrent architecture, which can coordinate the two. In terms of operators, the NSGA-III algorithm uses the traditional binary crossover operator and polynomial mutation operator, whose evolutionary ability is weak; the MOEA/TS algorithm uses the feature extraction operator, which has strong evolutionary ability. Therefore, the MOEA/TS algorithm has better performance than the NSGA-III algorithm, and the comparison results support this conclusion. For the comparison results of these two algorithms, please see the Supplementary Information Document.

Experimental results and analysis

Experimental settings

Configuration of experimental software and hardware

The hardware and software configurations of the experiment are shown in Table 2 . Among them, PlatEMO 30 is a professional many-objective optimization experiment platform. The platform includes multiple test function sets and many-objective optimization algorithms.

Test function

The test functions used in the experiment include the DTLZ test function set (DTLZ1-7), the MAF test function set (MAF1-6) and the WFG test function set (WFG1-9). Reference 31 describes the characteristics of these test functions. The parameter settings of the test functions are shown in Table 3.

Comparison algorithm

In order to verify the performance of MOEA/TS algorithm in the many-objective optimization field, this paper compares MOEA/TS algorithm with 7 advanced many-objective optimization algorithms. These 7 many-objective optimization algorithms include: VMEF 32 , BiGE-BEW 33 , MOEA/DG 34 , MOEA/D 35 , LSMaODE 36 , MaOEA/IT 23 and MaOEA/IGD 37 .

For all test cases, the Wilcoxon rank sum test at the 5% significance level 38 is used to assess the significance of the difference between the MOEA/TS algorithm and the comparison algorithms. The symbol "+" indicates that a comparison algorithm is significantly better than the MOEA/TS algorithm; the symbol "-" indicates that a comparison algorithm is significantly inferior to the MOEA/TS algorithm; the symbol "=" indicates that there is no significant difference between the MOEA/TS algorithm and a comparison algorithm.
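For reproducibility, the statistical comparison can be sketched with SciPy's rank-sum test. The snippet below compares 30 IGD+ values of a comparison algorithm against 30 values of MOEA/TS and returns the "+/-/=" symbol; using the medians to decide the sign of a significant difference is our choice, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import ranksums

def significance_symbol(igdp_cmp, igdp_ts, alpha=0.05):
    """Wilcoxon rank-sum test at the 5% level (sketch). Smaller IGD+ is better;
    '+' means the comparison algorithm is significantly better than MOEA/TS,
    '-' significantly worse, '=' no significant difference."""
    _, p = ranksums(igdp_cmp, igdp_ts)
    if p >= alpha:
        return "="
    return "+" if np.median(igdp_cmp) < np.median(igdp_ts) else "-"

# Toy example with synthetic IGD+ samples from 30 independent runs.
rng = np.random.default_rng(1)
print(significance_symbol(rng.normal(0.30, 0.02, 30), rng.normal(0.25, 0.02, 30)))  # likely "-"
```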

Performance evaluation

In terms of performance evaluation, this paper uses the inverted generational distance plus (IGD+) and the hypervolume (HV) 39 to measure the performance of the many-objective optimization algorithms. The smaller the IGD+ value obtained by an algorithm, the better its performance; the larger the HV value, the better its performance.
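For reference, the standard IGD+ computation (for minimization) can be sketched as below; HV is not sketched here, and any normalization the paper may apply before computing IGD+ is not reproduced.

```python
import numpy as np

def igd_plus(reference_front, solutions):
    """Inverted generational distance plus (IGD+), standard definition for
    minimization: the average, over all reference points z, of the smallest
    'dominance-aware' distance d+(z, a) = sqrt(sum_k max(a_k - z_k, 0)^2)
    to any obtained solution a. Smaller values are better."""
    diff = np.maximum(solutions[None, :, :] - reference_front[:, None, :], 0.0)
    d_plus = np.sqrt((diff ** 2).sum(axis=2))   # |Z| x |A| matrix of d+ distances
    return d_plus.min(axis=1).mean()

# Toy example: two reference points, three obtained solutions.
Z = np.array([[0.0, 1.0], [1.0, 0.0]])
A = np.array([[0.1, 1.0], [0.9, 0.2], [0.5, 0.5]])
print(round(igd_plus(Z, A), 4))   # 0.15
```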

In order to facilitate observation, we provide the normalized HV value of each algorithm relative to the best HV result. This normalization makes all the results lie in the range [0,1], and 1 represents the best value.

Considering the length of the paper, we only show the IGD+ values of different algorithms in the experiment chapter. For the HV values of different algorithms, please browse the Supplementary Information Document.

Parameter setting

In terms of algorithm parameters, according to some existing parameter research results 13 , 40 , the feature factor \(\mu \) is set to 20 in this paper. According to the parameter sensitivity analysis, the number of high-quality solutions W is set to 9 in this paper. The parameter sensitivity analysis of W is detailed in the subsequent chapters.

The algorithm parameters of the 7 comparison algorithms are determined according to the best parameters provided by the corresponding literature.

Performance comparison under benchmark test functions

Performance comparison under DTLZ test function set.

In this paper, each algorithm is executed 30 times and the average data are reported in Table 4. As can be seen from Table 4, the MOEA/TS algorithm ranks first in 15 test cases, the BiGE-BEW algorithm in 5 test cases, and the MOEA/D algorithm in 15 test cases. Across the 35 test cases, the number of cases in which the MOEA/TS algorithm is significantly superior to the VMEF, BiGE-BEW, MOEA/DG, MOEA/D, LSMaODE, MaOEA/IT and MaOEA/IGD algorithms is 21, 27, 25, 16, 32, 35 and 31, respectively. The number of cases in which the MOEA/TS algorithm is significantly inferior to these algorithms is 6, 5, 5, 15, 1, 0 and 0, respectively. Statistically, the number of cases in which the MOEA/TS algorithm is similar to these algorithms is 8, 3, 5, 4, 2, 0 and 4, respectively. Therefore, on the DTLZ test function set, the MOEA/TS and MOEA/D algorithms have the best performance; the performance of the VMEF, MOEA/DG, BiGE-BEW and LSMaODE algorithms decreases in turn; the performance of the MaOEA/IGD and MaOEA/IT algorithms is similar and the worst.

Based on Table 4, we further analyze the performance of these algorithms. In the DTLZ test function set, the MOEA/TS algorithm performs poorly on the DTLZ1, DTLZ5 and DTLZ6 test functions. The possible reasons are that the DTLZ1 test function has multiple local optima, and the DTLZ5 and DTLZ6 test functions have a narrow convergence curve. On the DTLZ1 test function, although the repulsion field method of the MOEA/TS algorithm makes the population widely distributed, the resulting distribution is not as uniform and regular as that of algorithms using predefined weight vectors. On the DTLZ5 and DTLZ6 test functions, the coordination mechanism of the MOEA/TS algorithm fails: the narrow convergence curve concentrates the population, while the repulsion field method disperses it, so the coordination mechanism struggles to play its role.

The real Pareto front of the DTLZ test function set is regular and the function complexity is not high. Therefore, algorithms with better diversity tend to be favored. The MOEA/D algorithm uses predefined weight vectors to maintain diversity and aggregation functions to maintain convergence, so it performs well. The VMEF algorithm uses different convergence ranking methods to deal with different test problems; it is therefore good in convergence but poor in diversity. Based on a convergence measure and a diversity measure, the BiGE-BEW algorithm transforms the many-objective optimization problem into a two-objective optimization problem. In theory it should perform well, but there are defects in its convergence and diversity measurement formulas, so its experimental results fall short of expectations. The MOEA/DG algorithm still uses the traditional dominance relationship to maintain the convergence of the external archive, so it is poor in convergence and good in diversity. The LSMaODE algorithm divides the population into two subpopulations and optimizes them with different strategies; because the real Pareto front of the DTLZ test function set is not complex, the advantage of this multi-population architecture is not obvious, and its performance is mediocre compared with the other algorithms. The MaOEA/IT algorithm optimizes convergence and diversity in two independent phases, but its performance is consistently poor because it does not alleviate the contradiction between convergence and diversity. The reference Pareto front of the MaOEA/IGD algorithm is poor, so its performance is also consistently poor.

Performance comparison under MAF test function set

In this paper, each algorithm is executed 30 times and the average data are reported in Table 5. As can be seen from Table 5, the MOEA/TS algorithm ranks first in 10 test cases, the BiGE-BEW algorithm in 8, the MOEA/DG algorithm in 2, the MOEA/D algorithm in 5, and the LSMaODE algorithm in 5. Across the 30 test cases, the number of cases in which the MOEA/TS algorithm is significantly superior to the VMEF, BiGE-BEW, MOEA/DG, MOEA/D, LSMaODE, MaOEA/IT and MaOEA/IGD algorithms is 22, 18, 25, 21, 20, 27 and 30, respectively. The number of cases in which the MOEA/TS algorithm is significantly inferior to these algorithms is 6, 11, 2, 5, 9, 1 and 0, respectively. Statistically, the number of cases in which the MOEA/TS algorithm is similar to these algorithms is 2, 1, 3, 4, 1, 2 and 0, respectively. Therefore, on the MAF test function set, the MOEA/TS algorithm has the best performance; the performance of the BiGE-BEW, LSMaODE, VMEF, MOEA/D, MOEA/DG and MaOEA/IT algorithms decreases in turn; the performance of the MaOEA/IGD algorithm is the worst.

Based on Table 5, we further analyze the performance of these algorithms. In the MAF test function set, the MOEA/TS algorithm performs poorly on the MAF2 and MAF3 test functions. The possible reasons are that the MAF2 test function greatly increases the difficulty of convergence relative to the DTLZ2 test function, and the MAF3 test function has a convex Pareto front and many local fronts. On the MAF2 test function, although the MOEA/TS algorithm can recognize the relative quality of different individuals in the same front layer, its evolutionary efficiency is not ideal; in other words, when the algorithm finishes, the population still has large unexploited potential in convergence. On the MAF3 test function, the MOEA/TS algorithm can effectively deal with the convex Pareto front, but it struggles with multiple local fronts because its feature extraction operator has difficulty extracting features from multiple local fronts.

The MAF test function set is a variant of the DTLZ test function set that adds many characteristics, such as degenerate, convex, concave, partial, multimodal and deceptive properties. Therefore, the MAF test function set is more difficult in terms of both convergence and diversity. Based on a convergence measure and a diversity measure, the BiGE-BEW algorithm transforms the many-objective optimization problem into a two-objective optimization problem; although there are some defects in its measurement formulas, it shows good convergence when dealing with these more complex MaOPs. The VMEF algorithm uses different convergence ranking methods to deal with different test problems, but the complex Pareto fronts and diversified characteristics still pose a great challenge to it, so its performance is mediocre. The MOEA/DG algorithm still uses the traditional dominance relationship to maintain the convergence of the external archive, so it is poor in convergence. The MOEA/D algorithm uses predefined weight vectors to maintain diversity and aggregation functions to maintain convergence; it easily handles the DTLZ test function set, but its performance is not ideal on the more complex MAF test function set. Surprisingly, the LSMaODE algorithm shows good performance. We speculate that the reason is that the real Pareto front of the MAF test function set is complex, so the advantages of its multi-population architecture can be exploited. The MaOEA/IT algorithm optimizes convergence and diversity in two independent phases, but its performance is consistently poor because it does not alleviate the contradiction between convergence and diversity. The reference Pareto front of the MaOEA/IGD algorithm is poor, so its performance is also consistently poor.

Performance comparison under WFG test function set

In this paper, each algorithm is executed 30 times and the average data are reported in Table 6. As can be seen from Table 6, the MOEA/TS algorithm ranks first in 27 test cases, the VMEF algorithm in 8, the BiGE-BEW algorithm in 6, the MOEA/DG algorithm in 1, and the LSMaODE algorithm in 3. Across the 45 test cases, the number of cases in which the MOEA/TS algorithm is significantly superior to the VMEF, BiGE-BEW, MOEA/DG, MOEA/D, LSMaODE, MaOEA/IT and MaOEA/IGD algorithms is 26, 29, 42, 45, 39, 45 and 43, respectively. The number of cases in which the MOEA/TS algorithm is significantly inferior to these algorithms is 10, 9, 3, 0, 3, 0 and 0, respectively. Statistically, the number of cases in which the MOEA/TS algorithm is similar to these algorithms is 9, 7, 0, 0, 3, 0 and 2, respectively. Therefore, on the WFG test function set, the MOEA/TS algorithm has the best performance; the performance of the VMEF, BiGE-BEW, LSMaODE and MOEA/DG algorithms decreases in turn; the performance of the MaOEA/IGD, MOEA/D and MaOEA/IT algorithms is similar and the worst.

Based on Table 6, we further analyze the performance of these algorithms. The MOEA/TS algorithm performs well on all WFG test functions. The possible reason is that the problem characteristics of the WFG test function set are bias, deception and degeneracy. The WFG test function set is more difficult than the DTLZ test function set, but its problem characteristics do not include multiple local fronts (and, from the previous analysis, multiple local fronts are exactly what the MOEA/TS algorithm is not good at). The MOEA/TS algorithm can handle these characteristics, so it performs well on all WFG test functions. It should be noted that the WFG3 test function has a narrow convergence curve, yet the MOEA/TS algorithm still performs best on it. This is an interesting phenomenon, because the previous analysis showed that the MOEA/TS algorithm is not good at test functions with narrow convergence curves (such as DTLZ5 and DTLZ6). Given the convergence difficulty of the WFG test function set, we speculate that the other 7 algorithms simply perform even worse, which highlights the performance of the MOEA/TS algorithm.

Compared with the DTLZ test function set, the WFG test function set is more difficult in terms of both convergence and diversity. The VMEF algorithm uses different convergence ranking methods to deal with different test problems; this helps it cope with different problem characteristics, so its performance is good. Based on a convergence measure and a diversity measure, the BiGE-BEW algorithm transforms the many-objective optimization problem into a two-objective optimization problem; despite some defects in its measurement formulas, it shows good convergence on these more complex MaOPs. The MOEA/DG algorithm still uses the traditional dominance relationship to maintain the convergence of the external archive, so it is poor in convergence. The MOEA/D algorithm uses predefined weight vectors to maintain diversity and aggregation functions to maintain convergence; this approach is not suitable for test functions with a bias characteristic, so its performance is the worst. The LSMaODE algorithm divides the population into two subpopulations and optimizes them with different strategies; because most WFG test functions have a bias characteristic, which LSMaODE does not account for, its performance is mediocre. The MaOEA/IT algorithm optimizes convergence and diversity in two independent phases, but its performance is consistently poor because it does not alleviate the contradiction between convergence and diversity. The reference Pareto front of the MaOEA/IGD algorithm is poor, so its performance is also consistently poor.

Comparison and analysis

By synthesizing Tables 4, 5 and 6, we obtain the data shown in Table 7. As can be seen from Tables 4, 5 and 6, the MOEA/TS algorithm ranks first in 52 test cases, the VMEF algorithm in 8, the BiGE-BEW algorithm in 19, the MOEA/DG algorithm in 3, the MOEA/D algorithm in 20, and the LSMaODE algorithm in 8. As can be seen from Table 7, across the 110 test cases, the number of cases in which the MOEA/TS algorithm is significantly superior to the VMEF, BiGE-BEW, MOEA/DG, MOEA/D, LSMaODE, MaOEA/IT and MaOEA/IGD algorithms is 69, 74, 92, 82, 91, 107 and 104, respectively. The number of cases in which the MOEA/TS algorithm is significantly inferior to these algorithms is 22, 25, 10, 20, 13, 1 and 0, respectively. Statistically, the number of cases in which the MOEA/TS algorithm is similar to these algorithms is 19, 11, 8, 8, 6, 2 and 6, respectively. Based on these data, we can draw the following conclusions: the MOEA/TS algorithm has the best performance; the performance of the BiGE-BEW, VMEF, MOEA/D, LSMaODE, MOEA/DG and MaOEA/IT algorithms decreases in turn; the MaOEA/IGD algorithm has the worst performance.

In addition to the above conclusions, we can also observe 3 interesting phenomena:

(1) In the MAF and WFG test function sets, the MOEA/TS algorithm has no competitors. However, in the DTLZ test function set, the MOEA/TS and MOEA/D algorithms are competitors with similar performance. This is because most DTLZ test functions have a regular PF, while most MAF and WFG test functions have more complex PFs. As noted in the "Introduction" section, the MOEA/D algorithm is suitable for MaOPs with a regular PF. Therefore, in the DTLZ test function set, the MOEA/D algorithm can compete with the MOEA/TS algorithm, whereas on the MAF and WFG test functions only the MOEA/TS algorithm shows excellent performance.

(2) The performance of MOEA/TS algorithm is better on the test cases with 10 objectives, 15 objectives and 20 objectives. The performance of MOEA/TS algorithm is relatively ordinary on the test cases with 5 objectives and 8 objectives. This is because when the number of optimization objectives is small, most many-objective optimization algorithms perform well. Compared with other many-objective optimization algorithms, the advantages of MOEA/TS algorithm aren’t obvious. However, with the increase of the number of optimization objectives, the performance of other many-objective optimization algorithms becomes worse and worse. In contrast, the performance of MOEA/TS algorithm isn’t significantly affected. Therefore, compared with other many-objective optimization algorithms, MOEA/TS algorithm has obvious advantages. This shows that MOEA/TS algorithm is more suitable for solving MaOPs with more than 10 objectives.

(3) Without considering MOEA/TS algorithm, MOEA/D algorithm has the best performance in the DTLZ test function set. BiGE-BEW algorithm has the best performance in the MAF test function set. VMEF algorithm has the best performance in the WFG test function set. This shows that different many-objective optimization algorithms are suitable for different test function sets. However, MOEA/TS algorithm can show excellent performance on three test function sets. This indicates that MOEA/TS algorithm has strong universality and applicability.

Distribution diagram of solutions in the objective space

In order to describe the distribution of solutions in the high-dimensional objective space more intuitively, this paper draws the distribution diagram of solutions in the objective space. Considering the length of the paper, it is unrealistic to show the distribution diagrams of all test functions. Therefore, this section only shows the distribution diagrams of 3 representative test cases. These 3 test cases are DTLZ2 test case with 20 objectives, MAF1 test case with 15 objectives and WFG3 test case with 10 objectives, respectively.

Figure 10 shows the distribution diagrams of each algorithm on DTLZ2 test case with 20 objectives. It can be seen from Fig.  10 that distribution diagrams of MOEA/TS algorithm, BiGE-BEW algorithm, MOEA/DG algorithm and MOEA/D algorithm are similar, which indicates that these 4 algorithms are excellent in convergence and diversity; VMEF algorithm and LSMaODE algorithm are good in diversity, but poor in convergence; MaOEA/IT algorithm and MaOEA/IGD algorithm are very poor in convergence and diversity.

figure 10

Distribution diagrams of each algorithm on DTLZ2 test case with 20 objectives.

Figure 11 shows the distribution diagrams of each algorithm on MAF1 test case with 15 objectives. It can be seen from Fig.  11 that MOEA/TS algorithm and VMEF algorithm are good in convergence, but poor in diversity; BiGE-BEW algorithm and LSMaODE algorithm are good in diversity, but poor in convergence. MOEA/DG algorithm, MOEA/D algorithm, MaOEA/IT algorithm and MaOEA/IGD algorithm are very bad in convergence and diversity.

figure 11

Distribution diagrams of each algorithm on MAF1 test case with 15 objectives.

Figure 12 shows the distribution diagrams of each algorithm on WFG3 test case with 10 objectives. It can be seen from Fig.  12 that MOEA/TS algorithm has the best convergence and diversity; LSMaODE algorithm is also excellent, only slightly worse than MOEA/TS algorithm in terms of diversity; BiGE-BEW algorithm and MOEA/DG algorithm are good in diversity, but poor in convergence. VMEF algorithm is good in convergence, but poor in diversity. MOEA/D algorithm, MaOEA/IT algorithm and MaOEA/IGD algorithm are very bad in convergence and diversity.

figure 12

Distribution diagrams of each algorithm on WFG3 test case with 10 objectives.

Evolution curve analysis of the algorithm

This section takes DTLZ2 test case with 20 objectives, MAF1 test case with 15 objectives and WFG3 test case with 10 objectives as examples to display the evolution curves of 8 algorithms (as shown in Figs.  13 , 14 , 15 ).

figure 13

Evolution curve of each algorithm on DTLZ2 test case with 20 objectives.

In Figure 13 , in terms of the final IGD+ value of the algorithm, MOEA/TS algorithm has the smallest IGD+ value, while the IGD+ values of MOEA/DG algorithm, BiGE-BEW algorithm, MOEA/D algorithm, LSMaODE algorithm, VMEF algorithm and MaOEA/IGD algorithm successively increase, and MaOEA/IT algorithm has the largest IGD+ value. This shows that MOEA/TS algorithm has the best convergence and diversity within the specified number of iterations. In terms of the evolution of the algorithm, the final IGD+ values of all algorithms are smaller than the initial IGD+ values. This shows that all algorithms have strong evolution ability, especially MOEA/TS algorithm has the strongest evolution ability. In terms of algorithm fluctuation, MaOEA/IT algorithm fluctuates greatly. This shows that MaOEA/IT algorithm isn’t stable. Based on the above analysis, we believe that MOEA/TS algorithm has the best comprehensive performance on DTLZ2 test case with 20 objectives, and is suitable for solving DTLZ2 test problem with 20 objectives.

In Figure 14 , in terms of the final IGD+ value of the algorithm, MOEA/TS algorithm has the smallest IGD+ value, while the IGD+ values of BiGE-BEW algorithm, VMEF algorithm, LSMaODE algorithm, MaOEA/IGD algorithm, MOEA/D algorithm and MOEA/DG algorithm successively increase, and MaOEA/IT algorithm has the largest IGD+ value. This shows that MOEA/TS algorithm has the best convergence and diversity within the specified number of iterations. In terms of the evolution of the algorithm, the final IGD+ values of all algorithms are smaller than the initial IGD+ values. This shows that all algorithms have strong evolution ability, especially MOEA/TS algorithm has the strongest evolution ability. In terms of algorithm fluctuation, MaOEA/IT algorithm fluctuates greatly. This shows that MaOEA/IT algorithm isn’t stable. Based on the above analysis, we believe that MOEA/TS algorithm has the best comprehensive performance on MAF1 test case with 15 objectives, and is suitable for solving MAF1 test problem with 15 objectives.

figure 14

Evolution curve of each algorithm on MAF1 test case with 15 objectives.

In Fig. 15, in terms of the final IGD+ value, the MOEA/TS algorithm has the smallest IGD+ value, while the IGD+ values of the LSMaODE, MOEA/DG, VMEF, BiGE-BEW, MaOEA/IGD and MOEA/D algorithms successively increase, and the MaOEA/IT algorithm has the largest IGD+ value. This shows that the MOEA/TS algorithm has the best convergence and diversity within the specified number of iterations. In terms of evolution, the final IGD+ values of the MaOEA/IT, VMEF, MaOEA/IGD, BiGE-BEW and MOEA/D algorithms are all greater than their initial IGD+ values. This shows that the performance of these 5 algorithms deteriorates during evolution, and they are not suitable for the WFG3 test problem with 10 objectives. The initial IGD+ value of the MOEA/DG algorithm is close to its final IGD+ value, and its IGD+ value fluctuates little during evolution, which shows that the MOEA/DG algorithm is insensitive to evolution. Only the final IGD+ values of the LSMaODE and MOEA/TS algorithms are smaller than their initial values, which shows that these two algorithms have strong evolution ability, with the MOEA/TS algorithm being the strongest. In terms of fluctuation, the MOEA/D, MaOEA/IT and MaOEA/IGD algorithms fluctuate more, which shows that these 3 algorithms are not stable. Based on the above analysis, we believe that the MOEA/TS algorithm has the best comprehensive performance on the WFG3 test case with 10 objectives and is suitable for solving the WFG3 test problem with 10 objectives.

figure 15

Evolution curve of each algorithm on WFG3 test case with 10 objectives.

In addition, we can also observe an interesting phenomenon from Fig.  13 to Fig.  15 : the IGD+ values of some algorithms sometimes increase significantly with the increase of iterations. That is, the performance of some algorithms sometimes deteriorates seriously with the increase of iterations. The reasons for this phenomenon may include three aspects: (1) The algorithm doesn’t adopt the elite preservation strategy. Some high-quality solutions may gradually disappear; (2) Due to the complexity of the optimization problems, the evolutionary direction of the population may be misled by some pseudo-elite individuals; (3) The convergence optimization and diversity optimization of the algorithm aren’t coordinated. The optimization of convergence may affect the optimization of diversity or the optimization of diversity may affect the optimization of convergence. It can be seen from the pseudo-code of the algorithm in Section 3.5 that the MOEA/TS algorithm proposed in this paper considers the above three aspects. Therefore, MOEA/TS algorithm can effectively alleviate this phenomenon.

Effectiveness verification of innovation part

In order to verify the effectiveness of the innovative parts, 4 variants are designed in this section. As follows:

MOEA/TS-1 algorithm: The feature extraction operator in MOEA/TS algorithm is changed to the binary crossover operator and polynomial mutation operator;

MOEA/TS-2 algorithm: The repulsion field method in MOEA/TS algorithm is removed;

MOEA/TS-3 algorithm: The concurrent architecture in MOEA/TS algorithm is changed to serial architecture;

MOEA/TS-4 algorithm: The individual importance degree in MOEA/TS algorithm is removed.

This paper takes the WFG test function set (45 test cases) as samples and verifies the performance of the 5 algorithms. Each algorithm is executed 30 times and the average data are reported in Table 8. As can be seen from Table 8, the MOEA/TS algorithm ranks first in 24 test cases, the MOEA/TS-1 algorithm in 13, the MOEA/TS-2 algorithm in 7, and the MOEA/TS-3 algorithm in 1. Across the 45 test cases, the number of cases in which the MOEA/TS algorithm is significantly superior to the MOEA/TS-1, MOEA/TS-2, MOEA/TS-3 and MOEA/TS-4 algorithms is 21, 30, 40 and 45, respectively. The number of cases in which the MOEA/TS algorithm is significantly inferior to these variants is 11, 6, 0 and 0, respectively. Statistically, the number of cases in which the MOEA/TS algorithm is similar to these variants is 13, 9, 5 and 0, respectively. The average ranking of the MOEA/TS algorithm is about 1.64; that of MOEA/TS-1 is about 2.02; that of MOEA/TS-2 is about 2.62; that of MOEA/TS-3 is about 3.71; that of MOEA/TS-4 is 5.

Therefore, we think that the 4 innovative parts of MOEA/TS algorithm are necessary and indispensable. The lack of any innovative parts will seriously affect the performance of MOEA/TS algorithm. This shows that our innovations are effective. In addition, based on the above data, we can also find that “individual importance degree” has the greatest influence on the algorithm; the algorithm architecture ranks second; the repulsion field method ranks third; the feature extraction operator ranks fourth.

Ablation experiment of selection approach

In the feature extraction operator, we select W high-quality solutions. To prove the effectiveness of this selection approach over random selection, an ablation experiment is performed in this section. One variant is designed, as follows:

MOEA/TS-5 algorithm: W solutions are randomly selected in the feature extraction operator.

This paper takes the WFG test function set (45 test cases) as samples and verifies the performance of the 2 algorithms. Each algorithm is executed 30 times and the average data are reported in Table 9. As can be seen from Table 9, the MOEA/TS algorithm ranks first in all 45 test cases. Across the 45 test cases, the MOEA/TS algorithm is significantly superior to the MOEA/TS-5 algorithm in 42 cases, significantly inferior in 0 cases, and statistically similar in 3 cases. Therefore, we believe that the performance of the MOEA/TS algorithm is better than that of the MOEA/TS-5 algorithm on the WFG test function set, which proves that the selection approach we use is better than random selection in the feature extraction operator.

In addition, the performance of the MOEA/TS-5 algorithm is not even as good as that of the MOEA/TS-1 algorithm. This means that the feature extraction operator based on random selection performs worse than some classical operators. The possible reason is that a randomly selected solution set causes the feature extraction operator to extract many bad features. These bad features hinder individual evolution, which makes the convergence maintenance state and the diversity maintenance state of the MOEA/TS algorithm fail for a long time, so that only the coordination state plays a role. In this way, the architecture of the MOEA/TS algorithm is undermined by the bad features.

Parameter sensitivity analysis.

The algorithm parameters analyzed in this paper are mainly the number of high-quality solutions W, the threshold value T, and the standard deviation std. Because of the high complexity of the WFG3 test case with 10 objectives, it is difficult for the population of each algorithm to cover the real Pareto front, so this paper uses the WFG3 test case with 10 objectives as the main test case for parameter analysis.

The initial value and value range of each parameter are shown in Table 10 .

As shown in Fig. 16, when \(W<9\), the IGD+ value of the algorithm decreases significantly as W increases. This means that when \(W<9\), the performance of the feature extraction operator improves greatly as W increases, because the extracted features become closer to the ideal situation. When \(W=9\), the IGD+ value of the algorithm is smallest, which shows that the feature extraction operator performs best. When \(W>9\), the IGD+ value of the algorithm increases slowly, which means that the performance of the feature extraction operator gradually deteriorates as W increases, because some features are over-extracted by the feature extraction operator. Therefore, for the WFG3 test case with 10 objectives, \(W=9\) is the best parameter choice.

figure 16

The corresponding relationship between IGD+ value and W.

As shown in Fig. 17, when \(T<5\%\), the IGD+ value of the algorithm decreases significantly as T increases. This is because if the threshold value T is too small, the algorithm stays in the same state for a long time and is rarely switched to other states, so the convergence and diversity of the algorithm are difficult to balance; this situation improves as T increases. When \(T=5\%\), the IGD+ value of the algorithm is smallest, which shows that the algorithm performs best. When \(T>5\%\), the IGD+ value of the algorithm gradually increases with T. This is because if the threshold value T is too large, the algorithm's state is adjusted too frequently: even if the population is not yet stable in one state (convergence, diversity, coordination), the algorithm is switched to another state. This is not conducive to improving the convergence and diversity of the algorithm, and the efficiency of the algorithm also suffers. Therefore, for the WFG3 test case with 10 objectives, \(T=5\%\) is the best parameter choice.

figure 17

The corresponding relationship between IGD+ value and T.

As shown in Fig. 18, when \(std<0.7\), the IGD+ value of the algorithm decreases significantly as std increases. This is because if std is too small, the results of Gaussian sampling are too concentrated in the middle region and the randomness of the sampling vector is weak, which is not conducive to exploiting the features and generating diversified feature solutions. When \(std=0.7\), the IGD+ value of the algorithm is smallest, which shows that the feature extraction operator performs best. When \(std>0.7\), the IGD+ value of the algorithm increases significantly with std. This is because if std is too large, the results of Gaussian sampling are too scattered and the randomness of the sampling vector is strong; some components easily exceed the upper or lower bound, and some features are easily eliminated by the repair operator. Therefore, for the WFG3 test case with 10 objectives, \(std=0.7\) is the best parameter choice.

figure 18

The corresponding relationship between IGD+ value and std.

Based on the above parameter analysis, we consider \(W=9\), \(T=5\%\) and \(std=0.7\) to be the best parameter combination for the WFG3 test case with 10 objectives. We further tested this parameter combination on more test cases, and the experimental results show that it performs well in most of them. Therefore, this paper sets the number of high-quality solutions \(W\), the threshold value \(T\) and the standard deviation \(std\) to 9, 5% and 0.7, respectively.

Practical problem testing

This section mainly explores the performance of MOEA/TS algorithm in practical problems. The practical problem selected in this section is the industrial internet optimization problem based on the blockchain provided in reference 40 .

The industrial internet can support effective control of the physical world through a large amount of industrial data, but data security has always been a challenge due to the many interconnections and accesses involved. Blockchain technology supports the security and privacy protection of industrial internet data with its trusted and reliable security mechanism. Sharding technology can help improve the overall throughput and scalability of a blockchain network; however, due to the uneven distribution of malicious nodes, the effectiveness of sharding remains a challenge. In addition, the conflict between multiple industrial network indicators is also a problem that must be considered. Therefore, the blockchain-based industrial internet optimization problem is an important research problem.

In this section, the industrial internet optimization problem based on blockchain has the following 4 optimization objectives:

(1) Minimizing the shard invalidation probability (SIP);

(2) Minimizing the transmission delay (TD);

(3) Maximizing the throughput (TP);

(4) Minimizing the load of malicious nodes (LMN).

The research background of the industrial internet based on blockchain and the calculation formulas of these 4 objectives are detailed in reference 40 .

In this section, we set the population size to 220, the number of iterations to 300, and the number of function evaluations to 66,000. We still use the inverted generational distance plus (IGD+) to measure the performance of the many-objective optimization algorithms. However, the real PF of the practical problem is unknown. Therefore, we run these algorithms many times to obtain different non-dominated solution sets, and the non-dominated union of these sets is taken as the approximate real PF. The relevant parameters of these algorithms are given in Section 4.1.
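The construction of this pseudo reference front can be sketched as below: the final populations of several independent runs are merged and filtered down to their non-dominated subset (a simple quadratic Pareto filter; the function name and the toy data are ours).

```python
import numpy as np

def nondominated(F):
    """Return the non-dominated subset of objective vectors F (minimization).
    A simple O(N^2 * m) filter, used to build the pseudo 'real PF' as the
    non-dominated union of the solution sets found by many independent runs."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        if not keep[i]:
            continue
        # points strictly dominated by F[i]
        dominated = np.all(F[i] <= F, axis=1) & np.any(F[i] < F, axis=1)
        keep &= ~dominated
    return F[keep]

# Example: merge the final populations of several runs, then filter.
runs = [np.random.default_rng(s).random((50, 4)) for s in range(3)]
pseudo_pf = nondominated(np.vstack(runs))
print(pseudo_pf.shape)
```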

In this section, each algorithm is executed 30 times to obtain the data shown in Table 11. As can be seen from Table 11, the MOEA/TS algorithm has an absolute advantage. The performance of the BiGE-BEW and MOEA/DG algorithms is good and similar. The performance of the VMEF and MOEA/D algorithms on the practical problem is clearly not as good as on the benchmark test functions, because the real PF of the practical problem is more complex. The performance of the LSMaODE algorithm is close to that of the MOEA/D algorithm. The performance of the MaOEA/IT and MaOEA/IGD algorithms is the worst. Based on these observations, we believe that the MOEA/TS algorithm still has excellent performance and strong applicability on practical problems.

Because the output of a many-objective optimization algorithm is a population of solutions, it is impractical to compare the network indicators of different algorithms directly. In practical applications, however, a solution only needs to be chosen according to the specific needs or preferences of users or enterprises. In this section, we first select the individual with the largest throughput from each algorithm, and then compare the MOEA/TS algorithm with the other algorithms under this maximum-throughput condition. The network indicators obtained by these 8 algorithms are shown in Table 12. As can be seen from Table 12, the MOEA/TS algorithm performs best in terms of SIP and TP, ranks second in terms of TD, and ranks third in terms of LMN. Therefore, we believe that the MOEA/TS algorithm has the best overall performance on the blockchain-based industrial internet optimization problem, with all of its network indicators near the top.
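The selection step described above can be sketched as follows: for each algorithm, keep the individual with the largest throughput, then compare the remaining indicators of those individuals. The dictionary layout of the indicators and the toy numbers are assumptions for illustration, not values from Table 12.

```python
def best_by_throughput(population):
    """Return the individual with the largest throughput (TP) from one
    algorithm's final population; each individual is assumed to be a dict of
    network indicators {'SIP', 'TD', 'TP', 'LMN'}."""
    return max(population, key=lambda ind: ind['TP'])

# Toy usage (made-up numbers, not the values reported in Table 12)
finals = {
    'MOEA/TS': [{'SIP': 0.02, 'TD': 1.8, 'TP': 950, 'LMN': 0.31},
                {'SIP': 0.03, 'TD': 1.6, 'TP': 920, 'LMN': 0.28}],
    'MOEA/D':  [{'SIP': 0.04, 'TD': 1.7, 'TP': 900, 'LMN': 0.25}],
}
for name, population in finals.items():
    print(name, best_by_throughput(population))
```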

Based on the experimental analysis from Section 4.2 to Section 4.8, we can draw the following conclusions:

(1) On the benchmark test cases, the MOEA/TS algorithm is superior to the other 7 advanced many-objective optimization algorithms.

(2) The MOEA/TS algorithm is especially well suited to MaOPs with more than 10 objectives.

(3) The MOEA/TS algorithm shows excellent performance on different test function sets and has strong universality and applicability.

(4) The MOEA/TS algorithm has the best convergence and diversity, the strongest evolutionary ability and the fastest convergence speed.

(5) The 4 innovative parts of the MOEA/TS algorithm are all necessary; removing any one of them seriously degrades the algorithm's performance.

(6) The MOEA/TS algorithm retains excellent performance and strong applicability on practical problems.

Summary and future work

Aiming at several difficulties in the field of many-objective optimization, this paper proposes a many-objective evolutionary algorithm based on three states (MOEA/TS). First, a feature extraction operator is proposed: it extracts the features of a high-quality solution set and uses them to assist the evolution of the current individual. Second, for convergence maintenance, this paper does not adopt domination relaxation, because existing domination relaxation techniques still face problems. Instead, based on the Pareto front layer, the concept of "individual importance degree" is proposed. The importance degree reflects how important an individual is within its Pareto front layer, which further distinguishes the quality of individuals in the same layer and effectively alleviates the Pareto-resistance phenomenon. Third, for diversity maintenance, the diversity of the population in the objective space is maintained by a repulsion field, so that the population can be evenly distributed over the real PF.

Finally, a new concurrent algorithm framework is designed, in which the algorithm is divided into three states: the convergence maintenance state, the diversity maintenance state and the coordination state. Each state focuses on a specific task: the convergence maintenance state improves the convergence of the population, the diversity maintenance state improves the diversity of the population, and the coordination state resolves the conflict between diversity and convergence. The population can switch freely among these three states according to its own evolution. The experimental results show that the MOEA/TS algorithm is superior to the other 7 advanced many-objective optimization algorithms, and the effectiveness of each innovative part is further verified.

However, the MOEA/TS algorithm also has a clear limitation: it does not handle test problems with narrow convergence curves or multiple local fronts well. In future work we will therefore further improve MOEA/TS so that it can deal with such problems. In addition, constrained MOPs and high-dimensional MOPs are also a focus of our future research.

Data availability

All data generated or analysed during this study are included in this published article.

References

Lin, H. F. & Tang, C. P. Analysis and optimization of urban public transport lines based on multiobjective adaptive particle swarm optimization. IEEE Trans. Intell. Transp. Syst. 23, 16786–16798. https://doi.org/10.1109/TITS.2021.3086808 (2022).


Qin, X., Fang, Z. H. & Zhang, Z. X. Multi-objective optimization for production scheduling of precast components considering resource constraints. Comput. Int. Manuf. Syst. 27, 2248–2259. https://doi.org/10.13196/j.cims.2021.08.008 (2021).

Saric, F., Begusic, S., Mercep, A. & Kostanjcar, Z. Statistical arbitrage portfolio construction based on preference relations. Expert Syst. Appl. 238 , 1–12. https://doi.org/10.1016/j.eswa.2023.121906 (2023).

Chen, Y., Zhong, J., Feng, L. & Zhang, J. An adaptive archive-based evolutionary framework for many-task optimization. IEEE Trans. Emerg. Top. Comput. Intell. 4 , 369–384. https://doi.org/10.1109/TETCI.2019.2916051 (2020).


Pradhan, D., Wang, S., Ali, S., Yue, T. & Liaaen, M. CBGA-ES+: A cluster-based genetic algorithm with non-dominated elitist selection for supporting multi-objective test optimization. IEEE Trans. Softw. Eng. 47 , 86–107. https://doi.org/10.1109/TETCI.2019.2916051 (2021).

Bian, H. L., Tian, J., Yu, J. L. & Yu, H. Bayesian co-evolutionary optimization based entropy search for high-dimensional many-objective optimization. Knowl. Based Syst. 274 , 1–13. https://doi.org/10.1016/j.knosys.2023.110630 (2023).

Li, W., Chen, Y. T., Dong, Y. H. & Huang, Y. A solution potential-based adaptation reference vector evolutionary algorithm for many-objective optimization. Swarm Evol. Comput. 84 , 1–15. https://doi.org/10.1016/j.swevo.2023.101451 (2023).

Wang, Y. J., Gao, P. & Chen, Y. An improved farmland fertility algorithm for many-objective optimization problems. Sci. Rep. 12 , 1–24. https://doi.org/10.1038/s41598-022-06329-x (2022).

Khurana, D., Yadav, A. & Sadollah, A. A non-dominated sorting based multi-objective neural network algorithm. MethodsX 10 , 1–16. https://doi.org/10.1016/j.mex.2023.102152 (2023).

Huang, H. J., Zheng, B. F., Wei, X. X., Zhou, Y. Q. & Zhang, Y. D. NSCSO: A novel multi-objective non-dominated sorting chicken swarm optimization algorithm. Sci. Rep. 14 , 1–38. https://doi.org/10.1038/s41598-024-54991-0 (2024).

Li, M. Q., Yang, S. X. & Liu, X. H. Bi-goal evolution for many-objective optimization problems. Artif. Intell. 228 , 45–65. https://doi.org/10.1016/j.artint.2015.06.007 (2015).


Wei, L. S. & Li, E. C. A many-objective evolutionary algorithm with population preprocessing and projection distance-assisted elimination mechanism. J. Comput. Design Eng. 10 , 1988–2018. https://doi.org/10.1093/jcde/qwad088 (2023).

Sun, Y. F., Bian, K., Liu, Z., Sun, X. & Yao, R. X. Adaptive strategies based on differential evolutionary algorithm for many-objective optimization. Discret. Dyn. Nat. Soc. 2021 , 1–17. https://doi.org/10.1155/2021/2491796 (2021).

Zhou, S. Q., Dai, Y. R. & Chen, Z. H. Dominance relation selection and angle-based distribution evaluation for many-objective evolutionary algorithm. Swarm Evol. Comput. 86 , 1–19. https://doi.org/10.1016/j.swevo.2024.101515 (2024).

Wang, X. W., Xie, Z. H., Zhou, X. & Gu, X. S. A two-stage adaptive reference direction guided evolutionary algorithm with modified dominance relation for many-objective optimization. Swarm Evol. Comput. 78 , 1–14. https://doi.org/10.1016/j.swevo.2023.101272 (2023).

Zhang, W., Liu, J. C., Liu, J. H., Liu, Y. C. & Tan, S. B. A dual distance dominance based evolutionary algorithm with selection-replacement operator for many-objective optimization. Expert Syst. Appl. 237 , 1–25. https://doi.org/10.1016/j.eswa.2023.121244 (2023).

Shang, K. & Ishibuchi, H. A new hypervolume-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 24 , 839–852. https://doi.org/10.1109/TEVC.2020.2964705 (2020).

Zhang, W., Liu, J. C., Liu, J. H., Liu, Y. C. & Wang, H. H. A many-objective evolutionary algorithm based on novel fitness estimation and grouping layering. Neural Comput. Appl. 35 , 24283–24314. https://doi.org/10.1007/s00521-023-08950-x (2023).

Nan, Y., Shang, K., Ishibuchi, H. & He, L. J. A Two-stage Hypervolume Contribution Approximation Method Based on R2 Indicator. In: 2021 IEEE Congress on Evolutionary Computation (IEEE CEC 2021) . 2468-2475. https://doi.org/10.1109/CEC45853.2021.9504726 (2021).

Wu, M., Li, K., Kwong, S. & Zhang, Q. Evolutionary many-objective optimization based on adversarial decomposition. IEEE Trans. Cybern. 50 , 753–764. https://doi.org/10.1109/TCYB.2018.2872803 (2020).


Fan, M. W. et al. Improved multi-objective differential evolution algorithm based on a decomposition strategy for multi-objective optimization problems. Sci. Rep. 12 , 1–14. https://doi.org/10.1038/s41598-022-25440-7 (2022).

Peng, F. A., Lv, L., Chen, W. R. & Wang, J. A projection-based evolutionary algorithm for multi-objective and many-objective optimization. Processes 11 , 1–22. https://doi.org/10.3390/pr11051564 (2023).

Sun, Y., Xue, B., Zhang, M. & Yen, G. G. A new two-stage evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 23 , 748–761. https://doi.org/10.1109/TEVC.2018.2882166 (2019).

Yang, Q. T., Zhan, Z. H., Kwong, S. & Zhang, J. Multiple populations for multiple objectives framework with bias sorting for many-objective optimization. IEEE Trans. Evol. Comput. 27 , 1340–1354. https://doi.org/10.1109/TEVC.2022.3212058 (2023).

Liu, S. C. et al. Many-objective job-shop scheduling: A multiple populations for multiple objectives-based genetic algorithm approach. IEEE Trans. Cybern. 53 , 1460–1474. https://doi.org/10.1109/TCYB.2021.3102642 (2023).

Tian, Y., He, C., Cheng, R. & Zhang, X. A multistage evolutionary algorithm for better diversity preservation in multiobjective optimization. IEEE Trans. Syst. Man Cybern. Syst. 51 , 5880–5894. https://doi.org/10.1109/TSMC.2019.2956288 (2021).

Sun, C. H., Wang, Y. H., Wan, P. & Du, Y. A cooperative spectrum sensing algorithm based on principal component analysis and K-medoids clustering. In: 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC). 835-839. https://doi.org/10.1109/YAC.2018.8406487 (2018).

Osinsky, A., Bychkov, R., Trefilov, M., Lyashev, V. & Ivanov, A. Regularization for cholesky decomposition in massive MIMO detection. IEEE Wirel. Commun. Lett. 12 , 1603–1607. https://doi.org/10.1109/LWC.2023.3284349 (2023).

Gu, Q. H., Gao, S., Li, X. X., Xiong, N. N. & Liu, R. R. An adaptive adjacent maximum distance crossover operator for multi-objective algorithms. Soft Comput. 27 , 7419–7438. https://doi.org/10.1007/s00500-023-07978-4 (2023).

Tian, Y., Cheng, R., Zhang, X. Y. & Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 12 , 73–87. https://doi.org/10.1109/MCI.2017.2742868 (2017).

He, Z., Yen, G. G. & Zhang, J. Fuzzy-based Pareto optimality for many-objective evolutionary algorithms. IEEE Trans. Evol. Comput. 18 , 269–285. https://doi.org/10.1109/TEVC.2013.2258025 (2014).

Qiu, W. et al. Ensemble many-objective optimization algorithm based on voting mechanism. IEEE Trans. Syst. Man Cybern. Syst. 52 , 1716–1730. https://doi.org/10.1109/TSMC.2020.3034180 (2020).

Wang, J. & Chen, H. A Weight Vector Bi-Objective Evolutionary Algorithm with Bi-criterion Evolution for Many-Objective Optimization. In: 2022 IEEE 4th International Conference on Power, Intelligent Computing and Systems (ICPICS). 273-279. https://doi.org/10.1109/ICPICS55264.2022.9873807 (2022).

He, X. & Dai, C. An Improvement Evolutionary Algorithm Based on Decomposition and Grid-based Pareto Dominance for Many-objective Optimization. In: 2022 Global Conference on Robotics, Artificial Intelligence and Information Technology (GCRAIT). 145-149. https://doi.org/10.1109/GCRAIT55928.2022.00039 (2022).

Zhang, Q., Liu, W. & Li, H. The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances. In: 2009 IEEE Congress on Evolutionary Computation (CEC). 203–208. https://doi.org/10.1109/CEC.2009.4982949 (2009).

Zhang, K., Shen, C. & Yen, G. G. Multipopulation-based differential evolution for large-scale many-objective optimization. IEEE Trans. Cybern. 53 , 7596–7608. https://doi.org/10.1109/TCYB.2022.3178929 (2023).

Sun, Y., Yen, G. G. & Yi, Z. IGD indicator-based evolutionary algorithm for many-objective optimization problems. IEEE Trans. Evol. Comput. 23 , 173–187. https://doi.org/10.1109/TEVC.2018.2791283 (2019).

Wilcoxon, F. Individual comparisons by ranking methods. Breakthr. Stat. 1 , 196–202. https://doi.org/10.1007/978-1-4612-4380-9_16 (1992).

Ishibuchi, H., Imada, R., Masuyama, N. & Nojima, Y. Comparison of Hypervolume, IGD and IGD+ from the Viewpoint of Optimal Distributions of Solutions. In: 2019 International Conference on Evolutionary Multi-Criterion Optimization (EMO). 332–345. https://doi.org/10.1007/978-3-030-12598-1_27 (2019).

Cai, X. J. et al. A sharding scheme-based many-objective optimization algorithm for enhancing security in blockchain-enabled industrial internet of things. IEEE Trans. Ind. Inform. 17 , 7650–7658. https://doi.org/10.1109/TII.2021.3051607 (2021).


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 62362026 and 62162021; in part by the specific research fund of The Innovation Platform for Academicians of Hainan Province under Grant YSPTZX202314; and in part by the Key Project of Hainan Province under Grant ZDYF2023GXJS158.


Author information

Authors and Affiliations

School of Information and Communication Engineering, Hainan University, Haikou, 570228, China

School of Computer Science and Technology, Hainan University, Haikou, 570228, China

Huanhuan Yu, Hansheng Fei, Xiangdang Huang & Qiuling Yang

Academic Affairs Office, Shandong Agricultural University, Taian, 271018, China

Huijie Zhang

Innovation Platform for Academicians of Hainan Province, Hainan University, Haikou, 570228, China

Jiale Zhao, Huanhuan Yu, Hansheng Fei, Xiangdang Huang & Qiuling Yang


Contributions

Conceptualization, J.L.Z., H.J.Z. and X.D.H.; Methodology, J.L.Z., H.J.Z. and X.D.H.; Software, J.L.Z., H.J.Z., H.H.Y. and H.S.F.; Validation, H.H.Y. and H.S.F.; Formal analysis, J.L.Z. and H.J.Z.; Investigation, H.H.Y. and H.S.F.; Resources, Q.L.Y.; Data curation, H.H.Y. and H.S.F.; Writing-original draft, J.L.Z.; Writing-review&editing, J.L.Z., H.J.Z., H.H.Y. and H.S.F.; Visualization, J.L.Z. and H.J.Z.; Supervision, H.H.Y. and H.S.F.; Project administration, J.L.Z. and Q.L.Y.; Funding acquisition, Q.L.Y.; All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Qiuling Yang.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Zhao, J., Zhang, H., Yu, H. et al. A many-objective evolutionary algorithm based on three states for solving many-objective optimization problem. Sci Rep 14 , 19140 (2024). https://doi.org/10.1038/s41598-024-70145-8


Received : 26 January 2024

Accepted : 13 August 2024

Published : 19 August 2024

DOI : https://doi.org/10.1038/s41598-024-70145-8


Welcome to the daily solving of our PROBLEM OF THE DAY with Jay Dalsaniya. We will discuss the entire problem step by step and work towards developing an optimized solution. This will not only help you brush up on your concepts of trees but also build up your problem-solving skills. Given a binary tree, return its left view. The left view of a binary tree is the set of nodes visible when the tree is viewed from the left side. The task is to complete the function leftView(), which accepts the root of the tree as its argument. If no left view is possible, return an empty result.

The left view of the following tree is 1 2 4 8.

              1
            /   \
           2     3
          / \   / \
         4   5 6   7
          \
           8

Input:
           1
          / \
         3   2
Output: 1 3

Give the problem a try before going through the video. All the best!!! Problem Link: https://practice.geeksforgeeks.org/problems/left-view-of-binary-tree/1
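If you want to check your answer afterwards, here is a minimal sketch of one common approach: a level-order (breadth-first) traversal that records the first node seen at each level. The Node class and the list return value are assumptions made to keep the sketch self-contained.

```python
from collections import deque

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def leftView(root):
    """Return the left view: the first node visited at each level of a
    breadth-first traversal (empty list if the tree is empty)."""
    if root is None:
        return []
    view, queue = [], deque([root])
    while queue:
        for i in range(len(queue)):
            node = queue.popleft()
            if i == 0:                  # first node of this level
                view.append(node.data)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
    return view

# The tree from the first example above; expected left view: 1 2 4 8
root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left, root.left.right = Node(4), Node(5)
root.right.left, root.right.right = Node(6), Node(7)
root.left.left.right = Node(8)
print(leftView(root))  # [1, 2, 4, 8]
```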
