

  • Feb 2, 2022
  • 12 min read

Sample Questions to Help You Prepare for an Aptitude Test

If you are applying for a job then there's a chance you are going to be asked to take an aptitude test. Our guide full of sample questions is here to help.

Mike Dalley


HR and Learning & Development Expert

Reviewed by Hayley Ramsey


Aptitude tests are a common part of applying for jobs, and they’re regularly used to test applicants for educational settings, too. They allow organizations to determine whether candidates are the right fit for the role or the course, and whether they align with the organization’s values and culture. Like them or not, aptitude tests are a fact of life and have many benefits. There are lots of different aptitude tests, and recruiters will utilize different kinds of psychometric tests depending on what information they want to dig into.

Aptitude tests can be challenging, as they are often time-controlled and, in some cases, points are actively deducted for wrong answers, meaning educated guesses might need to be avoided. Learning about the various types of aptitude tests will ensure you are in the strongest position to excel when taking them. This article covers 17 of the most popular aptitude tests, what they cover and a sample question for each one.

1. Abstract reasoning

Abstract reasoning tests are used when an organization wants to determine your ability to think clearly and process information. These tests use shapes and patterns to assess your problem-solving skills and often involve completing sequences.

Abstract reasoning example

The correct answer is the middle option.

This is because, in the sequence, the pattern turns 45 degrees anti-clockwise and gains a semi-circular line, then turns 45 degrees clockwise and gains another semi-circular line in the next entry. This means that the missing pattern needs to turn 45 degrees anti-clockwise and have an extra semi-circular line added, making the third option the next in the sequence.

2. Verbal reasoning

Verbal reasoning tests assess how well you can analyze written information, testing your ability to read and reach conclusions. These tests typically present you with passages of text or patterns of words, asking you to come up with answers, or to find the next steps in word patterns.

Verbal reasoning example

The correct answer is: ‘The degree of avoidance of social media functions’.

The first passage states that you shouldn’t hesitate to use all the functions that social networking sites offer, while the second passage advises being selective about which functions you use.

3. Numerical reasoning

Popular for roles which require interpreting data or using numerical information, numerical reasoning tests assess how well you understand this type of information. Questions typically present candidates with sets of data and ask them questions based on their interpretation of this information.

Numerical reasoning example

The correct answer is 232.

First, you need to work out how many employees took the test. The pie chart shows that 2.68 (in hundreds) of employees took the test, which converts to 268 people. The question states that the company has 500 employees, so you need to deduct 268 from 500 to find the number of people who didn’t participate in the survey: 500 – 268 = 232.
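If you want to double-check this kind of arithmetic, the same working can be retraced in a couple of lines of Python (a minimal sketch; the figures are taken from the explanation above, since the pie chart itself is not reproduced here):

```python
took_test = 268                      # the pie chart shows 2.68 (in hundreds) of employees
total_employees = 500
print(total_employees - took_test)   # 232 employees did not take part
```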

4. Spatial reasoning

Common for assessing candidates applying for technical or engineering roles, spatial reasoning tests require candidates to assess movement or patterns associated with 2D or 3D objects. This skill is often referred to as ‘spatial awareness’.

In this question, you have to align each of the three shapes to their corresponding edges (x along x, y along y, and z along z). Then, you are asked which of the shapes in the box the newly aligned shape corresponds to.

Spatial reasoning test

The correct answer is C.

In this example, when you align the three shapes along the matching axes, it will match shape C.

5. Diagrammatic reasoning

Diagrammatic reasoning tests are similar to abstract reasoning but focus more on sequences and the application of logic. These questions often present the candidate with a flow diagram or ‘start to end’ sequence, asking them to fill in the gaps based on what they know.

Diagrammatic reasoning example

In this question, you study the symbols and their effects, linking them with the output of the diagram. Not only have the images been rotated 90 degrees counterclockwise, but the triangle and the arrow have also swapped places: adding an orange triangle swaps the second and third figures around. Therefore, C is the correct answer.

6. Logical reasoning

Logical reasoning tests are often visual and rely on candidates applying rules. They measure the ability to draw conclusions rather than draw on prior knowledge. As the name suggests, the test is designed to evaluate analytical thinking and logical capabilities.

In this question, we must find the image that comes next in the sequence.

Logical reasoning example

The correct answer is A.

Work out how the shapes in the heptagon are moving: the black diamond moves two points anti-clockwise, as does the white triangle, while the white circle moves four points anti-clockwise. Once you know these rules, the answer becomes easier to work out.

7. Inductive reasoning

Inductive reasoning tests are similar to abstract and logical reasoning tests. These are used to test how well you draw conclusions. For example: “If this happened in the past, then this is likely to happen in the future.” Questions are visual and often feature pattern sequences.

Inductive reasoning example

Here, we analyze the relationships between each box. We can deduce that as the sequence progresses, each shape in the box moves one place anti-clockwise and alternates between black and white. To use the circle as an example, in the missing box it must therefore be in the top right corner, and colored white.

8. Analytical reasoning

A popular test for graduate schemes, analytical reasoning draws upon a wide variety of abilities, involving critical thinking, logic, and finding relationships. Questions are visual in nature and can cover a wide variety of formats. Time controls are often very tight with analytical reasoning tests.

In this question, we must find the missing shape.

Analytical reasoning example

The answer is B.

Analyze the shapes, in terms of what comes before and what comes next. In this example, the shape is staying the same but is decreasing in size. Therefore, the square is the right answer.

9. Mechanical reasoning

Mechanical reasoning tests are used for technical or engineering roles, but unlike spatial reasoning tests, they specifically draw upon mechanical principles, such as laws of motion or physics. Questions often involve gears, pulleys, levers, and so on.

Mechanical reasoning example

The correct answer is A, B, D, F.

Mechanical reasoning tests require you to follow the direction of the wire and work out how it’s moving each wheel. Work out the direction of each wheel in turn.

10. Situational judgement

Situational judgement tests, sometimes called ‘personality’ or ‘behavioral’ tests, focus on how you might behave in a certain work-related situation by giving you a scenario and a few different options to choose from, or by asking you to rank the options in order of how likely you are to pursue each action. Often, these questions are presented as having ‘no right or wrong answer’, but organizations use these tests to understand how you’ll fit in with how they operate.

Situational judgement example

The correct answer depends on the role, the organization, and its culture.

Often in situational judgement tests, there are no right or wrong answers, but some will be ‘more correct’ than others. It often helps to read up on the working culture of the organization and the role before you take a situational judgement test. In this question, D ticks many of the boxes, but the other three are perfectly acceptable responses as well. In target-driven organizations, C might be a good option, whereas in customer-focused organizations, A or B work well too.

11. Basic numeracy

As the name suggests, basic numeracy tests present you with basic numeracy-related questions to gauge your competence in basic mathematical principles. These tests are often quick-fire to see how swiftly you can come up with the answers, and how many you can get through within a certain time limit.

Basic numeracy aptitude example

The correct answer is 15.

In this question, you are likely to arrive at the answer right away, because there are several ways to reach it. Glancing at the clock might instinctively give you 15, as the shaded section represents a quarter of an hour. You might also take ’60 minutes’, see that a quarter of the clock is shaded, and work out 60/4 or 60 x 0.25. Finally, you could count the shaded minute markers on the clock, which total 15.

12. Cognitive ability

These tests assess your ability across various domains, such as perception, reading, verbal and mathematical ability, or logic. They often combine features of other aptitude tests, switching from text-based questions to diagrammatic ones.

Cognitive ability aptitude example

In this example of a verbal cognitive question, you are looking for a pair of words with a relationship the same as ‘Replete’ and ‘Famished’. Seeing as ‘Replete’ means ‘full’ and ‘Famished’ means ‘very hungry’, the words are opposites, and contradict each other. The only other two words which do the same are ‘Blatant’ (obvious) and ‘Masked’ (hidden or disguised).

13. Basic comprehension

Basic comprehension assesses how well you can read information, process it, and draw conclusions or answer questions based on what you have read. They usually involve reading a passage of text and answering questions based on what the text says (or doesn’t say).

Basic comprehension aptitude example

The correct answer is B.

Reading the paragraphs will demonstrate that the Tyrannosaurus rex was misnamed ‘the Ferrari of dinosaurs’, as it was in fact slow; the nickname implies that it was fast. The paragraphs also talk about speed and agility at greater length than anything else.

14. Financial reasoning

Financial reasoning tests are very similar to numerical reasoning, but with questions more focused on financial numbers, such as currency, stock measurements and other accounting principles. As you might expect, these tests are favored by financial services companies or consultants.

Financial reasoning example

The correct answer is E.

Here, the share price of Drebs is 40% higher (a 40% increase) than one month ago, or 1.40x (140%) of what it was a month ago. Therefore, take Drebs Ltd’s current share price (18) and divide it by 1.4.

18 / 1.4 = €12.86.
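For anyone who wants to verify the reverse-percentage step, here is a minimal Python sketch of the same calculation (the figure of 18 comes from the explanation above):

```python
# Undoing a 40% increase means dividing by 1.40, not subtracting 40%.
current_price = 18                     # Drebs Ltd's share price today, in euros
price_one_month_ago = current_price / 1.40
print(round(price_one_month_ago, 2))   # 12.86
```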

15. Deductive reasoning

Deductive reasoning assesses your ability to draw conclusions based on the facts available. These tests are often used by organizations that value investigative prowess, critical thinking, and logic. Questions present a series of facts — often competing with each other — thus testing your ability to arrive at a singular, correct conclusion.

Deductive reasoning example

By process of elimination, we can work out that Luke is the strongest person: he is stronger than John, John is stronger than Mike, and there are only three people, so Luke is the strongest of all. It follows that Mike is not stronger than Luke.
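The same transitive argument can be sketched in a few lines of Python; the numeric strength values below are invented purely for illustration, and only their ordering matters:

```python
# John is stronger than Mike, and Luke is stronger than John (made-up numbers).
strength = {"Mike": 1, "John": 2, "Luke": 3}
print(strength["Luke"] > strength["John"] > strength["Mike"])  # True: strength is transitive
print(max(strength, key=strength.get))                         # 'Luke' is the strongest
```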

16. Clerical aptitude

Clerical tests are used to assess clerical or administrative ability. These are used to assess administrators, secretaries, or executive assistants. In addition to the example question below, these tests might include timed assessments to do with filing information or typing ability.

Clerical aptitude example

Here, administrative professionals would be looking for the sorting of names by surname, as is the norm. The underlined name’s surname is third alphabetically (Rolen, Romero, Ross and Ruben).

17. Non-verbal reasoning

Non-verbal reasoning tests draw upon solving problems or patterns using visual reasoning. They focus on deducing sequences and relationships in sets of images. These are popular tests and can be given to children or adults, often as quick-fire questions designed to measure analytical skills.

Non-verbal reasoning example

Since the shape with squiggly lines alternates with the shape with straight lines, we are looking for a shape with straight lines next. The shapes with straight lines rotate 45 degrees clockwise each time they appear in the sequence. Therefore, the next shape will be A.

Final thoughts

Aptitude tests can be daunting, especially when combined with a job application or something similar. They are there to help gauge your fit to a role or qualification, but this doesn’t mean that failing an aptitude test is always an indication that you are not right for the role. Tests are tests, and it’s challenging to prepare for what we don’t know. Aptitude tests rarely directly correlate to actual job responsibilities, either, although some career assessment and personality tests do.

Getting to know the different types of aptitude tests will ensure you are best placed to predict what might be asked of you in a job application. Knowing about what they cover might also help you think about which career is right for you and will help you prepare when searching for a job. Learning about them will enable you to prepare or revise test questions, to give you the best chance of success possible. Good luck!

Do you have to take an aptitude test as part of the recruitment process? Did you find this useful? Let us know in the comments and share it with your friends!

This is an updated version of an article originally published on 20 June 2017.


Free practice assessment tests and free aptitude tests

Assessments can take place at the employer’s offices or in a private assessment center, and are used to measure an applicant’s true capabilities and characteristics. By helping companies identify the candidates that will most likely perform well on the job, these tests can lead to additional business benefits. On assessment-training.com you can practice several free assessment tests and aptitude tests with explanations. Some examples of our free assessment tests and free aptitude tests that you can practice are:

  • Raven’s Progressive Matrices Test
  • SHL Assessment
  • Watson Glaser Critical Thinking Test
  • Abstract Reasoning
  • Numerical Reasoning

These free tests are made to help you get started and to give you a better idea of what to expect. On this page, you will find several free practice assessments and aptitude tests. You can also practice intelligence tests and other types of tests on our platform. All these tests are used for selection procedures at companies.

Why practice for an assessment?

Practicing for an assessment is important, as you do not want to be faced with any surprises on the day. Practicing the components that will appear in your assessment ensures that you are well prepared. Most assessment tests are more difficult than you think, because the questions you expect can be asked slightly differently. By practicing a lot, you are prepared for every question and you increase the chance that you will pass your assessment!

How long in advance should you start practicing?

Preferably as soon as you know that you have to take an assessment! Practicing is very important. Starting a couple of months before your assessment can be crucial for your success. It ensures that you master the thinking process and are familiar with every type of question. Always start practicing at least 30 days in advance! Do you have any questions or do you want more information? Please do not hesitate to contact us.



Aptitude Tests: Test Types & Free Practice Materials (2024)

Aptitude tests are effective testing instruments for screening potential candidates before the interview round. As one of the most accurate predictors of employee performance, the tests are prevalent across various industries: finance, management, and engineering. Through different test types, this pre-hiring tool highlights individuals’ cognitive strengths in every aspect, ensuring recruiters hire the top talent that best matches their demands.

To help you get your foot in the door of desirable companies, we will provide a comprehensive overview of aptitude tests in this article: what to expect, free sample questions, and practical tips to ace the exams!

What is an aptitude test?

An aptitude test is a standardized psychometric assessment designed to measure candidates' cognitive abilities and behavioral traits. In most companies, job applicants are often required to take the tests as a screening round for interviews. Some notable providers of aptitude tests are SHL, Aon, Saville, and Kenexa. Typically, an aptitude test comprises various tasks depending on a particular domain the company wants to highlight, namely:

  • Numerical Reasoning Test
  • Verbal Reasoning Test
  • Deductive Reasoning Test
  • Inductive Reasoning Test

Above are the most widely-used test types; however, you may encounter less common assessments based on the specific role you are applying for, including:

  • Estimation Test
  • Spatial Reasoning Test
  • Diagrammatic Reasoning Test
  • Mechanical Reasoning Test
  • Attention Test
  • Memory Test

Note:  Besides the written format, aptitude tests can also use gamified technology, which increases the interaction between test takers and the tests. Some common assessments built under a game-based format are listed in the next part of this article, so stay tuned!

When do aptitude tests take place?

Aptitude tests often come in the second stage of the selection process. Companies use online assessments as a screening tool to pick out potential candidates for the interview round. Normally, you are invited to take the exams after finishing your application form on the company’s website. Here is a brief outline of how a recruitment process with aptitude tests typically looks: Application Form ⇒ Online Tests ⇒ Interview.


Stage 1: Application Form

In this stage, you need to access and sign up on the hiring company’s website, upload your documents and submit an online application. The company gains an overview of your background, personality, and motivation through your application form, and sees how these are compatible with the organization’s culture and values.

Stage 2: Online Tests

After completing the application form, you will be directed to online tests that measure candidates’ cognitive abilities and behavioral strengths. These exams are often tightly timed, comprising multiple-choice questions in different areas, such as numerical, verbal, and deductive reasoning. Some companies include situational judgment tests or occupational personality questionnaires to assess candidates’ behavior or problem-solving skills at the workplace.

Stage 3: Interview

The interview round can be administered in the form of phone screenings or in-person discussions. This stage often comes after the online aptitude tests, and it gives the company a better understanding of your personality, skills, and work experience. At the same time, it is also your chance to better understand the company by asking questions and discussing further.

Aptitude tests comprise different types of test, depending on the skills and abilities being measured. Besides the common types like numerical, verbal, and deductive reasoning tests prevalent in most industries, there are other assessments customized to the needs of specific roles. The next part of this article will dive deep into each exam, so stay tuned!

Aptitude test - Numerical reasoning test

The numerical reasoning test is a widely used assessment that measures candidates’ math skills and ability to interpret numerical data. The tests are offered by most test publishers, including some notable names like SHL, Aon, and Saville. The time limit varies from 10 to 30 minutes, allocated across 3 main question types:

  • Word problem
  • Data interpretation
  • Calculation


Word problem and Data Interpretation are the most common question types, as they account for 80% of Numerical Reasoning Tests. The next part will cover deeper insights and examples for each type, so keep moving!

In this question type, you will encounter mathematical questions in written format, from which you have to extract the information and perform arithmetic calculations such as addition, subtraction, multiplication, or division. Here is an example from MConsultingPrep:


Source: MConsultingPrep

Explanation:

The number of people actually deciding to take English courses after liking the fan page is 6,300 × 2% = 126 people.

The revenue from these customers is 126 × 5.3 = 667.8 million VND. The campaign would generate 667.8 - 40 = 627.8 million VND in profit.

So, the correct answer is C.
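As a quick check, the calculation above can be retraced in a few lines of Python (all figures come from the worked explanation; amounts are in million VND):

```python
fans = 6300
buyers = fans * 0.02        # 2% of fans actually buy a course -> 126 people
revenue = buyers * 5.3      # 5.3 million VND per customer     -> 667.8
profit = revenue - 40       # minus the 40 million VND campaign cost
print(round(buyers), round(revenue, 1), round(profit, 1))   # 126 667.8 627.8
```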

Data interpretation questions ask you to understand and use graphical and tabular data to calculate a required value. This common question type measures your data analysis and decision-making ability, which is crucial for any business-related role, as you regularly face numerical tables and charts.


If the value of investment increases 10% per annum, what would be the future value of total investment for projects completed in 2019 as on the year 2021? Compound rate is not taken into account.

A. €496M

B. €82M

C. €492M

D. €410M

To calculate the future value of total investment for projects completed in 2019 as on the year 2021, you will need to first calculate the total value of investments in 2019, then the increase of value of investments in 2 years and finally add the two numbers together.

Total value of investments in 2019 = 250 + 160 = €410M

The increase of value of investments in two years (2019 to 2021) = 410 × 10% × 2 = €82M

Required value = 410 + 82 = €492M
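Because the question rules out compounding, the growth here is simple interest. The short Python sketch below retraces the three steps with the figures from the explanation, and also shows why option A (€496M) may appear among the choices: it is roughly what you would get if you did compound the 10%:

```python
invested_2019 = 250 + 160                  # €410M of projects completed in 2019
simple_growth = invested_2019 * 0.10 * 2   # 10% a year for two years, no compounding
print(invested_2019 + simple_growth)       # 492.0 -> €492M, answer C

# For comparison, compounding the 10% would give roughly €496M, i.e. option A.
print(round(invested_2019 * 1.1 ** 2, 1))  # 496.1
```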

Calculation questions consist of basic arithmetic operations, such as addition, subtraction, multiplication, and division. Your job is to perform the calculations mentally, as quickly and accurately as possible, since this question type often does not permit a calculator. Here are some sample questions for you to practice:

QUESTION 1: 20 x 10 - 35 = ?

QUESTION 2: 80 : 5 + 250 = ?
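These are mental-arithmetic drills; if you want to check your answers (reading the ‘:’ in question 2 as a division sign), a quick Python sketch gives:

```python
print(20 * 10 - 35)   # Question 1: 165
print(80 / 5 + 250)   # Question 2: 266.0
```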

Aptitude test - Verbal reasoning test

The verbal reasoning test is an aptitude test measuring your language comprehension and ability to process written information. Along with the numerical reasoning test, this assessment is offered by most test publishers. However, the format varies among providers, and the duration ranges from 15 to 20 minutes. There are 5 popular types of verbal reasoning question, namely:

  • Synonym/Antonym
  • Analogy
  • Word association
  • Making inferences
  • Reading comprehension


Each question type aims to assess one domain of your language comprehension and lexicon. Let’s explore their purpose, sample questions, and training materials in the next part of this blog!

Synonym/Antonym

In this question type, you are asked to determine the relationship between two given words, whether they are synonyms, antonyms, or neither. This question aims to measure your vocabulary level, one of the critical areas in Verbal Reasoning Tests. Let’s look at the example below:

When published by test providers, synonym/antonym questions often appear in gamified assessments. Game-based questions record the time you take as a factor in your overall result.


Source: Test Partnership

Answer: Almost the same (Synonyms)

Explanation: Exit is the action of leaving. So these words are synonyms.

Analogy questions often have two parts: one presents an example word pair; from that, you have to select the answer that forms another pair, with a similar relationship, for a given word. This question type measures not only your lexicon but also the ability to find connections between words.


Explanation: In this example, we can see that DRUM is a type of INSTRUMENT. Therefore, the bridge is Item to Category, and we need to find the category that DRILL falls into. Out of all the options, that is TOOL.

Word association

This question type asks you to select one or two words that have a different meaning from the rest. Identifying the odd-one-out requires a good understanding of the semantic relationship between words. Test yourself with this example from MConsultingPrep:

Making inferences is one of the most common types of verbal reasoning questions, in which you have to determine whether a given inference derives from the passage by selecting “true”, “false”, or “cannot say”. Besides verbal comprehension, you also need logical ability to reason the information from the text. 


Answer: TRUE

Explanation: It is stated in the second sentence that when played at a low intensity relative to background noise, all three types of sound reduced pain sensitivity in the mice. The adverb "surprisingly" and the phrase "did not expect that" have the same meaning.

In reading comprehension questions, you have to process information from a given passage and spot relevant details. Typically, you are asked to determine which statement is included/not included in the text or to identify the passage's main idea. Below is a sample question from MConsultingPrep:


Explanation: The reasons are listed in the second and third sentences of the passage. However, there is no detail about tobacco smoke making people choke.

Aptitude test - Estimation test

In estimation tests, you will have to perform quick and accurate estimations without using a calculator. Craft and technical positions are the ideal subjects for this test, as estimation skills are crucial to them. The tests are often tightly timed, or your completion time is factored into your overall performance.

Let’s look at a gamified question from JobFlare to better understand this type of test:


Source: JobFlare

Answer: Left Greater

Aptitude test - Deductive reasoning test

The deductive reasoning test is a logical thinking assessment requiring you to draw valid inferences from general clues and facts. The main aim of this assessment is to evaluate your ability to make logical deductions for problem solving. The completion time varies among test providers, such as SHL, Aon, and Kenexa, but is typically around 20-30 minutes. There are 3 popular types of deductive reasoning question:

  • Syllogism
  • Ordering and arrangement
  • Grouping


Keep scrolling to dive deeper into each question type!

In syllogism questions, you are presented with general facts used to verify one or more conclusions. Specifically, your job is to determine which option follows the given statements. Let’s examine this question:


Denote: T: Tablets, A: Ample, L: Laptops, P: Phones

1. Analyzing the statements:

Some tablets are Ample. ⇒ T ∩ A
Some laptops are tablets. ⇒ L ∩ T
Every tablet is a phone. ⇒ If one is a Tablet, one must be a Phone ⇒ T ⊂ P

We draw the Euler diagram as in the figure (at the top of the page)

Note: We can only draw the diagram based on what is given from the premises.

2. Check the conclusions:

I. All phones are not laptops. ⇒ If one is a phone, one cannot be a laptop ⇒ P and L are disjoint. Based on the diagram, we can see that (L ∩ T ⊂ P) ⇒ (L ∩ P) ⇒ some phones can be laptops ⇒ Conclusion I does not follow.

II. Some phones are Ample. ⇒ There is at least one phone that is Ample ⇒ P ∩ A. Based on the diagram, we can see that A intersects T (A ∩ T ⊂ P), so A intersects P (A ∩ P) ⇒ some phones, which are also tablets, must be Ample. ⇒ Conclusion II follows.
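If the set notation feels abstract, the same check can be played out with small concrete sets in Python. The element names below are invented; the point is that a single model satisfying all three premises is enough to break conclusion I, while the comments spell out why conclusion II must hold in any such model:

```python
# A tiny model that satisfies all three premises (element names are made up).
tablets = {"t1"}
ample   = {"t1", "a1"}       # "Some tablets are Ample": t1 is both
laptops = {"l1", "t1"}       # "Some laptops are tablets": t1 is both
phones  = tablets | {"p1"}   # "Every tablet is a phone": tablets is a subset of phones

# Conclusion I: "All phones are not laptops" would mean the two sets are disjoint.
print(phones.isdisjoint(laptops))   # False -> the conclusion does not have to follow

# Conclusion II: "Some phones are Ample". Some tablet is Ample and every tablet
# is a phone, so that tablet is a phone that is Ample - true in every such model.
print(bool(phones & ample))         # True
```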

Ordering and arrangement

In Ordering and arrangement questions, you have to arrange a group of items or people in the correct order by using given hypotheses. Let’s look at an example of this common question type:


From clue (1) and (2), we get two pairs of dishes: [Biscuits-Curries]; [Frankie-Kababs]

From clue (3), we get the position of Pasta: Pasta _ _ _ _. Kababs is neither to the immediate right nor to the immediate left of Pasta, so Kababs can be in any position except the first and second.

From clue (4), we get the position of Frankie: Pasta _ _ Frankie _

From clue (5) and (1), we get the positions of Biscuits and Curries: Pasta Curries Biscuits Frankie _

The last empty position should be of Kababs.

The final arrangement will be as follows (at the top of the page)

Therefore, Biscuits is between Curries and Frankie.

To solve the grouping questions, you need to divide the subjects into different categories based on one or a few sets of conditions. Let’s practice a sample question from MConsultingPrep:


If the hen stays in B, the bear cannot stay, and every other animal must stay in either A or C, depending on the position of the bee colony. If the hen stays in either A or C, and the bees stay in B, every animal must stay in A or C. If the hen stays in either A or C and the bees don’t stay in B, every animal can stay in B except the bear. Therefore, in every case, the bear can only stay in either A or C, or not stay at all.


Aptitude test - Inductive reasoning test

The inductive reasoning test is a common aptitude test assessing your ability to recognize and interpret the patterns of abstract figures. The tests often last 15-20 minutes and include around 20-30 questions. Some notable providers of inductive reasoning tests are SHL, Aon, and Saville. Typical question types in an inductive reasoning test are:

  • Figure series
  • Odd one out
  • Matrix


Let’s take a closer look at each question type together!

In Figure Series, you will be asked to find the missing figure in a sequence by spotting the rule of the series. Let’s take a look at an example of this question type:


Source: “How to pass diagrammatic reasoning tests” by Mike Bryon

Rule 1: The arrow rotates counterclockwise in every step.

Rule 2: The degree of rotation is 360, 180, 90, 45 respectively.

Odd one out questions require you to identify the figure that doesn’t follow the common rule of a sequence. In other words, your job is to examine the group of elements and recognize the governing pattern between them.


Source: “The Ultimate IQ Test Book” by Philip Carter & Ken Russell

The black dot in figure C is connected to 2 white dots, while black dots in all the others are connected to 3 white dots.

In matrix questions, you will be given a grid of items and must find the missing part based on the given patterns. Let’s try out a sample matrix question to test your abstract reasoning skills:


Divide the matrix into 4 squares. Opposite corner blocks of four squares are identical.

Aptitude test - Spatial reasoning test

Spatial reasoning tests measure candidates’ ability to understand and manipulate 2D or 3D objects by spotting patterns between those shapes. SHL, Kenexa, and Saville are common providers of this assessment, differing from each other in format and time conditions. Two types of questions in a spatial reasoning test are:

  • Mental folding 
  • Mental rotation 

Mental folding

Candidates are asked to unfold cubes to find their correct appearance on a transparent sheet. The folded paper must resemble exactly the sides and edges of the original 3D shape. Let’s examine the example below:


Mental rotation

Your job in mental rotation questions is to imagine how a 3D shape is viewed from another perspective and match the original object with the correct presentation of its new orientation. Here is an example of how this question type will be:


Aptitude test - Diagrammatic reasoning test

In the diagrammatic reasoning test, you have to work with diagrams and flowcharts to find the rules governing given operations and apply them to deduce a logical output. The average time limit is about 20 minutes. This test type is not as common as those above; however, some test publishers, such as Aon and Saville, do provide diagrammatic reasoning tests.


Source: Saville

The effect of “T” is changing the shading of all figures, so the color of the input has been altered but the sequence order remains the same. Therefore, the input must be in light color.

Aptitude test - Mechanical reasoning test

The mechanical reasoning test assesses your ability to apply mechanical knowledge to solve problems. This test is commonly designed for technical jobs, such as engineering or IT positions. Topics that often appear in mechanical reasoning tests include:

  • Material property
  • Fluid dynamics
  • Temperature and heat transfer
  • Pressure and sound energy 
  • Momentum and kinetic energy 

Let’s look at an example of what to expect in this test:


The end of the bar is attached to the handle. Therefore, they move in the same direction.

Aptitude test - Attention test

Attention tests determine whether a candidate can focus on textual details while processing information under time pressure. The tests are often used for roles that require detail-oriented skills, such as technical positions or accounting. Two common types of question found in an attention test are:

  • Error checking
  • Difference spotting

This test asks you to match given data to the correct option on the left side. This may sound easy, but the samples on the list are often near-identical and confusing. Let’s look at an example from SHL below:


Source: SHL

You must be familiar with the “Spot-the-Difference” game, where you must identify all the differences between 2 similar images. Difference spotting tests are designed with the same approach, requiring you to compare 2 near-identical photos and determine if they are different or the same. Here’s a sample question for you to try out:


Answer: Same

Aptitude test - Memory test

Memory tests assess the ability to memorize information in a certain period. This test type often comes in game-based tests, in which you have a few seconds to look at a picture or item, then recognize it afterward. Some tests are more taxing, like P&G's Grid Challenge, which requires you to simultaneously solve symmetry and rotation problems while keeping track of a dot's location after briefly viewing it. Let's try out this interesting gamified assessment:


Source: P&G

Then the second task shows up, typically a spatial awareness question. Can you still remember where the highlighted dot is located after this symmetry question?


Answer: No

Practice aptitude tests with MConsultingPrep

Aptitude tests are a tough ask for most candidates, as they measure cognitive abilities beyond any acquired industry knowledge. Also, online screening tests can be highly selective, with average cut-off rates of 60-80%. However, hard work and regular practice can improve the skills required for the tests.

To help you survive the most-used assessments (Numerical Reasoning, Verbal Reasoning, and Deductive Reasoning Test), MConsultingPrep provides a wide range of simulation exams with thorough explanations and tried-and-true study guides. With our practical training materials, you can master the needed skills for aptitude tests from any test publisher!

Frequently asked questions

What is usually on an aptitude test?

Several sections are used for most positions, such as numerical reasoning, verbal reasoning, or logical reasoning tests. However, specific roles can require other tests customized to their criteria: a spatial reasoning test, estimation test, or attention test.

What is the passing score for aptitude tests?

There is no fixed benchmark for aptitude tests as it depends on different companies. However, it is commonly acknowledged that you should achieve the 80th percentile, which means you outperform 80% of the candidates.

Note: A percentile rank compares an individual’s performance with that of other test takers. Specifically, it is the percentage of candidates whose performance is lower than your score.
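As a rough illustration of how a percentile rank is worked out (the cohort scores below are made up):

```python
scores = [12, 15, 18, 20, 22, 25, 27, 28, 30, 31]   # other candidates' scores (invented)
my_score = 28
percentile = 100 * sum(s < my_score for s in scores) / len(scores)
print(percentile)   # 70.0 -> you scored higher than 70% of this sample
```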

Can you fail aptitude tests? 

Yes, it is normal to fail aptitude tests when candidates have not acquired the skills needed for the job. Moreover, recruitment is often competitive; only a small percentage of test takers qualify. If you don’t reach the passing score, you can still reapply for the role, usually after six months.

Is the aptitude test oral or written?

Aptitude tests are often conducted on the hiring company’s application website. Online aptitude tests, though remotely administered, are strictly proctored and tightly timed.

Why are aptitude tests so hard?

Aptitude tests focus on candidates’ natural strengths, which cannot be acquired from industry knowledge. Moreover, online tests are often under a strict time limit and cut off a huge number of candidates, making them highly selective and challenging.

How long after aptitude test is the interview?

It takes approximately two weeks to one month to receive an announcement about the next round. Typically, the company will contact you through email with your result and the details of the interview stage.

How can I clear my aptitude test in one day?

We all know that mastering aptitude tests in one day is impossible, as they cover a wide range of test types measuring various skills and abilities. Therefore, it is crucial to prepare carefully, day by day, to master the skills required. However, MConsultingPrep can help you learn aptitude tests as fast as possible with our practical guidelines.



Logical Reasoning Tests

Practice tests, solutions, and tips to help you pass employers' logical reasoning tests.


Updated: 08 April 2024

What is a logical reasoning test?

A logical reasoning test is used to measure a candidate’s problem-solving ability. These tests assess the ability to come to conclusions based on logic. You are presented with a series of shapes and are required to find patterns and rules to help you find the correct answer. These tests may be encountered for any position at any level of recruitment, but they are particularly common when recruiting for positions which require significant problem-solving ability or a higher use of logic.

What is an example of logical reasoning?

Here are screenshots of our logical reasoning tests to understand what an example question involves:



How AssessmentDay can help with logical reasoning tests

AssessmentDay offers numerous types of logical reasoning test which can help you perform at your best in the real thing. Practising logical reasoning tests is an ideal method of preparation, as it allows you to learn from your mistakes and improve performance with every practice trial. Similarly, experiencing the time limits, the test layout and the overall test experience can help ease worries and anxieties about the test by familiarising yourself with them. It goes without saying that a candidate who has taken a logical reasoning test numerous times, seen their prior mistakes and learned from them will be less nervous than a first-time test candidate.

Logical Reasoning Video Tutorial - Part 1


Free practice logical reasoning test


This free shortened logical reasoning test contains 10 questions and has a time limit of 70 seconds per question.

Logical Reasoning Video Tutorial - Part 2

There are numerous types of logical reasoning test, and many of these are used interchangeably. These tests tend to be similar in their layout and methodology, but with subtle and important differences.

Survey results

We analysed a sample of logic-based tests to find the most common terms; the most popular type was inductive reasoning.

Here is a breakdown of the most common logical ability tests:

  • Inductive reasoning: - Inductive reasoning is the ability to reach a general conclusion based on perceived patterns observed in specific events. Inductive logic is often used in everyday life and is therefore practical in a workplace environment. In these tests candidates will be provided with a series of diagrams with an evident pattern. Candidates will need to identify the pattern in the sequence of diagrams and select the next diagram in the sequence.
  • Deductive reasoning: - Deductive reasoning involves a general rule or principle that leads to a specific conclusion. These tests will evaluate and measure a candidate's ability to make logical arguments and draw sound conclusions based on provided data, as well as identify flaws in a piece of information. As a result this is a useful tool in selection procedures, as this type of reasoning will be used in the workplace. This type of reasoning will often be used in verbal reasoning tests and numerical tests, and is therefore very likely to be encountered in recruitment processes.
  • Abstract reasoning: - Abstract reasoning, also known as conceptual reasoning, measures your lateral thinking ability. In these tests candidates will be tested on their ability to identify relationships, patterns and trends. Candidates will be provided with a series of images that follow a logical sequence or underlying rules. This may include following a rule in a sequence, identifying a code or finding a missing diagram.
  • Diagrammatic reasoning: - Diagrammatic reasoning is a specific form of abstract reasoning. Tests which assess this ability will typically show a flowchart of diagrams and symbols, with an input and an output. Candidates will need to identify which inputs affect diagrams, and therefore generate a specific output based on those rules.
  • Critical thinking: - Critical thinking tests are a type of verbal critical reasoning task which assesses various different types of logical reasoning in arguments, assumptions and conclusions. Typical logical abilities tested include analysing arguments, making inferences and evaluating conclusions.

The most common logical reasoning tests used by employers

Did you know?

Different test publishers use different names for their assessments. The term logical reasoning is used by TalentQ. Other companies may call their test abstract, inductive, or diagrammatic reasoning. It is good advice when being asked to sit a logical reasoning test to speak to the person who invited you and ask for a bit more detail; they may even give you a few example questions so you know what to expect.

Our 2020 study asked candidates about their logical reasoning test experience, in doing so we managed to find the most popular test publishers from our sample:

  • 1. Talent Q Elements Logical Ability - the important feature of these tests is that they are adaptive. That is to say the difficulty of each question is automatically determined by your performance in the previous question. So the questions become more difficult as you progress in order to quickly find your level of logical reasoning ability. There are typically 12 questions to these TalentQ logical tests and a time limit of 75 seconds per question.
  • 2. Kenexa Logical Reasoning - this test published by Kenexa is actually very similar in style to what SHL call an inductive reasoning test. They are effectively the same thing; the candidate is asked to select which diagram fits within the given series from a choice of five options. Typically Kenexa will give the candidate 20 minutes for 24 questions for their logical reasoning test.
  • 3. Raven's Progressive Matrices (Raven's APM / Raven's SPM) - The grid style of symbols each following a pattern is also used in the Raven's Progressive Matrices assessments. With Raven's logical tests, there are two levels of this test: Advanced Progressive Matrices (23 questions, 42 minutes) and Standard Progressive Matrices (28 questions, 47 minutes). Our logical tests are suitable for the Raven's APM-III and Raven's SPM tests; you can alter the time limit of our tests to create a more authentic experience.


General logical reasoning test advice

Although all tests evaluate a specific logical ability, or set of abilities, there are general strategies which can be applied to ensure maximum performance in a logical reasoning test.

Here is a list of useful tips and advice for logical reasoning tests:

  • 1. Stay calm: - Logical reasoning tests of all kinds can be nerve-racking, particularly ones which are time-limited. As a result, it is important to stay calm to allow optimum performance during your exam. A small amount of anxiety can be a performance booster, maximising focus and therefore performance. However, serious test anxiety can severely hamper performance. Proper practice, enough sleep the night before, and deep, regular breathing can all help settle your nerves and let you perform at your best on the day of your test.
  • 2. Research the type of test: - Learning as much as possible about the test beforehand can help you dive straight into the test once you have received it, saving you time. Similarly, researching the test and the logical abilities which it assesses can help you hone these skills and ensure you demonstrate the particular aptitude required for the test, optimising your performance.
  • 3. Clarify what type of test: - If an employer states that you will need to undertake a logical reasoning test, it is important to gauge what type of logical reasoning will be tested, due to the broad nature of logical reasoning. Don’t be afraid to ask for clarification to identify which logical reasoning test will be used, and which logical reasoning skill will be tested, as this information will be invaluable for your pre-test preparation.
  • 4. Figure out the answer first: - A general tip for logical reasoning tests is to figure out the correct answer/sequence/rule before looking at the multiple choices. This way once you have an idea in your head of the correct answer, you can simply pick it out. If you look at the multiple choice answers first, you will be more inclined to pick the answer which best looks like the correct answer, rather than take the time to evaluate it logically. Your logic will be subject to more bias if you base your answer on which answer seems correct on face value, instead of evaluating it using the logical skills being tested.
  • For more advice on logical reasoning tests, check out our logical reasoning tips where we go through an example question and give you advice on how to pass logical tests.

Yes, logical reasoning is a skill just like numerical reasoning which can be developed and practised. Some people will naturally be talented with logical reasoning and be able to solve logical puzzles much easier than others. Logical reasoning involves being able to solve logic puzzles and draw conclusions from patterns.

Logical reasoning is important for your ability to solve problems and generate creative ideas. It's this reason that many employers use logical reasoning tests in their application process.

The best way to practise logic skills is by using logical reasoning tests. These will provide the best practice as they directly involve all the skills needed in solving logic problems. You can also practise things like word puzzles or any kind of puzzle that requires you to identify patterns to find answers.


Practice Logical Reasoning Test Example Questions – 2024


One of the most popular, and perhaps most dreaded, types of psychometric test is the logical reasoning test. These screening questions won’t ask you for formulas or equations. You’ll have to rely solely on your own ingenuity to solve these problems.

You’ll need a great deal of concentration to succeed on a logic test. Logic tests are really designed to assess your intelligence. Similar to I.Q. tests in design, these aptitude assessments test your problem-solving skills, your critical thinking skills, and your creativity.

Below, we’ll explain a little bit more about the logic test questions you can expect on pre-employment exams and how you should approach them. We’ll also discuss some of our best tips for logic tests, so make sure to take notes! When you’re done, try your hand at our logical reasoning sample questions.

What Is a Logical Reasoning Test?

A logical reasoning test, as opposed to a numerical or verbal reasoning test, requires solely your reasoning ability. While you will have to know how to read, you won’t need to know any grammar, and you certainly won’t need to know how to multiply numbers.

Based on deductive and inductive reasoning, logical thinking questions will take one of two forms. Either you’ll be presented with a series of shapes and asked about the patterns they make, or you’ll be given a series of statements and asked to state what you know to be certain. We’ll go through both of these types of questions.

Why Do I Need to Take Logical Reasoning Tests?

Employers want to know, first and foremost, that you know how to analyze information and learn new skills quickly. These so-called “soft skills” are really far more important to a company than you might imagine, and they’re nearly impossible to really measure in an interview.

Logical questions help employers to see how well applicants recognize patterns, overcome adversity, and concentrate for extended periods of time. The skills you’ll need to pass a logical reasoning test are the same ones that will help you anticipate pitfalls, develop winning strategies, and start new initiatives.

Logical aptitude tests are designed, very simply, to test for intelligence. In fact, you’ll probably see a lot of the same questions on an I.Q. test. As it turns out, intelligence and success are very closely linked. The more intelligent someone is, the more quickly he learns and masters new skills, the better he remembers information told to him, and the more easily he overcomes problems.

How to Answer Logical Reasoning Questions:

Every logical reasoning question is different, and while you should be able to recognize patterns after a while, there are no shortcuts or one-size-fits-all responses. Here we have a few principles you should keep in mind. However, if you find that you’re still struggling with logic, then make sure to check out the free logic examples we have printed in our questions tab.

  • Identify a Major Pattern: Whenever dealing with diagrams, you’ll want to focus on patterns. The series or matrix will be assembled of various sequences, and it’s your job to figure out what they are. Once you’ve identified a major pattern, you’ll want to see if you can also identify a minor pattern. Typically, series and matrices use at least two different patterns.

  • Understand ‘And’, ‘Or’ and ‘If–Then’ Statements: For example, if Jenny’s coat is both long and blue, we can logically assume that any red or green coats we may find do not belong to Jenny. On the other hand, if Jenny’s coat is either long or blue, we have a different set of criteria.

Logic also makes use of ‘if–then’ statements. For example, “If Jenny buys a new coat, she’ll buy one that is long and blue.” In that case, we know that Jenny can only buy a long, blue coat if, in fact, she buys a new coat. If her brother buys a coat for her, she won’t have bought a long, blue coat. These facts may seem redundant if you’ve never studied logic before, but they become quite significant when programming computers, for instance (a short illustration follows below).
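As a rough, non-authoritative illustration, here is how the ‘and’ versus ‘or’ distinction plays out when filtering a small list of coats in Python. The coat data is invented purely for this example:

```python
# Hypothetical data, purely to illustrate "and" vs "or" conditions.
coats = [
    {"colour": "blue", "length": "long"},
    {"colour": "blue", "length": "short"},
    {"colour": "red", "length": "long"},
    {"colour": "green", "length": "short"},
]

# "Jenny's coat is both long and blue" -> only coats meeting BOTH criteria qualify.
both = [c for c in coats if c["colour"] == "blue" and c["length"] == "long"]

# "Jenny's coat is either long or blue" -> any coat meeting AT LEAST ONE criterion qualifies.
either = [c for c in coats if c["colour"] == "blue" or c["length"] == "long"]

print(len(both))    # 1 coat could be Jenny's under the "and" reading
print(len(either))  # 3 coats could be Jenny's under the "or" reading
```

Only one coat satisfies both conditions, while three satisfy at least one, which is exactly the difference the two wordings describe.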

Diagrammatic Abstract Reasoning

This non-verbal form of logical reasoning usually involves series or matrices made up of shapes or figures arranged in a certain pattern.

To solve these questions, you’re going to use inductive reasoning. Your goal as the job-seeker is to identify the pattern and complete the task. Here are the four different kinds of tasks you can expect on non-verbal logic test questions.

  • Series: In a series question, you’ll be shown 4-6 pictures and asked to choose the next figure in the series from several choices. You might also find that one of the figures in the middle of the series has been left out, and you’ll have to choose which picture best completes the pattern.
  • Matrices: Matrices are very similar to series except that they extend in two directions. While a series only goes from left to right, a matrix has patterns both horizontally and vertically. Not only will you have to make sure that the figure you choose completes the pattern in its row, but you’ll also have to check to see whether it agrees with the figures above and below it.
  • Odd One Out: Sometimes you’ll be given a set of figures and asked to identify the outlier. While the figures won’t be lined up in a series, they will have something in common. It will be your job to determine which characteristics are relevant and to group the pictures based on these similarities.
  • A/B Groups: In A/B grouping questions, you’ll be given two groups of figures and one figure on its own. You’ll have to decide why the figures were grouped the way they were. You’ll then have to place the single figure in one of the two groups.

Verbal Logical Reasoning

While diagrammatic questions require inductive reasoning, verbal questions call for deductive reasoning. In a verbal question, you’ll be given a series of statements, or premises, that are said to be true, and you’ll have to determine whether a conclusion necessarily follows from them.

  • Syllogisms: A conclusion is drawn from two or more premises. For example: “All men are mortal. Socrates is a man. Therefore, Socrates is mortal.”
  • If–Then Chains: Conditional statements can be linked together. For example: “If it rains, the school will cancel the picnic. If the school cancels the picnic, the children will watch a film instead. Therefore, if it rains, the children will watch a film.” (A quick mechanical check of this chain appears after this list.)
  • Either/Or: For example: “Either I will go swimming or hiking. I will go swimming. Therefore, I will not go hiking.”
  • Order: Other deductive questions will ask you to put a set of people or items in order based on certain descriptions. For instance, they might tell you that “Sam is not last,” or that “Jaimie is before Paul,” but it will be up to you to figure out exactly where they are in line.
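If you like to double-check chained “if… then…” reasoning, it can be verified mechanically. The sketch below is purely illustrative (it is not part of any real test): it enumerates every possible truth assignment for the rain/picnic/film statements above and confirms that the conclusion holds whenever both premises hold.

```python
from itertools import product

# Propositions: R = "it rains", C = "the picnic is cancelled", F = "the children watch a film".
def implies(p, q):
    return (not p) or q

valid = True
for R, C, F in product([False, True], repeat=3):
    premise1 = implies(R, C)    # If it rains, the picnic is cancelled.
    premise2 = implies(C, F)    # If the picnic is cancelled, the children watch a film.
    conclusion = implies(R, F)  # Therefore, if it rains, the children watch a film.
    if premise1 and premise2 and not conclusion:
        valid = False           # any such case would be a counterexample

print("Argument is valid:", valid)  # prints True: no counterexample exists
```

Because no assignment makes both premises true and the conclusion false, the argument is valid; the same brute-force idea works for any small set of statements.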

Logical Reasoning Test Tips:

Make sure you read our top tips for logical aptitude tests before heading out to the assessment center.

  • Write Everything Down: Logic questions are particularly tricky. Instead of trying to keep everything straight in your head, try to write down the details on a piece of paper. Diagrams can be especially helpful when recording important facts.

  • Don’t Confuse ‘Probably’ with ‘Certainly’: For example, if the grass is wet, we can assume it probably rained. Logically, though, we can’t state for certain that it rained if we have no proof. It could have been the gardener who left the sprinklers on overnight.

  • Focus on Truth Values: Make sure you know the difference between words like some, many, and all or words like sometimes, always, and never. These qualifying words can completely change the truth value of a statement.
  • Pay Attention to All Details: When completing diagrammatic tests, be very careful to pay attention to all relevant details. A pattern may be based on multiple dots and lines, and if you rush, you’ll miss subtle aspects of the pattern.

Final Thoughts on Logical Questioning:

While most of us study science and history in school, very few of us ever study formal logic. In fact, unless you went to graduate school for law, engineering, philosophy, or abstract mathematics, logic as a concept in and of itself is probably pretty foreign to you.

If this is the case, then don’t fret. Logic is, not coincidentally, fairly logical. As long as you’re familiar with some of the basic fundamentals, you shouldn’t have too much trouble. Work through the practice questions below, then read the answer explanations to see whether or not your reasoning was on track.

Free Logical Reasoning Practice Test

Practice4Me’s experts have designed an example test to familiarize you with the various question types and improve your chances of scoring high. This free test is a printable PDF file that includes questions and answers.

Download our free logical reasoning practice test PDF here.

Free Example Questions to Practice

Logical Reasoning Example Questions

Questions 4 and 5 deal with the following information:

Given the following premises, state whether the conclusions are true, false, or unknown:

All athletes are coaches, but not all coaches are athletes. All coaches live in Chicago. No students are athletes, but all students are coaches. Some teachers are both athletes and students. Some parents are teachers, but no parents are students or athletes.

Explained Answers:

  • B: Notice how the middle shape alternates between the three dots and the stripes. The figures on either side are in a three-way rotation with a circle, a bow, and a diamond.
  • C: Picture C is the odd picture out because it’s the only one in which the bars don’t dip down below the line.
  • C: Deanna—the order is: Clayton, Billy, Deanna, Annie, Elise

Free Logical Reasoning Test Practice Answer 4

  • B: All students are coaches, but as you can see in the picture, there may be many coaches who are not students. So, the answer is false.
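The subset relationship described in this explanation is easy to model with sets. The names below are invented purely for illustration; the point is only that “all students are coaches” can be true while the reverse claim is false:

```python
# Invented model consistent with "all students are coaches".
students = {"Ana", "Ben"}
coaches = {"Ana", "Ben", "Carla", "Dev"}  # extra coaches who are not students

all_students_are_coaches = students <= coaches  # subset test -> True
all_coaches_are_students = coaches <= students  # reverse claim -> False

print(all_students_are_coaches, all_coaches_are_students)  # True False
```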


Logical Reasoning Tests


Logical reasoning tests are a type of psychometric test used to measure your problem-solving skills. They come in various forms, but all have the underlying purpose of assessing your logical aptitude and your ability to draw conclusions from a given set of information.

What is a logical reasoning test?

A logical reasoning test is an assessment that measures your ability to interpret information, apply logic to solve problems and draw relevant conclusions. It is typically non-verbal and in a multiple-choice format, and requires the use of rules and deduction to reach answers, rather than prior knowledge.

That said, logical reasoning is actually an umbrella term for multiple types of assessment, and you may find you’re asked to take any one of the following five test types as part of a job application.

Deductive reasoning

Commonly presented as a series of word problems, deductive reasoning tests require you to apply top-down logic; that is, you must draw the right conclusion from a set of given premises.

Typically, you’ll be presented with a short paragraph, or stimulus, detailing an argument, scenario or a number of stated facts, and a set of possible answers. Only one of these answers can be true, based on the evidence provided.

You may also be given a conclusive statement and asked to decide if it is true or false, or if there’s insufficient information to conclude either way.

Inductive reasoning

Unlike deductive reasoning, inductive reasoning tests ask you to make general inferences – probable conclusions based on a set of information, rather than unquestionable outcomes.

This is most often done through the use of shapes, patterns, sequences and diagrams.

You’ll need to quickly identify relationships and rules, then apply these to find the most logical answer from the multiple-choice options. This could be identifying the odd one out, filling in the missing part of a pattern, or finding the next part of a sequence.

Diagrammatic reasoning

Similar to inductive reasoning, diagrammatic reasoning tests offer visual representations of a problem and require you to make logical connections to draw a conclusion.

Questions often take the form of a diagram with inputs and outputs, and you’ll be required to select which processes from a list of operators would achieve the documented effect.

You may also be presented with sets of abstract sequences, given a standalone visual, and asked to select which set it belongs to.

Abstract reasoning

Abstract reasoning tests are essentially inductive and/or diagrammatic reasoning tests under another name.

They too require you to find relationships and rules between visual sequences, then apply these to select the correct image from multiple options, be it a missing part or a continuation of the sequence in question.

Critical reasoning

Critical reasoning tests are more akin to deductive reasoning tests, in that you’ll be dealing with word-based scenarios, arguments, evidence and conclusions.

These tests tend to evaluate a range of skills. Argument analysis is common, in which a question is posed, and a yes/no answer given with a supporting statement. You’ll need to decide whether the statement is a strong or weak argument.

Other question types involve scenarios and statements from which you’ll be asked to make assumptions, deductions and inferences based on the evidence provided.

Critical reasoning tests are most commonly used in sectors where evidence-based judgement is an everyday requirement, such as law.

Why do employers use logical reasoning tests?

As with any form of psychometric assessment, employers use logical reasoning tests as a way to filter applicants, most commonly in the pre-interview stages of selection.

Logic forms a fundamental part of day-to-day decision making. Our reasoning capabilities determine how effectively we interpret the world around us, and how we use what we know to be fact to inform our choices. As such, logical reasoning is a vital part of many job functions.

In administering a logical reasoning test, employers are evaluating how well you’re likely to perform tasks like strategy development, risk assessment and forecasting, as well as general problem solving.

Additionally, the ability to quickly discern patterns, understand complex relationships, and make logical deductions underpins successful innovation and creative problem-solving in dynamic work environments. Thus, logical reasoning tests also serve as a method for assessing a candidate’s potential to contribute to innovative solutions and strategic thinking in their prospective role.

Common logical reasoning test publishers

Below are listed five of the most widely used publishers of logical reasoning tests, each of which has its own approach to this type of assessment.

SHL publishes and administers both inductive and deductive reasoning tests, the lengths of which vary depending on the level of role applied for. Typically though, they last no longer than 25 minutes and follow a standard format.

Kenexa’s logical reasoning test focuses on inductive or abstract reasoning, with candidates required to assess and manipulate shapes and sequences. It also has a deductive reasoning test, which it refers to as verbal reasoning.

Cut-e offers both inductive and deductive reasoning tests, with individual variations of each. The layout of Cut-e’s tests is known to be somewhat different to other publishers, so if you’re taking one be sure to practice specifically for this format.

Saville is one of the best-known publishers of psychometric and aptitude assessments, and its logical reasoning tests are widely used. They’re offered as either abstract or diagrammatic reasoning and have a time limit of around 20 to 25 minutes.

Logical reasoning tests from Talent Q are adaptive, which means the difficulty rating of a question is related to your performance on the question prior. Do well initially, and they’ll get harder. Struggle, and they’ll become a little easier.

How to prepare for logical reasoning tests

The best way to prepare for a logical reasoning test of any description is to train your brain to think more critically – and that means practice.

Try making puzzles a part of your daily routine or use brain-training apps in your downtime. If you’re preparing for a deductive or critical thinking test, take an analytical approach to reading the daily news. Instead of simply taking things at face value, ask yourself questions based on the evidence provided, and whether or not it’s enough to draw solid conclusions.

And make sure you take plenty of practice tests. This will help you understand how to answer logical reasoning tests, and will make you familiar with many of the common relationships found in abstract sequences, including orientation, shading, rotations and reflections.

If you’re struggling to identify relevant rules, work backwards from the answer. The better you understand where and how certain rules apply, the more picking them out will become second nature.

As you progress with your practice tests, start taking them under exam conditions, including setting yourself a time limit. Pacing is a key skill in logical reasoning tests, as your score will not only indicate how many correct answers you gave, but how long it took you to answer each question. By broadening your practice beyond traditional puzzles and tests, you foster a more adaptable and comprehensive critical thinking skill set, better reflecting the dynamic problem-solving required in many professional environments.

Lastly, be sure to practice the right type of test. Ask your prospective employer which of the five types of logical reasoning assessment you’ll be sitting, and if possible, which test provider they use. This will allow you to target your preparation to the specific test format you’ll face on assessment day.


Free example logical reasoning questions

Below you’ll find example questions for the different types of logical reasoning test. Answers to each are given below the set of questions.

For further practice, check out our free logical reasoning test questions and answers.

Deductive reasoning test

All footballers are fit and healthy.

All famous sports players are footballers.

Given that the above is true, which of the following is the logical deduction?

  • All footballers are famous sports people
  • All famous people are fit and healthy
  • All famous sports players are fit and healthy
  • All fit and healthy people are footballers
  • All football players are men

Inductive reasoning test

[Image: inductive reasoning practice question]

How many triangles will be in the 6th shape?

Diagrammatic reasoning test

[Image: diagrammatic reasoning practice question]

In the grid, one box is missing. You must work out what rules are being applied in the other boxes in order to work out which of boxes A to F will complete the grid.

Abstract reasoning test

[Image: abstract reasoning practice question]

Which of the boxes comes next in the sequence?

Using deductive reasoning, the only logical answer is 3. To get to this answer, you need to simplify the given facts. All famous sports players are footballers, and all footballers are fit and healthy.

  • We can’t deduce that all footballers are famous sports people, as we haven’t got that information.
  • We can’t deduce that all famous people are fit and healthy, because the fact is about famous sports people.
  • This is the logical answer.
  • This information is not given; all footballers are fit and healthy but we can’t logically link that all fit and healthy people are footballers.
  • This is obviously incorrect, as gender is not mentioned at all in the question.

The number of triangles is increasing by 2 as you move along the sequence. If you continue to add 2 until you reach the 6th shape, you reach 14, so the answer is C).

In the question the key rule is that the number of ‘star’ shapes in the central column must always equal the number of double circle shapes.

If there are no star shapes there should be no circle shapes. If there are three star shapes, there should be three circle shapes. Option F is the only one that abides by this rule.

Please note: shapes are not in a set position within this sequence. It is merely the presence of the shapes that is important. 1. There are always two squares in the frame. 2. There are always two circles in the frame. 3. There is always one triangle in the frame. So the answer is D).

Sample Logical Reasoning Test Questions: Test Your Knowledge!

Question 1

If all roses are flowers and some flowers fade quickly, which statement must be true?

  • All roses fade quickly.
  • Some roses fade quickly.
  • Some flowers are roses.
  • No roses are flowers.

When you press button A, light X turns on; when you press button B, light Y turns on. Button A has been pressed, and lights X and Y are both currently on. What is the next logical step?

  • Press button B to turn light X off.
  • Press button A to turn light Y off.
  • Press button A to turn light X off.
  • Press button B to turn light Y off.

Choose the statement that best reflects an understanding of the given premises: Premise 1: All managers are employees. Premise 2: Some employees are interns.

  • All managers are interns.
  • Some managers are not employees.
  • Some interns are not managers.
  • No interns are managers.

On a team of four people, two people can write code and three can design UI. If one person has all these skills, how many people only have one of the skills?

In a new brand of cars, Model X has better mileage than Model Y. Model Z has worse mileage than Model Y but is cheaper than Model X. Which of the following statements is correct based on this information?

  • Model Z is the cheapest and has the best mileage.
  • Model X is cheaper than Model Y.
  • Model X has better mileage than Model Z.
  • Model Y is cheaper than both Model X and Model Z.
  • Model Y has the worst mileage.


Logical Reasoning Tests Tips

1 Read each question carefully

It’s vital you understand exactly what is being asked of you, so be sure to read every question thoroughly. There may well be distractors in the multiple-choice options; picking one of these because you’ve misinterpreted the question is a common error.

2 Analyse the stimulus

In deductive or critical reasoning tests, it’s important to fully digest the stimulus before drawing your conclusion. Again, a simple misunderstanding can be the difference between scoring or missing out on a mark, so make sure you’re aware of all the evidence presented to you.

3 Work out your answer before looking at the options

When working with abstract sequences or patterns, try to get an idea in your head of what the missing piece or next part of the sequence is likely to be, before you look at the multiple-choice options. This will help you zone in on the right response, rather than get distracted by irrelevant choices.

4 Make notes

There may be several relationships in any given sequence, and in diagrammatic reasoning tests you’ll need to be aware of multiple processes. Make notes as you go through to keep track of your thought process. It will help you to work methodically and avoid confusion.

5 Pay attention to pacing

You only have a set amount of time to work through all the questions, so be sure to pace yourself. Typically, problems become more complex as the test progresses, so aim to spend less time on questions at the start. Good pacing takes practice. You want to work quickly but not to the detriment of your accuracy.

6 Don't panic

Logical reasoning tests can be a little daunting if you’re not used to them, but remember: we apply logic every day without even realising it. Stay calm and remind yourself that the steps you need to take are familiar to you; it’s just that the problem you’re solving is presented in an unfamiliar way.


Logical Reasoning Tests FAQs

How are logical reasoning tests scored?

Logical reasoning tests are scored comparatively. That is to say, you’ll receive one mark for each correct answer, and your total score will be compared to the average results of other test-takers. Different employers may assess your results in different ways. Some will look only at your raw score against an average benchmark, while others may also consider your pace.
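As a rough sketch of what “scored comparatively” can mean in practice, the snippet below converts a raw score into a percentile rank against a small, made-up benchmark sample. Real test publishers use much larger norm groups and their own formulas; the numbers here are invented purely for illustration.

```python
# Benchmark scores are invented for illustration only.
benchmark_scores = [9, 11, 12, 12, 13, 14, 14, 15, 16, 18]  # other test-takers' raw scores
candidate_score = 15

below = sum(1 for s in benchmark_scores if s < candidate_score)
equal = sum(1 for s in benchmark_scores if s == candidate_score)

# Standard percentile-rank formula: (below + 0.5 * equal) / n
percentile = 100 * (below + 0.5 * equal) / len(benchmark_scores)
print(f"A raw score of {candidate_score} sits at roughly the {percentile:.0f}th percentile of this sample")
```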

What are logical reasoning tests used for?

No matter the type of logical reasoning test used, you’re being assessed on your problem-solving and critical thinking skills. Employers are trying to determine if you have the required ability to interpret information, identify patterns and relationships, and draw solid conclusions. These are skills used on a daily basis in many job roles, so logical reasoning tests are widely used.

How is logical thinking measured?

Logical reasoning tests give a good indication of your lateral thinking skills by measuring your ability to analyse and interpret information to make evidence-based decisions – be they inferences, assumptions or unquestionable conclusions.

Why is logical reasoning important?

Logical reasoning is important in work-based environments because it is this skill set that allows you to work through many everyday business problems and come to the right resolution. Logical thinkers make decisions based on what they know to be true, rather than gut feeling; set achievable goals based on past performance; and approach complex problems in a systematic manner.

Where can I practice logical reasoning tests?

You can find practice tests for all types of logical reasoning assessments on our website, along with detailed answer explanations and guides. You can also find practice tests online from individual publishers which will help you get to grips with specific formats and time constraints.

Which employers use logical reasoning tests?

Logical reasoning tests are commonly used for managerial-level roles and above in many corporate job sectors, including law, investment banking and consultancy, as well as human resources, customer service and market research. It’s also likely you’ll be required to sit some form of logical reasoning test for acceptance onto a graduate scheme with many larger employers.


What Is an Aptitude Test?

How these tests help determine if you might excel at school or work

What Does an Aptitude Test Do?


An aptitude test is designed to assess what a person is capable of doing or to predict what a person is able to learn or do given the right education and instruction. The goal of an aptitude test is to predict the ability to learn new skills. It represents a person's level of competency to perform a certain type of task.

At a Glance

An aptitude test can help determine your individual ability in a certain area to help predict whether you are likely to succeed. Such tools can help assess student strengths or evaluate a potential job candidate's strengths and weaknesses.

Such tests aren't designed to see how intelligent you are; instead, they assess specific skills or inclinations. Keep reading to learn more about how aptitude tests work, the types of tests you might encounter, and how they differ from intelligence tests.

Aptitude tests are often used to assess academic potential or career suitability and may be used to assess mental or physical talent in a variety of domains. They are based on the idea that people have innate abilities and inclinations that predispose them to succeeding (or struggling) in specific areas.

So, is an intelligence test an example of an aptitude test? Not exactly. The two tests are similar but distinct in what they measure. An intelligence test measures your current cognitive skills; on the other hand, an aptitude test measures whether you have the potential to develop skills in specific areas in the future.

An aptitude test evaluates your potential to succeed in a certain area by looking at your strengths and weaknesses in particular abilities. It doesn't look at how intelligent you are—it looks at how well you might do on a specific job, task, skill, or subject. That's why you often see aptitude tests used to screen job candidates.

Aptitude Test Examples

People encounter a variety of aptitude tests throughout their personal and professional lives, often starting while they are children going to school.

Here are a few examples of common aptitude tests:

  • An aptitude test assessing an individual's potential for becoming a fighter pilot
  • A career test evaluating a person's capability to work as an air traffic controller
  • An aptitude test given to high school students to determine which type of careers they might be good at
  • A computer programming test to determine how a job candidate might solve different hypothetical problems 
  • An aptitude test designed to assess a person's physical abilities needed for a particular job, such as a police officer or firefighter

Some situations where you might encounter aptitude tests include those given in school or work settings. Some examples of how these might be used include:

Aptitude Tests in School

Students encounter a variety of aptitude tests throughout school as they think about what they might like to study in college or do as a career.

For example, a student might take an aptitude test suggesting they are good with numbers and data. Such results might imply that a career as an accountant, banker, or stockbroker would be a good choice for that student.

Another student might find that they have strong language and verbal skills , suggesting that a career as an English teacher, writer, or journalist might be a good choice.

Researchers suggest that standardized academic aptitude tests predict a variety of important life outcomes.

Special Aptitude Tests

Special aptitude tests are designed to look at an individual's capacity in a particular area. For example, a business that is looking to hire a computer programmer will consider a candidate's work history and interview performance, but they might also want to administer an aptitude test to determine if a person possesses the necessary skill to perform the job.

In this case, the special aptitude test is designed to look at a very narrow range of ability: how skilled and knowledgeable the candidate is at computer programming.

Multiple Aptitude Tests

Multiple aptitude tests are designed to measure two or more different abilities. In some cases, such tests may even resemble intelligence tests in terms of their focus and scope. The Scholastic Assessment Test (SAT) that high school students take during their senior year is a good example of a multiple aptitude test.

The SAT measures aptitudes in areas including math, reasoning, and language and is often used by colleges and universities to determine if an applicant is prepared and has the ability to do well in college.

The Graduate Record Examination (GRE), as well as the specialized tests required to get into medical (MCAT), law (LSAT), and business (GMAT) graduate programs, are also examples of multiple aptitude tests.

Types of Aptitude Tests

Aptitude tests fall into different categories or types. Different types are given in settings such as schools or workplaces.

Some of the common types of aptitude tests include those focused on specific skills, such as:

  • Verbal reasoning: The capacity for language skills and reading comprehension
  • Inductive reasoning: The ability to infer general ideas and principles based on specific observations
  • Logical reasoning: The ability to use logic to solve problems, spot patterns, and identify relationships
  • Numerical reasoning: Assesses overall numerical aptitude and ability to reason with and manipulate numerical data
  • Abstract reasoning: The capacity for taking general, abstract information and using that information to solve problems and identify patterns
  • Creative and artistic aptitude: The ability to come up with novel ideas and exhibit artistic potential
  • Spatial reasoning: The ability to understand relationships between two- and three-dimensional objects, patterns, and shapes
  • Situational judgment: The capacity for solving problems and making decisions quickly and accurately
  • Mechanical aptitude: The ability to understand mechanical concepts and technical information
  • Error checking: Looks at how well you are able to spot mistakes and errors

So what exactly makes an aptitude test different from an intelligence test? Below, we compare the differences.

Intelligence tests:

  • Measure general intelligence (the capacity to perform in all areas)
  • Are used by schools
  • Take the ages of test-takers into consideration

Aptitude tests:

  • Measure a narrower range of abilities than intelligence tests do
  • Are used by schools and workplaces
  • Don't take the ages of test-takers into consideration

Intelligence encompasses many different abilities including problem-solving , reasoning, memory, knowledge, and the ability to adapt to a changing environment.

Aptitude tests, on the other hand, are designed to measure a much narrower range of abilities than intelligence or IQ tests do. However, some aptitude tests might have a very narrow focus that limits what they are able to predict. Other tests that look at multiple domains are much more similar to intelligence tests.

Similar to intelligence and aptitude tests are achievement tests, which measure a person's knowledge and skill level in a particular area. Achievement tests tend to focus on what a person has learned as a result of formal learning or training.

So, what are the differences among intelligence, achievement, and aptitude tests? Viewing the three types of tests in terms of a timeline may help you differentiate them:

  • Past : Achievement tests measure what you've already learned or accomplished.
  • Present : Intelligence tests measure the innate cognitive ability you have right now.
  • Future : Aptitude tests aim to uncover where you can potentially apply your skills in the future.

Aptitude tests can help you get an idea of what you are good at or what you might be good at given the right training. However, these tests cannot tell you everything. Consider your results carefully and evaluate other factors, such as your interests and experiences. Then, use this information to explore career options.

A poor score on an aptitude test doesn't mean that you don't have the potential to eventually succeed or even excel in that area, just as a high score on an aptitude test isn't a guarantee of success. Instead, think of this information as a guide for where you might need extra work. With the right support, training, and effort, you can play to your strengths and overcome potential weaknesses.



Aptitude Questions and Answers

Aptitude questions can be challenging, but with the right preparation and practice, you can tackle them with ease. Our comprehensive guide to aptitude questions and answers covers all the essential topics of aptitude, including Quantitative Aptitude, Logical Reasoning, and Verbal Ability. Whether you’re a student preparing for an examination or a jobseeker looking to improve your problem-solving skills, our step-by-step guide and sample questions will give you the confidence to tackle aptitude questions in interviews and competitive exams.

Aptitude: Quantitative Aptitude Topics

Quantitative aptitude covers a wide range of topics and questions, including:

  • Numbers
  • LCM and HCF
  • Work and Wages
  • Pipes and Cisterns
  • Time, Speed and Distance
  • Trains, Boats, and Streams
  • Percentage
  • Ratio, Proportion, and Partnership
  • Mixture and Alligation
  • Algebra
  • Average
  • Age
  • Profit and Loss
  • Simple Interest
  • Compound Interest
  • Mensuration 2D
  • Mensuration 3D
  • Trigonometry & Height and Distances
  • Progressions
  • Logarithms
  • Permutation and Combination
  • Probability
  • Geometry
  • Clocks
  • Calendars
  • Coding-Decoding
  • Race
  • Simplification and Approximation
  • Data Interpretation

     

Aptitude: Logical Reasoning Topics

Logical Reasoning covers a wide range of topics and questions, including:

  • Number Series
  • Letter and Symbol Series
  • Verbal Classification
  • Essential Part
  • Artificial Language
  • Matching Definitions
  • Making Judgments
  • Logical Problems
  • Logical Games
  • Analyzing Arguments
  • Course of Action
  • Statement and Conclusion
  • Theme Detection
  • Cause and Effect
  • Statement and Argument
  • Logical Deduction
  • Letter Series
  • Verification of the Truth of the Statement
  • Coding Decoding
  • Assertion and Reason
  • Statement and Assumptions
  • Logical Venn Diagram

Aptitude: Verbal Ability Topics

Verbal Ability covers a wide range of topics and questions, including:

  • Spotting Errors
  • Selecting Words
  • Sentence Formation
  • Ordering of Words
  • Sentence Correction
  • Sentence Improvement
  • Completing Statements
  • Ordering of Sentences
  • Paragraph Formation
  • Cloze Test
  • Comprehension
  • One Word Substitutes
  • Idioms and Phrases
  • Change of Voice
  • Change of Speech
  • Verbal Analogies
  • Preposition

FAQs on Aptitude

Q1: What is aptitude?

The natural ability or potential of a person to learn or perform a specific task or skill is referred to as aptitude. It is often used to describe a person’s inherent talent or capacity in a particular area, such as language or music.

Q2: How can I improve my aptitude skills?

There are several ways to improve your aptitude skills, including practicing with sample questions and tests, and seeking feedback and guidance from experts or mentors.

Q3: What are aptitude tests used for?

Aptitude tests are used to assess a person’s potential in a particular field or to help identify areas in which a person may excel. They are often used in academic settings, such as college admissions or scholarship applications, and in professional settings, such as job interviews and career assessments.


Aptitude Test


About the test

The Aptitude test assesses the ability to use reason to solve problems which involve rigorous and methodical thinking skills.

The assessment includes work-sample tasks such as:

  • Understanding numerical data in order to calculate accurate answers.
  • Analyzing patterns in information and evidence to arrive at correct conclusions.
  • Evaluating language to summarize information and make the right decisions.

Good analysts, managers, and developers all need to be adept with these analytical, problem-solving, and communication skills.

Sample public questions

[Image: a grid of figures]

Select the tile below that should be placed in the unknown tile above so that all three rows above follow the same pattern.

[Image: the possible answers]

Billy never slows down or stops painting no matter how tired he is. In fact, it would take Billy only 4 hours to paint a fence by himself. It would take Suzy 6 hours to paint the same fence by herself.

On Friday, Billy and Suzy worked together to paint the fence and Billy got tired after 2 hours.

How much time did it take to paint the whole fence?
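One natural reading of this question is that the opening sentence is the key detail: Billy never slows down or stops, so his getting tired changes nothing and both painters work until the fence is finished. Under that assumption, the arithmetic is a standard work-rate calculation, sketched below purely as a worked check:

```python
from fractions import Fraction

billy_rate = Fraction(1, 4)  # Billy paints 1/4 of the fence per hour
suzy_rate = Fraction(1, 6)   # Suzy paints 1/6 of the fence per hour

# Billy "never slows down or stops", so both paint together for the whole job.
combined_rate = billy_rate + suzy_rate     # 5/12 of the fence per hour
total_hours = Fraction(1) / combined_rate  # 12/5 hours

print(total_hours, float(total_hours))     # 12/5 hours = 2.4 hours (2 hours 24 minutes)
```

If you instead assumed Billy stopped after two hours, you would get a different answer, which is exactly why careful reading matters on questions like this.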

You are having a discussion with your friend about the apps you both use.

Every app your friend uses, you also use. Spreadsheet is the app you use the most. You don't use the Calculator app at all.

With regard to what’s written above, select which of the following statements are true.

Cats are common pets throughout the world, and their worldwide population exceeds 500 million. Cats are the second most popular pet in the U.S. by number of pets owned, behind freshwater fish. Although cat ownership has commonly been associated with women, research has shown that men and women in the U.S. are equally likely to own a cat. They are ranked as the third most popular pet in the U.K. by number of pets owned, after fish and dogs, with 8 million being owned.

What can be concluded from the text above?

The charts below show the number of cars John sold last year and the profit per car he made.

A chart showing the number of cars sold and the profit per car.

How much did John earn last year?

A chart showing the number of cars sold and the profit per car.

The charts above show the number of cars that John sold last year, and the profit per car. In which quarter did John make the highest average profit per car sold?

In which year did the company make the largest profit per employee?

A chart showing the profit per employee.

In college, Jill taught Sam different math theorems.

After leaving college, Jill forgot more math theorems than Sam learned from her.

Based only on the above, which of the statements below must be true?

In the summer, Agathe wore an ill-fitting sun hat and oversized sunglasses to the beach in Greece. Both slipped off when she fell asleep in her swimsuit on the beach at 2 p.m.

What will most likely happen to Agathe?

John has a bag of apples, and two brothers, Jack and Jim.

The image below shows the number of apples that John has eaten and put in the family's compost barrel.

[Image: apples eaten and apples in the compost barrel]

John then splits the remainder equally between himself and each one of his brothers. Jack and Jim both eat half the apples they have been given. Together the three brothers have 4 apples left.

How many apples did John start with?

When Spring began, a bottled water startup launched its product. When Summer began, it started a 6 month marketing campaign to increase sales. Sales rose and stayed high but, when Summer ended, they fell and stayed low. When Winter began, the startup failed and went out of business.

[Image: a grid of circles]

In an upcoming referendum, voters will be asked whether the minimum wage should be increased. Telephone polls of likely voters were conducted to predict what the result might be. All respondents were put into at least one of six categories based on profession and age: scientist, lawyer, hourly wage earner, small business owner, younger voter, older voter. The opinion polls showed: A majority of voters are in favor of keeping the current minimum wage. A majority of scientists and lawyers supported a rise in the minimum wage. Younger voters are more inclined to support a rise in the minimum wage. Older voters tend to support no change. Hourly wage earners overwhelmingly supported a rise in the minimum wage. Small business owners were evenly split on the subject.

Select all the statements that can be concluded from the above text:

After saving for years, John's parents bought him his first car for his birthday. He was so happy that they decided to keep buying him cars on future birthdays that came after the same interval of years.

Today they bought him yet another car—38 years after the first.

Here are the birthday cars.

[Image: the birthday cars]

How old was John when he got his first car from his parents?

[Image: a grid of circles]

Germany's economy avoided falling into recession during the final three months of last year. This means that it avoided two consecutive quarters of negative GDP growth, which is the definition of a recession. This was a small, but positive, surprise for all analysts who, after the July-to-September period that featured a 0.3% decline, predicted the continuation of this negative trend. Reasons for slower growth last year include a slowdown in the global economy and a weaker car sector, with some German consumers less willing to buy new cars amid confusion over new emission standards. Joe Johnson, senior financial analyst, told the BBC that US tariffs on EU car exports, which US President Donald Trump has threatened, could have a major impact on Germany. He thinks that, if this happens, Germany might fall into recession.

An eagerly awaited new album has been leaked several hours before its official release. Listeners are now able to download the album for free.

How will this affect album sales?

Dan put a large bet on a horse. However, a day before the race, the horse was injured.

Select which of the following statements are true:

The company's sick leave policy says:

This company’s sick leave policy applies to all our employees who have been with our company for more than six months. Our employees can take sick leave only when they want to recover from a sudden illness, accident, or injury. They can use up to 10 days of sick leave for these purposes per calendar year. Upon completion of each 12 month period of employment, employees will receive 3 additional days of sick leave for every completed 12 month period of working for the company. Unspent additional sick leave days cannot be transferred to the next 12 month period. For example, an employee who has worked for the company for 10 years would receive 30 additional days of sick leave, or 40 in total. Keep in mind that employees who become sick should either use their sick days or work from home to avoid spreading illnesses.

Fill in the blanks, with numbers, for the cases below. Enter the number 0 for cases when an employee doesn't have the right to sick leave, according to the policy.

  • Emily, who has worked for us for almost half a year and still hasn't used any sick leave days, has the right to use up to __ sick leave days this year.
  • Faith, who just started her 3rd year of working for us and has not taken sick leave until now, had a car accident. She has the right to use up to __ days of sick leave to recover from her injuries.
  • Vanessa, who started to work for us a little over a year ago, took 2 days of sick leave last year. She was Faith's passenger in the car accident. Vanessa has the right to use up to __ days of sick leave to recover from her injuries.
  • Edith, who began working for us 8 months ago, has the right to use up to __ sick leave days for a volunteer adult care program at a local hospital that lasts the whole day.
  • If Gabrielle, who has worked for us for 20 months, and who took a 10-day sick leave immediately after her 6-month trial period had finished, gets ill tomorrow, she will have the right to use up to __  days of sick leave.
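Questions like this reward applying the stated rules consistently. The function below encodes one possible reading of the policy: the six-month threshold, the covered reasons, and the 3-days-per-completed-year accrual come from the text above, while treating exactly six months as not yet eligible is an assumption. It is an illustrative sketch, not the official marking scheme.

```python
def sick_leave_days(months_of_service, reason):
    """One possible encoding of the policy above (illustrative, not authoritative)."""
    if months_of_service <= 6:
        return 0                               # policy covers employees with more than six months' service
    if reason not in {"sudden illness", "accident", "injury"}:
        return 0                               # sick leave is only for recovery from these reasons
    completed_years = months_of_service // 12  # each completed 12-month period adds 3 days
    return 10 + 3 * completed_years            # e.g. 10 years of service -> 10 + 30 = 40 days

print(sick_leave_days(5, "injury"))            # 0  (not yet covered by the policy)
print(sick_leave_days(24, "injury"))           # 16 (10 base days + 3 x 2 completed years)
print(sick_leave_days(8, "volunteering"))      # 0  (not a covered reason)
```

The 10-year example given in the policy itself (10 + 30 = 40 days) is a useful check that the accrual rule has been encoded correctly.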

It has rained continuously for 15 days. Tomorrow, an important football match is being held in an outdoor stadium.

Which product line saw the largest absolute increase in income (dollar amount, not percentage) in the second half of the year compared to the first half of the year?

Product line | Q1 | Q2 | Q3 | Q4
C# | $26,000 | $27,000 | $33,000 | $15,000
JavaScript | $20,000 | $25,000 | $30,000 | $18,000
HTML/CSS | $1,000 | $5,000 | $7,000 | $1,000
PHP | $12,000 | $11,000 | $14,000 | $13,000
Ruby | $4,000 | $4,000 | $5,000 | $6,000
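Since this question only asks you to compare half-year totals, the arithmetic can be checked in a few lines of Python using the figures from the table above (this is just a worked check, not part of the original question):

```python
quarterly_income = {
    "C#":         [26_000, 27_000, 33_000, 15_000],
    "JavaScript": [20_000, 25_000, 30_000, 18_000],
    "HTML/CSS":   [1_000, 5_000, 7_000, 1_000],
    "PHP":        [12_000, 11_000, 14_000, 13_000],
    "Ruby":       [4_000, 4_000, 5_000, 6_000],
}

# Absolute increase = (Q3 + Q4) - (Q1 + Q2) for each product line.
increase = {line: (q[2] + q[3]) - (q[0] + q[1]) for line, q in quarterly_income.items()}
print(max(increase, key=increase.get), increase)  # PHP shows the largest absolute increase (+$4,000)
```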

Five types of tires were tested in three different driving conditions.

Consider the following table of their test scores:

Tire Type | Dry | Wet | Snow
Desert | 10 | 4 | 1
Beach | 5 | 8 | 5
Mountain | 6 | 9 | 6
Swamp | 7 | 10 | 4
Jungle | 7 | 5 | 6

If a tire type scored 4 or less in any category, it failed the test.

What is the highest average score of the tire types that passed the test?
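The same kind of quick check works here: filter out any tire type with a score of 4 or less, then average the rest. Again, this is only a worked check of the values in the table above:

```python
scores = {
    "Desert":   [10, 4, 1],
    "Beach":    [5, 8, 5],
    "Mountain": [6, 9, 6],
    "Swamp":    [7, 10, 4],
    "Jungle":   [7, 5, 6],
}

# A tire type fails if it scored 4 or less in any category; average the rest.
passed = {tire: sum(s) / len(s) for tire, s in scores.items() if min(s) > 4}
print(passed)                # {'Beach': 6.0, 'Mountain': 7.0, 'Jungle': 6.0}
print(max(passed.values()))  # 7.0 is the highest average among the tire types that passed
```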

Carefully read the following excerpt from an article on paper production:

With the recovery rate of used paper for recycling approaching 70 percent in the United States and Europe, and approaching 80 percent in Japan, to keep up with our demand for paper we need to continue using fresh fiber as well as recycled, according to the premise set forward in a new report by the World Business Council for Sustainable Development (WBCSD). Detailing the reasons why fresh fiber combined with recycled is important for a single integrated wood fiber system, the report examines the “complementarity” of using both and discusses the functions of different types of fibers and the issues related to both recycling old fiber and sourcing fresh fiber. In 2012, 400 million tons of paper and paperboard were produced and consumed globally, which is double that in 1985, notes the report. As the population continues to expand and standards of living increase, this number is expected to climb by another 40 percent by 2028. While many would advocate for cutting down on paper use in the first place, in the face of global demand the most sustainable fiber may have to be the next best thing. But therein lies the rub: finding adequate quantities of said fiber will be the challenge of the global pulp and paper industry.

Which statement most effectively summarizes the text?

An author writes an argumentative essay to persuade readers to agree with a claim about a topic. When writing an argumentative essay, it's important to establish credibility with readers to convince them that the author is trustworthy. True statements, accurate evidence, and clear logic increase an author's credibility. However, false statements, inaccurate evidence, and unclear logic make an author less credible. With lower credibility, an author is less likely to persuade readers to agree with a claim, even when it is trustworthy.

Select the statement that most effectively summarizes the above text:

A postcard and a stamp together cost $1.50. The postcard costs one dollar more than the stamp.

How much does the stamp cost?
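This is the classic trap question: the intuitive answer of $0.50 ignores that the $1.00 difference has to come out of the total first. Setting it up as a simple equation, stamp + (stamp + 1.00) = 1.50, avoids the trap; the snippet below just evaluates that algebra:

```python
# stamp + postcard = 1.50 and postcard = stamp + 1.00
# => stamp + (stamp + 1.00) = 1.50  =>  2 * stamp = 0.50  =>  stamp = 0.25
total, difference = 1.50, 1.00
stamp = (total - difference) / 2
print(stamp, stamp + difference)  # 0.25 and 1.25
```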

A brother and sister own equal parts in a company. The minority shareholders have the remaining 12,000 shares or 30% of the company.

What is the number of shares that the sister owns?
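This one is a two-step percentage calculation: 12,000 shares represent 30% of the company, and the two siblings split the remaining 70% equally. A quick worked check:

```python
minority_shares = 12_000                          # these 12,000 shares are 30% of the company
total_shares = minority_shares * 100 // 30        # 40,000 shares in total
sibling_shares = total_shares - minority_shares   # 28,000 shares (70%) held by the two siblings
sister_shares = sibling_shares // 2               # split equally with her brother
print(total_shares, sister_shares)                # 40000 and 14000
```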


Skills and topics tested

  • Verbal Reasoning
  • Inductive Reasoning
  • Logical Reasoning
  • Deductive Reasoning
  • Fallacy of the Undistributed Middle
  • Fallacy of Exclusive Premises
  • Fallacy of Division
  • Fallacy of Composition
  • Gambler's Fallacy
  • Affirming a Disjunct
  • Masked-Man Fallacy
  • Numerical Reasoning
  • Table Lookup
  • Divide and Conquer
  • Linear Equations
  • Understanding Rules
  • Working with Time
  • Argument from Fallacy
  • Affirming the Consequent
  • Double Counting
  • Illicit Major
  • Attribute Substitution
  • Percentages
  • Conjunction Fallacy
  • Critical Thinking
  • Most Effective Summary
  • True Statement
  • Existential Fallacy
  • Modal Fallacy
  • Newspaper Excerpts
  • Abstract Reasoning
  • Numeric Representation
  • Three Horizontal Rows
  • Rotating Elements
  • Duplicate Elements
  • Correlation and Causation
  • Arithmetic Mean
  • Chart Lookup
  • Exclusive Elements
  • Venn Diagram
  • Additive Elements
  • Circular Reasoning
  • Dependent Events
  • Probability

For job roles

  • Administrative Assistant
  • Call Center Agent
  • Content Writer
  • Customer Support
  • Data Analyst
  • Financial Analyst
  • Financial Manager
  • Project Manager
  • Sales Manager
  • Software Developer


What is a Cognitive Test?


Updated July 16, 2024

Edward Melett

A cognitive test is an assessment tool designed to measure an individual's cognitive abilities, which are the mental processes involved in acquiring, processing, storing and using information.

Cognitive assessments are used to evaluate various aspects of cognitive functioning, including memory, attention, problem-solving, reasoning, language comprehension, and more.

Cognitive function tests are commonly employed in several contexts, including education, clinical psychology, neuropsychology and employment assessment.

This cognitive ability practice test has been designed to help you prepare for the real thing.  


The test consists of a set of 10 questions, along with correct answers and full explanations.

What are the Topics Covered in a Cognitive Functions Test?

Verbal Reasoning

A verbal reasoning test is a type of cognitive assessment designed to evaluate an individual's ability to understand and analyze written information, as well as to draw logical conclusions and make inferences based on that information.

These tests assess reading comprehension, critical thinking, inference and deduction, vocabulary and language skills, and textual analysis.

Numerical Reasoning

A numerical reasoning test is a type of cognitive assessment designed to evaluate an individual's ability to work with numerical information, perform mathematical operations and make logical deductions based on numerical data.

These tests assess mathematical problem solving, data interpretation, critical thinking, and numerical literacy.

Logical Reasoning

A logical reasoning test, also known as a logical aptitude test or logical thinking test, is a type of cognitive assessment designed to evaluate an individual's ability to think logically, critically analyze information and make deductions based on structured patterns and rules.

These tests assess pattern recognition, critical thinking, and deductive and inductive reasoning.

Figural Reasoning

A figural reasoning test, also known as a non-verbal reasoning test, is a type of cognitive assessment that evaluates an individual's ability to analyze and solve problems using visual or abstract patterns and shapes, rather than relying on language or numbers.

These tests assess the ability to work with visual patterns and shapes, along with pattern recognition, spatial skills, and critical thinking.

If you would like further practice at the end of the test, you can find more tests like this cognitive ability test at JobTestPrep.



How can I test my cognitive ability?

You can test your cognitive ability through various cognitive assessments and tests that are designed to measure different aspects of cognitive functioning. These tests can be administered by educational institutions, employers or qualified professionals.

To get an idea of your cognitive abilities, you can also explore online cognitive tests and brain training apps, although these may not provide as accurate or comprehensive results as professionally administered tests.

How to prepare for the cognitive ability assessment?

While cognitive ability assessments are designed to measure innate abilities and skills, there are some general strategies you can use to prepare:

  • Get enough rest and sleep before the assessment.
  • Practice with sample questions and familiarize yourself with the test format if possible.
  • Manage your stress and anxiety through relaxation techniques.
  • Follow any specific instructions or guidelines provided by the test administrator.
  • Be sure to arrive on time for the assessment and stay focused.

How long does a cognitive ability test take?

The duration of a cognitive ability test can vary widely depending on the specific test and its complexity. Some tests may take as little as 15-20 minutes, while others, especially comprehensive assessments, may take several hours. The length of the test is typically determined by the number and types of questions included.

How is a cognitive ability assessment scored?

Cognitive ability assessments are typically scored based on the number of correct answers. Some tests may also consider the time taken to complete each section or question, and in such cases, speed and accuracy are both important factors. Scores may be compared to a normative group to determine how an individual's performance compares to the average or to establish percentiles.
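To make the percentile idea concrete, here is a minimal Python sketch of percentile-rank scoring; the norm-group scores below are invented for illustration and are not taken from any real assessment:

```python
# Minimal sketch: converting a raw test score into a percentile rank against
# a normative group. All numbers below are made up for illustration.
from bisect import bisect_left

normative_scores = sorted([12, 15, 17, 18, 18, 20, 21, 23, 25, 27])  # hypothetical norm group

def percentile_rank(raw_score: int) -> float:
    """Percentage of the norm group scoring strictly below the candidate."""
    below = bisect_left(normative_scores, raw_score)
    return 100 * below / len(normative_scores)

print(percentile_rank(21))  # 60.0 -> the candidate outscored 60% of the norm group
```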

Is cognitive ability an IQ test?

Cognitive ability assessments are closely related to IQ tests, but they are not always the same. IQ (Intelligence Quotient) tests are a specific type of cognitive ability test that measures a range of cognitive skills, including problem-solving, logical reasoning and spatial intelligence.

However, there are other cognitive tests that may focus on specific cognitive domains, such as verbal reasoning, numerical reasoning, or figural reasoning.

IQ tests are a subset of cognitive ability assessments but are often used interchangeably with the term "cognitive ability test" in common language.



Problem-Solving Aptitude Test Questions 2024 - Placement Mock Exam Papers with Answers

Practice free online solved Problem-Solving Aptitude Mock Test 2024: Download previous year's solved Problem-Solving placement exam question papers with answers.

  • What is Problem-Solving aptitude test?
  • How to prepare for Problem-Solving aptitude test?
  • What is the difficulty level of Problem-Solving aptitude test?
  • Tips to pass the latest Problem-Solving placement exam 2024.
  • Practice 100+ Problem-Solving aptitude test papers.
  • Top Problem-Solving aptitude mock tests.

What is Problem-Solving Aptitude Test?

The Problem-Solving placement aptitude test is a popular pre-employment cognitive ability assessment. To make better hiring decisions, companies conduct a Problem-Solving test to predict candidates' competency level and likelihood of success in a job role. A company uses Problem-Solving tests during the recruitment process to compare applicants.

The placement aptitude test is designed to assess candidates' Problem-Solving skills and contains multiple-choice questions.

How to Pass Problem-Solving Aptitude Test?

If you are going to take the upcoming Problem-Solving aptitude test in 2024, follow these top five tips to pass it easily:

[1] Practice Realistic Problem-Solving Online Tests

If you want to pass the Problem-Solving aptitude test, then you need to practice a lot. You should take online tests under simulated exam conditions and start practicing tricky questions for the Problem-Solving aptitude test.

[2] Know the Latest Exam Format of the Problem-Solving Placement Test

Researching the format of the Problem-Solving aptitude test beforehand will prevent any surprises during the real exam.

[3] Focus on your Weakest Areas

While practicing time-bound Problem-Solving mock tests, try to find some difficult topics from the Problem-Solving aptitude test syllabus and create strategies to improve exam results.

[4] Manage your Time Carefully During the Problem-Solving Mock Test Papers

Time-bound Problem-Solving mock tests are conducted under strict time restrictions. Keep an eye on the clock during the Problem-Solving aptitude test and work steadily.

[5] What if I Fail Problem-Solving Aptitude Exam on my First Attempt?

The Problem-Solving aptitude test is not that difficult, but some candidates are unable to score well on their first attempt. Such applicants are advised to practice previous years' Problem-Solving aptitude questions, which will help them understand the test format. This way, test takers will be able to score well in the Problem-Solving section of the placement examination next time.

Is Problem-Solving Aptitude Test Difficult?

The overall difficulty level of the Problem-Solving placement aptitude exam is moderate. Problem-Solving aptitude question papers are one of the important study materials for preparing effectively for the Problem-Solving placement exam. Candidates should analyse Problem-Solving aptitude test papers thoroughly to find the most important and scoring topics of the latest Problem-Solving aptitude exam pattern and syllabus.

Best Tips to Prepare for Problem-Solving Aptitude Test for the Latest Campus Placements 2024

This guide will also help if you are searching for:

  • Top tips to prepare and pass Problem-Solving Aptitude Test.
  • Mistakes to avoid during Problem-Solving placement exam preparation.
  • How to clear Problem-Solving aptitude test without preparation?
  • How to clear aptitude tests in Problem-Solving campus placement papers?
  • How to prepare for Problem-Solving aptitude test in one day?
  • Free Problem-Solving aptitude tests.
  • Online Problem-Solving aptitude tests.

Practice 100+ Free Online Problem-Solving Aptitude Tests with Solved Questions and Answers

When you are preparing for the Problem-Solving aptitude test, do not forget to practice with old Problem-Solving question papers. There are many good sources online, where you can download PDF exam papers or practice free Problem-Solving aptitude mock tests:

  • Free Online PROBLEM-SOLVING TEST Practice & Preparation Tests
  • Use our problem-solving test to hire the best - TestGorilla
  • Problem-Solving Skills Assessment Test | Vervoe
  • 16 Problem-Solving Test Interview Questions & Answers
  • McKinsey Problem-Solving Test Practice Test A
  • Analytical Reasoning Tests: Free Online Questions & Tips
  • Problem-Solving Quizzes & Trivia - ProProfs
  • Logical Reasoning Test: 100s Of Free Practice Questions (2022)
  • Free Bairesdev Problem-Solving Test Prep - 12minprep
  • What is Problem-Solving? Steps, Process & Techniques | ASQ
  • Problem-Solving Ability Test - MeritTrac
  • Problem-Solving Skills Test - Mettl
  • Problem-Solving Assessment and Tests | Discover Assessments
  • Problem-solving | HackerRank
  • Reasoning Puzzles Questions and Answers - Free Online Test
  • Problem-Solving Quizzes | Study.com
  • McKinsey Problem-Solving Game - Guide & Mock Test
  • Creative Problem-Solving Test - Psychology Today
  • How To Test Problem-Solving Skills In Tech Interview
  • Top 20 Problem-Solving Interview Questions (Example Answers …
  • Free Analytical Reasoning Test Practice for Jobs - 2022
  • Logical Problems Quiz - Reasoning Questions and Answers
  • Problem-Solving Flashcards, test questions and answers
  • PISA Test - OECD
  • Problem-Solving Reasoning - key concepts with solved examples
  • Problem-Solving Skills Test | Soft Skills & Management Skills …
  • Online Problem-Solving Skills Test For Recruitment
  • Problem-Solving Test - 9 - MBA Entrance | GRE | SAT | GMAT
  • Problem-Solving and Decision Making Free Practice Test
  • Test: Problem-Solving- 1 | 10 Questions MCQ Test General Test ...
  • McKinsey Assessment Test: Free Practice Questions (2022)
  • 26 Good Examples of Problem-Solving (Interview Answers)
  • 8 Common Problem-Solving Interview Questions and Answers
  • Problem-Solving Questions and Answers for Interviews
  • 250+ TOP MCQs on Problem-Solving and Answers
  • Free Problem-Solving Online Practice Tests - WizIQ
  • Problem-solving test: Telomere replication - PubMed
  • Problem-Solving - Advanced Test: Assess and Hire the best …
  • Problem-Solving Assessments | Aptitude Practice Tests | Best …
  • Problem-Solving Games, Activities & Exercises for Adults
  • A comprehensive guide to the McKinsey PST and how to prepare
  • Problem-Solving Test | HighMatch
  • 10 Interview Questions to Determine Problem-Solving Skills
  • Logical Reasoning Tests: A 2022 Guide - Psychometric Success
  • Bain Online Test: Overview & Samples | MConsultingPrep
  • Sample GMAT Problem-Solving Questions, With Answers
  • 10 problem-solving interview questions to find top talent

Top Problem-Solving Placement Aptitude Mock Tests

  • Problem-Solving Aptitude tests: 3i Infotech, AAI, ABACUS, ABB.
  • Problem-Solving Aptitude tests: Accel Frontline, Accenture, Aditi, Adobe.
  • Problem-Solving Aptitude tests: ADP, Agreeya, Akamai, Alcatel Lucent.
  • Problem-Solving Aptitude tests: Allfon, Alumnus, Amazon, Amdocs.
  • Problem-Solving Aptitude tests: AMI, Andhra Bank, AppLabs, Apps Associates.
  • Problem-Solving Aptitude tests: Aricent, Ashok Leyland, Aspire, Atos Origin.
  • Problem-Solving Aptitude tests: Axes, Bajaj, Bank of Maharashtra, BEL, BEML.
  • Problem-Solving Aptitude tests: BHEL, BirlaSoft, Blue Dart, Blue Star.
  • Problem-Solving Aptitude tests: BOB, BPCL, BPL, Brakes.
  • Problem-Solving Aptitude tests: BSNL, Cadence, Calsoft, Canara Bank.
  • Problem-Solving Aptitude tests: Canarys, Capgemini, Caritor, Caterpillar.
  • Problem-Solving Aptitude tests: CDAC, C-DOT, CGI, Changepond.
  • Problem-Solving Aptitude tests: Ciena, Cisco, Citicorp, CMC.
  • Problem-Solving Aptitude tests: Consagous, Convergys, CORDYS, Crompton.
  • Problem-Solving Aptitude tests: CSC, CTS, Cummins, Dell, Deloitte.
  • Problem-Solving Aptitude tests: Delphi-TVS, DeShaw, Deutsche, Dotcom.
  • Problem-Solving Aptitude tests: DRDO, EDS, ELGI, ELICO.
  • Problem-Solving Aptitude tests: EIL, ERICSSON, Essar, Fidelity.
  • Problem-Solving Aptitude tests: Flextronics, Freescale, FXLabs, GAIL.
  • Problem-Solving Aptitude tests: GE, Genpact, Geodesic, Geometric.
  • Problem-Solving Aptitude tests: Globaledge, GlobalLogic, Godrej, Google.
  • Problem-Solving Aptitude tests: Grapecity, HAL, HCL, Hexaware.
  • Problem-Solving Aptitude tests: Honeywell, HP, HPCL, HSBC, Huawei.
  • Problem-Solving Aptitude tests: Hughes, IBM, IBS, ICICI.
  • Problem-Solving Aptitude tests: iGate, Impetus, iNautix, Indian Airforce.
  • Problem-Solving Aptitude tests: Indian Airlines, Infosys, Infotech, Intec.
  • Problem-Solving Aptitude tests: Integra, Intergraph, IOCL, iSOFT.
  • Problem-Solving Aptitude tests: ISRO, Ittiam, JSW, Keane.
  • Problem-Solving Aptitude tests: Kenexa, L & T, L & T Infotech, LG Soft.
  • Problem-Solving Aptitude tests: Lifetree, LionBridge, Mahindra Satyam, Mastek.
  • Problem-Solving Aptitude tests: Maveric, McAfee, MECON, Microsoft, MindTree.
  • Problem-Solving Aptitude tests: Miraclesoft, Mistral, Motorola, Mphasis.
  • Problem-Solving Aptitude tests: MTNL, NIC, Nokia Siemens, Novell.
  • Problem-Solving Aptitude tests: NTPC, Nucleus, ORACLE, Patni.
  • Problem-Solving Aptitude tests: Perot, Polaris, Ramco, Robert Bosch.
  • Problem-Solving Aptitude tests: Samsung, SAP, Sapient, Sasken.
  • Problem-Solving Aptitude tests: SBI, Sierra Atlantic, Sonata, Sony India.
  • Problem-Solving Aptitude tests: Sutherland, Syntel, TCS, Tech Mahindra.
  • Problem-Solving Aptitude tests: VeriFone, Virtusa, Wipro, Zensar.

Other Placement Aptitude Tests

  • General Aptitude Test.
  • Quantitative Aptitude Test.
  • Verbal Ability Aptitude Test.
  • Logical Reasoning Aptitude Test.
  • Cognitive Speed Aptitude Test.
  • Critical Thinking Aptitude Test.
  • Decision-Making Aptitude Test.
  • Problem-Solving Aptitude Test.
  • Psychometric Aptitude Test.
  • Spatial Reasoning Aptitude Test.

Aptitude Questions and Answers

Aptitude interview questions and answers.

Here you can find Aptitude interview questions and answers for your placement interviews and entrance exam preparation.

Why should I learn to solve Aptitude questions?

Learn and practise solving Aptitude questions to enhance your skills so that you can clear interviews, competitive examinations, and various entrance tests (CAT, GATE, GRE, MAT, bank exams, railway exams, etc.) with full confidence.

Where can I get Aptitude questions and answers with explanations?

IndiaBIX provides you with numerous Aptitude questions and answers with explanations. Fully solved problems with detailed answer descriptions and explanations are given and will be easy to understand.

Where can I get Aptitude MCQ interview questions and answers (objective type, multiple choice)?

Here you can find multiple-choice-type Aptitude questions and answers for your interviews and entrance examinations. Objective-type and true-or-false-type questions are also given here.

How do I download Aptitude questions in PDF format?

You can download Aptitude quiz questions and answers as PDF files or eBooks.

How do I solve Aptitude quiz problems?

You can easily solve all kinds of quiz questions based on Aptitude by practising the given exercises, including shortcuts and tricks.

  • Problems on Trains
  • Time and Distance
  • Height and Distance
  • Time and Work
  • Simple Interest
  • Compound Interest
  • Profit and Loss
  • Partnership
  • Problems on Ages
  • Volume and Surface Area
  • Permutation and Combination
  • Problems on Numbers
  • Problems on H.C.F and L.C.M
  • Decimal Fraction
  • Simplification
  • Square Root and Cube Root
  • Surds and Indices
  • Ratio and Proportion
  • Pipes and Cistern
  • Boats and Streams
  • Alligation or Mixture
  • Races and Games
  • Stocks and Shares
  • Probability
  • True Discount
  • Banker's Discount
  • Odd Man Out and Series


Why employment aptitude tests are crucial for screening job applicants (plus the 7 best tests)


Your recruitment tools and practices have a direct impact on your outcomes.

Over-reliance on resumes limits your talent pool and provides insufficient data, which leaves you to base decisions on hunches and end up with mis-hires.

Pre-employment aptitude tests solve these problems.

As a candidate screening method, employment aptitude tests accurately predict role performance regardless of the applicants’ backgrounds – reducing bias, widening your talent pool, and ensuring you always make the right hire.

But what are aptitude tests, and which platform has the best online ones? Let’s take a look.

Table of contents

  • What is an employment aptitude test?
  • What are the different types of aptitude tests for employment?
  • Why is it important to use aptitude assessments?
  • 7 best online aptitude tests for employment
  • How should you use employee aptitude tests in your recruitment process?
  • How can you help your candidates prepare for your employment aptitude tests?
  • Use employment aptitude tests to hire experts for every position
  • Employment aptitude test FAQs

An employment aptitude test is a tool employers use to fairly and objectively evaluate candidates’ hard and soft skill sets, personality traits, talents, values, and competencies required for a role. 

Employment aptitude tests are also distinct from career assessments, which help individuals explore the best career options according to their skills. Recruiters use aptitude evaluations to eliminate resumes and hire more effectively. 

TestGorilla’s report, “The State of Skills-Based Hiring 2024,” proves multi-measure employment aptitude tests to be the most accurate method of screening candidates and predicting good job performance for any role.

Below, we explain the five major types of pre-employment aptitude tests and their place in the candidate screening process.

5 types of aptitude tests for employment graphic

1. Role-specific skills tests

Role-specific evaluations are indispensable in any talent screening process because they ensure candidates have the practical abilities to excel in a role, from coding and writing to project management.

They gauge candidates’ technical aptitude, industry knowledge, proficiency with specific software, and other specialized skills.

TestGorilla’s Microsoft SQL Server and SQLite Online Skills tests are two good examples. The former evaluates a candidate’s in-depth knowledge of Microsoft SQL Server. 

The latter test assesses low-complexity database manipulation skills for junior backend developer screening with questions like the following:

An example question from TestGorilla's Microsoft SQL Server test and SQLite Online Skills test graphic
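For readers who want a concrete sense of what a low-complexity database manipulation task can look like, here is a hedged sketch using Python's built-in sqlite3 module; the table, data, and task are invented for this illustration and are not drawn from TestGorilla's actual question bank:

```python
# Illustration only: the kind of small SQLite task a junior backend screening
# question might resemble. Table, rows, and task are fabricated for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("Ada", 120.0), ("Ben", 75.5), ("Ada", 30.0)],
)

# Task: return each customer's order count and total spend, highest spend first.
rows = conn.execute(
    "SELECT customer, COUNT(*) AS orders, SUM(total) AS spend "
    "FROM orders GROUP BY customer ORDER BY spend DESC"
).fetchall()
print(rows)  # [('Ada', 2, 150.0), ('Ben', 1, 75.5)]
```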

2. Reasoning tests

Reasoning assessments measure how well candidates evaluate and interpret information. There are several different types:

Numerical reasoning measures applicants’ ability to identify patterns in numbers, work with charts and graphs, and correctly interpret simple mathematical data.

Mechanical reasoning tests evaluate how well candidates understand basic mechanical concepts, such as velocity and pulleys.

Spatial reasoning determines whether individuals can analyze 2-D and 3-D objects. This evaluation is ideal for assessing candidates for science, technology, engineering, and mathematics (STEM) positions.

Reading comprehension tests an individual’s ability to draw conclusions from written passages and identify key information in the text. It’s useful for any role that involves research and analysis.

Verbal reasoning determines whether candidates can interpret written data and draw logical conclusions from what they read. The assessments use questions like the one below from our Verbal Reasoning test :

An example question from TestGorilla's Verbal Reasoning test

3. Situational judgment tests

Decision-making and problem-solving are the two main types of situational judgment evaluations.

Decision-making assessments evaluate how effectively applicants analyze information to make the best choice among multiple options.

Problem-solving tests, meanwhile, assess candidates’ ability to interpret data, apply logic, organize information according to a set of rules, and respond appropriately to complex situations. They use realistic scenarios like the one pictured in TestGorilla’s example below to identify good problem solvers:

An example question from TestGorilla's Problem Solving test

4. Personality tests

Being a good team member or leader takes more than technical skills. Hiring someone with unsuitable personality traits for the role negatively affects others in the workplace, but you can only see this side of your applicants if you evaluate their personality beforehand.

Addressing personality-related needs is also useful down the line because it encourages:

  • Enhanced company productivity by bringing out the best in employees
  • Improved wellbeing by accommodating personality differences
  • Better understanding between managers and their team members
  • Greater workplace diversity by selecting candidates with suitable traits

Scotiabank is a great example of a company that significantly improved workplace diversity by replacing resumes with personality tests. As a result, the share of Scotiabank’s new Black employees rose to 6% from 1%, and more than 50% of its hires are women.

While not always included in assessments, personality tests send candidates on an introspective journey showing how they react in different scenarios and fit within your team.

The preview below from TestGorilla’s Big Five (OCEAN) Personality test is an example of a standard personality evaluation question where candidates rate how accurately a sentence describes them.

An example question from TestGorilla's Big 5 test

Other personality evaluations include the Enneagram, DISC, and 16 Personality Types tests. Each test uses a different methodology.

Note: Pre-employment or career personality tests should never be the sole basis of your hiring decision-making. Many of them are meant to be a tool to better understand people, and they’re only useful in combination with other assessments.

Switch to skills-based hiring with TestGorilla

TestGorilla makes screening easy and effective. Get started for free with our Big 5 personality test and four other complimentary skills tests.

problem solving aptitude test examples

5. Values tests

Each employee shapes your company culture, so the Culture Add test or an equivalent should be a recruitment staple to evaluate a candidate’s values.

You should look for a culture add rather than a culture fit for each hire. Here’s the difference:

Culture fit is a bias-riddled assessment of a candidate’s similarity to existing employees in terms of hobbies, interests, and career explorations. It’s all about how you “click” in the interview, which leads to a stagnant or insufficient work environment.

Culture add measures the candidate’s closeness to core organizational values and ability to positively contribute to the company.

According to SHRM, companies identify successful applicants when their measures are objective, and their criteria are linked to specific skills, abilities, values, and motivators of candidates.

A Culture Add test quantifies your values to assess and objectively compare applicants’ test results against company benchmarks. You set your custom benchmarks by answering questions like these:

An example question from TestGorilla's Culture Add test

Including this element in your aptitude test ensures your company gains the person it needs, even if, at first, it seems like an unusual career match.

We already mentioned that screening with aptitude assessments gives the best results, but what are the benefits of picking the right online aptitude test for a job? Let’s go over them below:

Save time on screening and interviewing

You save time by instantly giving all of your applicants the same assessment instead of poring over hundreds of resumes only to gain incomplete candidate data.

Measure skills accurately even if you don’t have equivalent knowledge in-house

The best employment aptitude tests are created by subject-matter experts who vouch for their validity. You can rely on them to thoroughly and accurately evaluate candidates’ abilities.

Evaluate culture add

Good screening platforms offer a culture add or similar evaluation to help you predict how candidates contribute to your culture. Culture add matters because even a highly specialized professional can fail because of a cultural mismatch.

Make better, more objective hiring decisions

Limit bias, fairly and easily compare results, and hire the best addition to your company instead of picking a confident interviewer who could be a poor fit or disqualifying a great candidate because of a resume gap.

Reduce turnover and your expenses

Employee aptitude tests increase the chances of hiring candidates who fit the role, reducing turnover rates and the high costs of finding someone new.

These are the best employment aptitude tests and platforms for candidate screening.

7 best online aptitude tests for employment graphic

1. TestGorilla

TestGorilla is among the top pre-employment aptitude test providers available today. Our test library contains more than 300 tests, all developed by subject-matter experts.

Combine up to five individual tests for a bespoke assessment, and add custom questions – including those requiring a video reply – to any test. Most individual tests take around 10 minutes, with full assessments taking less than an hour.

Candidates can complete their assessments on their mobile devices. You then get easy-to-understand reports that automatically compare applicants and display the results.

In addition to testing cognitive ability, you can evaluate job-specific skills, situational judgment, language fluency, culture fit, soft skills, and personality traits, enabling you to create a holistic, multi-measure assessment within a single platform.

Ocean Outdoor UK, a digital out-of-home advertising firm, uses a combination of TestGorilla’s pre-employment aptitude tests and custom questions to evaluate candidates’ competencies. It identifies work ethic, critical thinking skills, attention to detail, and practical ability.

Since switching to TestGorilla, the company has experienced great results, including:

  • Saving up to 10 hours spent on screening and interviewing per candidate
  • Overhauling the long recruitment process and improving its candidate experience
  • Decreasing the rate of unsuccessful hires by around 44%

Transform the way you hire with TestGorilla

TestGorilla makes screening easy and effective. Having every test you need with customization features in one place streamlines your recruitment process and leads you to the best person for the role every time.

2. Truity

Truity has questionnaires for individuals who want to learn more about their skills or find their ideal career recommendations during their job search and for businesses that want to screen candidates.

The platform focuses on personality tests, including the Myers-Briggs test, DISC, Big Five, and the Enneagram. They also offer free career aptitude tests and career interest evaluations based on an individual’s traits.

3. Criteria Corp

Criteria Corp is a platform offering aptitude tests for employment measuring areas such as problem-solving, critical thinking, attention to detail, and learning ability.

Its mobile-ready products include cognitive tests and evaluations to measure how well candidates maintain, install, operate, and repair machinery. The platform also has a language-independent cognitive assessment for candidates whose first language isn’t English.

4. TalentLens

Developed by Pearson, an educational publisher, TalentLens is an online platform offering employment aptitude tests that are accessible to candidates through their mobile devices.

Tests include cognitive assessments to measure adaptability, critical reasoning, problem-solving, and learning ability. The platform’s personality assessments evaluate work style, behaviors, and motivators, while its language skills assessments examine how fluent candidates are in English, Spanish, French, or Dutch.

5. ThriveMap

ThriveMap is an online aptitude test provider specializing in high-volume hiring environments, such as manufacturing and call centers. Assessments are always based on the industry the candidate is applying for.

ThriveMap creates custom assessments for every role to measure applicants based on what the job entails. The drawback is that developing an examination for an open position can take 4-6 weeks.

6. PI Hire

PI Hire is an online aptitude assessment provider on The Predictive Index platform. Companies create a behavioral target for each position, listing the personality traits best suited for that role.

The examinations then evaluate each applicant based on this target, visually displaying which job-seekers are a match. You screen for traits such as collaboration, patience, and extroversion. However, job-specific skills testing isn’t available.

7. Prevue

Prevue is a platform for companies that want to measure each applicant’s cognitive ability, personality, and motivation. It starts with a job profile that sets benchmarks for the position. Candidates are then invited to the assessments.

On the Prevue dashboard, you can see a visual representation of how close each applicant comes to the benchmark. For example, Prevue shows how each candidate lands on a scale between “highly cooperative” and “very competitive.”

Employee aptitude tests should be the step after the candidates’ application but before the interview.

You should customize tests to the role and the company and cover all factors that determine the potential for success: hard skills, soft skills, cognitive abilities, personality, culture add, and, in some cases, language.

TestGorilla has hundreds of examinations for any role and career path, enabling you to choose up to five as part of your talent assessment, as shown below.

Preview of TestGorilla's talent assessment creation process graphic

You can take further steps to tailor the assessment to your needs, such as:

  • Add custom questions using real-life scenarios the role is likely to deal with, or coding questions for programming-related roles
  • Include qualifying questions to filter candidates before the testing stage
  • Enable special accommodation and anti-cheating settings
  • Request video answers as a mini-interview to help with filtering

Take full advantage of these features to get well-suited talent through the interview stage, but don’t discard the candidates that don’t advance to the open position.

Thank them for their time, and keep those with potential in your talent pool to feed your proactive recruitment strategy.

Get the most out of skills-based recruiting with TestGorilla

Implementing talent assessments is easy using TestGorilla’s predeveloped test library with evaluations for any role. No more “gut feeling” hiring and expensive mis-hires. Sign up for a live demo to see how it works.


It all comes down to communication with candidates. Explaining your process goes a long way to help them prepare for the online aptitude testing and start a relationship on a good note. Here are some suggestions to get you started:

Outline what steps are in the hiring process

Manage expectations as soon as you review their application and keep the candidate’s experience front of mind. Research shows that candidates have declined offers because of a poor candidate experience.

Provide information about the employment aptitude tests

Explain what the tests entail to ease anxiety and help candidates show up as their best selves. Show them a preview or example with a short note on how that helps you predict a good role match.

Inform them about the possible accommodations before testing begins

Tell candidates ahead of time that they can ask for special accommodations if they are non-fluent English speakers or have a disability.

It’s also a good idea to communicate that the screening process evaluates applicants’ suitability for the specific role and company by measuring various aspects, not just their industry expertise.

Therefore, it doesn’t mean they aren’t good enough for the company if they don't make it to the interview. It could be just the wrong type of career or time to work together.

You can still keep in touch and engage for a more suitable role in the future.

Integrating employment aptitude tests into the hiring process offers clear advantages to your recruitment team and candidates. Remember, the benefits include:

  • Mitigating unconscious bias in the recruitment process
  • Saving time during the candidate-sourcing stage
  • Easily comparing and selecting candidates

Check out our demo to integrate employment aptitude tests into your candidate screening process.

Take a product tour and see how to create your first assessment.

If you’re ready to upgrade your applicant screening process, get started with a Free forever plan today.

Still have questions about online aptitude assessment tests? Let’s look at the most commonly asked questions about these types of tests.

What is an aptitude test for a job?

An aptitude test for employment is a method of evaluating a candidate’s skills, abilities, beliefs, personality traits, and other aspects to predict potential for success in the role. A combination of online aptitude tests is the most reliable way to screen candidates.

What is usually asked in employment aptitude tests?

Employment aptitude tests typically ask reasoning, situational judgment, personality, culture, and role-related questions based on real scenarios. For example, a pre-employment assessment about critical thinking could ask candidates to pick a logical conclusion based on the provided premises. A coding test could require candidates to choose a strategy for reducing network traffic expenses when a large JavaScript file takes up a lot of bandwidth.
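As a hedged illustration of the reasoning such a coding question probes, the short Python sketch below measures how much a large, repetitive script shrinks under gzip compression; the bundle contents are synthetic, and compression is only one of several valid strategies (alongside caching, code splitting, and CDNs):

```python
# Illustration only: why compressing a large JavaScript bundle cuts the bytes
# sent over the network. The "bundle" here is synthetic stand-in content.
import gzip

bundle = ("export function handler() { return 42; }\n" * 5000).encode("utf-8")
compressed = gzip.compress(bundle)

print(f"uncompressed: {len(bundle):,} bytes")
print(f"gzip-compressed: {len(compressed):,} bytes")
# A highly repetitive bundle compresses dramatically, which is why enabling
# gzip or Brotli on the server is a standard first answer to bandwidth questions.
```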

The specific questions depend on the type and purpose of the evaluation.

Do aptitude tests really matter?

Business-related scientific research shows that reliable aptitude assessments are valid predictors of job performance. With an aptitude test for employment, you determine how well your prospective candidate performs in the role. The data you gather by examining abilities and traits is much more reliable than what you could gauge from resumes, even when qualifications appear exemplary.


IELTS.NET - Your Ultimate Resource for Language Mastery


Mastering IELTS Speaking: Describe a Time When You Had to Use Your Problem-Solving Skills

In the IELTS Speaking test, a common topic revolves around describing situations where you had to use your problem-solving skills. This article provides a comprehensive guide to tackling this question, ensuring you achieve the highest possible band score.

Table of Contents

  • Introduction to the IELTS Speaking Test
  • Common Questions in IELTS Speaking Part 1: Introduction and Interview
  • Part 2: Long Turn (Cue Card)
  • Part 3: Two-Way Discussion
  • Key Vocabulary and Phrases for High Scores
  • Examiner’s Tips for Scoring High

Introduction to the IELTS Speaking Test

The IELTS Speaking test consists of three parts:

  • Part 1: Introduction and Interview – Here, the examiner will ask you general questions about yourself.
  • Part 2: Long Turn (Cue Card) – This part requires you to speak for 1-2 minutes on a given topic.
  • Part 3: Two-Way Discussion – This involves more abstract questions related to the topic discussed in Part 2.

Examiners grade your performance based on four criteria: Fluency and Coherence, Lexical Resource, Grammatical Range and Accuracy, and Pronunciation.

Common Questions in IELTS Speaking Part 1: Introduction and Interview

In Part 1, examiners aim to make you feel comfortable. Examples of common questions include:

  • Can you tell me about yourself?
  • What do you do?
  • What are your hobbies?

Sample Question and Answer

Question: What do you do?

Answer: “Currently, I am working as a software developer at a tech firm. My job primarily involves crafting efficient code and solving complex software problems. This role allows me to continually challenge myself and improve my problem-solving skills.”

Part 2: Long Turn (Cue Card)

For this part, you will receive a cue card like the one below.

Cue Card: Describe a Time When You Had to Use Your Problem-Solving Skills.

You should say:

  • What the problem was
  • How you dealt with it
  • What the result was
  • And explain why you felt good about solving this problem


Sample Answer

“I remember a particular instance at my previous job where I had to solve a complex technical issue. Our company’s main software was suddenly crashing, and we had a major client presentation the next day. The problem was multifaceted, involving both the backend server and the frontend user experience.

Firstly, I analyzed the error logs to identify the root cause. It turned out to be a memory leak due to inefficient coding practices. To resolve this, I collaborated with the development team to isolate the faulty code and implemented a more efficient algorithm.

Secondly, once the code was fixed, I conducted extensive testing to ensure stability. This process involved creating various test scenarios and running the software through multiple checks.

As a result, our software was back up and running within a few hours, and the presentation went off without a hitch. I felt incredibly proud and satisfied because it showcased my ability to handle high-pressure situations and utilize my problem-solving skills effectively.”

Follow-Up Questions and Answers

Question: What skills are important for solving problems?

Answer: “I believe critical thinking, analytical abilities, and effective communication are paramount. Critical thinking allows you to break down the problem into manageable parts, analytical skills help you find the root cause, and communication ensures that everyone involved is on the same page.”

Question: Can problem-solving skills be learned or are they innate?

Answer: “While some people may naturally possess strong problem-solving abilities, I firmly believe that these skills can be learned and honed over time. Continuous practice and real-world experience play a crucial role in developing effective problem-solving strategies.”

Part 3: Two-Way Discussion

In Part 3, the examiner will ask more abstract questions related to problem-solving.

Example Questions and Answers

Question: How important are problem-solving skills in today’s world?

Answer: “Problem-solving skills are immensely important in today’s fast-paced world. With the rapid advancements in technology and the constant emergence of new challenges, being able to think critically and solve problems efficiently is a significant asset. These skills are crucial not just in the professional realm but also in everyday life situations.”

Question: Can you give an example of how technology has improved problem-solving abilities?

Answer: “Certainly, technology has revolutionized problem-solving in many ways. For instance, data analysis tools have made it easier to identify patterns and trends, allowing businesses to make more informed decisions. Additionally, collaborative platforms and communication technologies facilitate instant sharing of information, making it easier for teams to tackle issues collectively and more effectively.”

Key Vocabulary and Phrases for High Scores

Here are some important words and phrases to enrich your responses:

  • Analyze /ˈænəˌlaɪz/ (verb): Examine in detail to discover meaning, essential features.
  • Root cause (noun): The fundamental reason for the occurrence of a problem.
  • Multifaceted /ˌmʌltiˈfæsɪtɪd/ (adjective): Having many aspects or phases.
  • Critical thinking (noun): The objective analysis and evaluation of an issue to form a judgement.
  • Algorithm /ˈælɡəˌrɪðəm/ (noun): A process or set of rules to be followed in problem-solving operations.
  • Collaborate /kəˈlæbəˌreɪt/ (verb): Work jointly on an activity or project.

Example Sentences

  • “We decided to analyze the situation from multiple perspectives to identify the root cause.”
  • “The problem was multifaceted, requiring inputs from various departments.”
  • “By adhering to a structured algorithm, we managed to solve the issue efficiently.”
  • “Effective collaboration was key to addressing the challenges quickly.”

Examiner’s Tips for Scoring High

  • Practice regularly: Regular speaking practice improves fluency and confidence.
  • Use varied vocabulary: Demonstrate a broad range of vocabulary and avoid repetition.
  • Stay coherent: Ensure your answers are well-structured and logically organized.
  • Answer fully: Provide comprehensive answers to each question, incorporating relevant details and examples.
  • Stay calm: Confidence boosts performance, so practice staying calm under exam conditions.

In conclusion, mastering the IELTS Speaking test requires preparation and strategic practice. Use this guide to hone your problem-solving descriptions and approach your test with confidence!



Navigating Spatial Ability for Mathematics Education: a Review and Roadmap

  • REVIEW ARTICLE
  • Open access
  • Published: 17 August 2024
  • Volume 36, article number 90 (2024)


  • Kelsey E. Schenck, ORCID: orcid.org/0000-0002-3777-2085
  • Mitchell J. Nathan, ORCID: orcid.org/0000-0003-2058-7016

Spatial skills can predict mathematics performance, with many researchers investigating how and why these skills are related. However, a literature review on spatial ability revealed a multiplicity of spatial taxonomies and analytical frameworks that lack convergence, presenting a confusing terrain for researchers to navigate. We expose two central challenges: (1) many of the ways spatial ability is defined and subdivided are often not based in well-evidenced theoretical and analytical frameworks, and (2) the sheer variety of spatial assessments. These challenges impede progress in designing spatial skills interventions for improving mathematics thinking based on causal principles, selecting appropriate metrics for documenting change, and analyzing and interpreting student outcome data. We offer solutions by providing a practical guide for navigating and selecting among the various major spatial taxonomies and instruments used in mathematics education research. We also identify current limitations of spatial ability research and suggest future research directions.


Introduction

Spatial ability can be broadly defined as imagining, maintaining, and manipulating spatial information and relations. Over the past several decades, researchers have found reliable associations between spatial abilities and mathematics performance (e.g., Newcombe, 2013 ; Young et al., 2018a ). However, the sheer plurality of spatial taxonomies and analytical frameworks that scholars use to describe spatial skills, the lack of theoretical spatial taxonomies, and the variety of spatial assessments available makes it very difficult for education researchers to make the appropriate selection of spatial measures for their investigations. Education researchers also face the daunting task of selecting the ideal spatial skills to design studies and interventions to enhance student learning and the development of reasoning in STEM (science, technology, engineering, and mathematics) more broadly. To address these needs, we have provided a review that focuses on the relationship between spatial skills and mathematical thinking and learning. Our specific contribution is to offer a guide for educational researchers who recognize the importance of measuring spatial skills but who are themselves not spatial skills scholars. This guide will help researchers navigate and select among the various major taxonomies on spatial reasoning and among the various instruments for assessing spatial skills for use in mathematics education research.

We offer three central objectives for this paper. First, we aim to provide an updated review of the ways spatial ability is defined and subdivided. Second, we list some of the currently most widely administered instruments used to measure subcomponents of spatial ability. Third, we propose an organizational framework that acknowledges this complex picture and — rather than offer overly optimistic proposals for resolving long-standing complexities — offers ways for math education researchers to operate within this framework from an informed perspective. This review offers guidance through this complicated state of the literature to help STEM education researchers select appropriate spatial measures and taxonomies for their investigations, assessments, and interventions. We review and synthesize several lines of the spatial ability literature and provide researchers exploring the link between spatial ability and mathematics education with a guiding framework for research design. To foreshadow, this framework identifies three major design decisions that can help guide scholars and practitioners seeking to use spatial skills to enhance mathematics education research. The framework provides a theoretical basis to select: (1) a spatial ability taxonomy, (2) corresponding analytical frameworks, and (3) spatial tasks for assessing spatial performance (Fig.  1 ). This guiding framework is intended to provide educational researchers and practitioners with a common language and decision-making process for conducting research and instruction that engages learners’ spatial abilities. The intent is that investigators’ use of this framework may enhance their understanding of the associative and causal links between spatial and mathematical abilities, and thereby improve the body of mathematics education research and practice.

Figure 1. Major elements of an investigation into the role of spatial reasoning

The Importance of Spatial Reasoning for Mathematics and STEM Education

Spatial ability has been linked to the entrance into, retention in, and success within STEM fields (e.g., Shea et al., 2001 ; Wolfgang et al., 2003 ), while deficiencies in spatial abilities have been shown to create obstacles for STEM education (Harris et al., 2013 ; Wai et al., 2009 ). Although spatial skills are not typically taught in the general K-16 curriculum, these lines of research have led some scholars to make policy recommendations for explicitly teaching children about spatial thinking as a viable way to increase STEM achievement and retention in STEM education programs and career pathways (Sorby, 2009 ; Stieff & Uttal, 2015 ). Combined, the findings suggest that spatial ability serves as a gateway for entry into STEM fields (Uttal & Cohen, 2012 ) and that educational institutions should consider the importance of explicitly training students’ spatial thinking skills as a way to further develop students’ STEM skills.

Findings from numerous studies have demonstrated that spatial ability is critical for many domains of mathematics education, including basic numeracy and arithmetic (Case et al., 1996 ; Gunderson et al., 2012 ; Hawes et al., 2015 ; Tam et al., 2019 ) and geometry (Battista et al., 2018 ; Davis, 2015 ), as well as more advanced topics such as algebra word problem-solving (Oostermeijer et al., 2014 ), calculus (Sorby et al., 2013 ), and interpreting complex quantitative relationships (Tufte, 2001 ). For example, scores on the mathematics portion of the Program for International Student Assessment (PISA) are significantly positively correlated with scores on tests of spatial cognition (Sorby & Panther, 2020 ). Broadly, studies have found evidence of the connections between success on spatial tasks and mathematics tasks in children and adults. For example, first grade girls’ spatial skills were correlated with the frequency of retrieval and decomposition strategies when solving arithmetic problems (Laski et al., 2013 ), and these early spatial ability scores were the strongest predictors of their sixth-grade mathematics reasoning abilities (Casey et al., 2015 ). In adults ( n  = 101), spatial ability scores were positively associated with mathematics abilities measured through PISA mathematics questions (Schenck & Nathan, 2020 ).

Though there is a clear connection between spatial and mathematical abilities, understanding the intricacies of this relationship is difficult. Some scholars have sought to determine which mathematical concepts engage spatial thinking. For example, studies on specific mathematical concepts found spatial skills were associated with children’s one-to-one mapping (Gallistel & Gelman, 1992 ), missing-term problems (Cheng & Mix, 2014 ), mental computation (Verdine et al., 2014 ), and various geometry concepts (Hannafin et al., 2008 ). Schenck and Nathan ( 2020 ) identified associations between several specific sub-components of spatial reasoning and specific mathematics skills of adults. Specifically, adults’ mental rotation skills correlated with performance on questions about change and relationships, spatial orientation skills correlated with quantity questions, and spatial visualization skills correlated with questions about space and shape. Burte and colleagues ( 2017 ) proposed categories of mathematical concepts such as problem type, problem context, and spatial thinking level to target math improvements following spatial invention training. Their study concluded that mathematics problems that included visual representations, real-world contexts, and that involved spatial thinking are more likely to show improvement after embodied spatial training.

However, these lines of work are complicated by the variety of problem-solving strategies students employ when solving mathematics problems and issues with generalizability. While some students may rely on a specific spatial ability to solve a particular mathematics problem, others may use non-spatial approaches or apply spatial thinking differently for the same assessment item. For example, some students solving graphical geometric problem-solving tasks utilized their spatial skills by constructing and manipulating mental images of the problem, while others created external representations such as isometric sketches, alleviating the need for some aspects of spatial reasoning (Buckley et al., 2019 ). Though this difference could be attributed to lower spatial abilities in the students who used external representations, it could also be attributed to high levels of discipline-specific knowledge seen in domains such as geoscience (Hambrick et al., 2012 ), physics (Kozhevnikov & Thorton, 2006 ), and chemistry (Stieff, 2007 ). Though some amount of generalization is needed in spatial and mathematics education research, investigators should take care not to overgeneralize findings of specific spatial ability and mathematic domain connections.

This selective review shows ample reasons to attend to spatial abilities in mathematics education research and the design of effective interventions. However, studies across this vast body of work investigating the links between spatial abilities and mathematics performance use different spatial taxonomies, employ different spatial measures, and track improvement across many different topics of mathematics education. This variety makes it difficult for mathematics education scholars to draw clear causal lines between specific spatial skills interventions and specific mathematics educational improvements and for educators to follow clear guidance as to how to improve mathematical reasoning through spatial skills development.

The Varieties of Approaches to Explaining the Spatial-Mathematics Connection

Meta-analyses have suggested that domain-general reasoning skills such as fluid reasoning and verbal skills may mediate the relationships between spatial and mathematical skills (Atit et al., 2022 ), and that the mathematical domain is a moderator with the strongest association between logical reasoning and spatial skills (Xie et al., 2020 ). Despite these efforts, the specific nature of these associations remains largely unknown. Several lines of research have suggested processing requirements shared among mathematical and spatial tasks could account for these associations. Brain imaging studies have shown similar brain activation patterns in both spatial and mathematics tasks (Amalric & Dehaene, 2016 ; Hawes & Ansari, 2020 ; Hubbard et al., 2005 ; Walsh, 2003 ). Hawes and Ansari’s ( 2020 ) review of psychology, neuroscience, and education spatial research described four possible explanatory accounts (spatial representations of numbers, shared neuronal processing, spatial modeling, and working memory) for how spatial visualization was linked to numerical competencies. They suggest integrating the four accounts to explain an underlying singular mechanism to explain lasting neural and behavioral correlations between spatial and numerical processes. In a study of spatial and mathematical thinking, Mix et al. ( 2016 ) showed a strong within-domain factor structure and overlapping variance irrespective of task-specificity. They proposed that the ability to recognize and decompose objects (i.e., form perception ), visualize spatial information, and relate distances in one space to another (i.e., spatial scaling ) are shared processes required when individuals perform a range of spatial reasoning and mathematical reasoning tasks.

Efforts to date to document the relationship between mathematics performance and spatial skills or to enhance mathematics through spatial skills interventions show significant limitations in their theoretical framing. Chief among these is that there is currently no commonly accepted definition of spatial ability or its exact sub-components in the literature (Carroll, 1993; Lohman, 1988; McGee, 1979; Michael et al., 1957; Yilmaz, 2009). For example, many studies designed to investigate and improve spatial abilities have tended to focus on either a particular spatial sub-component or a particular mathematical skill. Much of the research has primarily focused on measuring only specific aspects of object-based spatial ability, such as mental rotation. Consequently, there is insufficient guidance for mathematics and STEM education researchers to navigate the vast landscape of spatial taxonomies and analytical frameworks, select the most appropriate measures for documenting student outcomes, design potential interventions targeting spatial abilities, choose appropriate metrics, and analyze and interpret outcome data.

One notable program of research that has been particularly attentive to the spatial qualities of mathematical reasoning is the work by Battista et al. (2018). They collected think-aloud data about emerging spatial descriptions from individual interviews and teaching experiments with elementary and middle-grade students to investigate the relationship between spatial reasoning and geometric reasoning. Across several studies, the investigators seldom observed the successful application of generalized object-based spatial skills of the type typically measured by psychometric instruments of spatial ability. Rather, they found that students’ geometric reasoning succeeded when “spatial visualization and spatial analytic reasoning [were] based on operable knowledge of relevant geometric properties of the spatial-geometric objects under consideration” (Battista et al., 2018, p. 226; emphasis added). By highlighting the ways that one’s reasoning aligns with geometric properties, Battista and colleagues shifted the analytic focus away from general psychological constructs that can be vague and overly broad, and away from a narrow set of task-specific skills, toward an intermediate level of description that is relevant for characterizing topic- and task-specific performance while identifying forms of reasoning that may generalize beyond the specific tasks and objects at hand. For example, property-based spatial analytic reasoning might focus on an invariant geometric property, such as the property of rectangles that their diagonals always bisect each other, to guide the decomposition and transformation of rectangles and their component triangles in service of a geometric proof. Establishing bridges and analytic distinctions between education domain-centric analyses of this sort and traditional psychometric accounts about domain-general spatial abilities is central to our review and broader aims to relate mathematical reasoning processes to spatial processes.

Selecting a Spatial Taxonomy

As noted, a substantial body of empirical evidence indicates that students’ spatial abilities figure into their mathematical reasoning, offering promising pathways toward interventions designed to improve math education. To capitalize on this association, one of the first decisions mathematics education researchers must make is selecting a spatial taxonomy that suits the data collected and analyzed. A spatial taxonomy is an organizational system for classifying spatial abilities and, thus, serves an important role in shaping the theoretical framework for any inquiry as well as interpreting and generalizing findings from empirical investigations. However, the manner in which spatial abilities are subdivided, defined, and named has changed over the decades of research on this topic. In practice, the decision for how to define and select spatial abilities is often difficult for researchers who are not specialists due to the expansive literature in this area.

In an attempt to make the vast number of spatial definitions and subcomponents more navigable for mathematics researchers and educators, we describe three general types of spatial taxonomies that are reflected in the current literature: those that (1) classify according to different specific spatial abilities, (2) distinguish between different broad spatial abilities, and (3) treat spatial abilities as derived from a single, or unitary, factor structure. Although this is not a comprehensive account, these spatial taxonomies were chosen to highlight the main sub-factor dissociations in the literature.

Specific-Factor Structures

Since the earliest conceptualization (e.g., Galton, 1879 ), the communities of researchers studying spatial abilities have struggled to converge on one all-encompassing definition or provide a complete list of its subcomponents. Though the literature provides a variety of definitions of spatial ability that focus on the capacity to visualize and manipulate mental images (e.g., Battista, 2007 ; Gaughran, 2002 ; Lohman, 1979 ; Sorby, 1999 ), some scholars posit that it may be more precise to define spatial ability as a constellation of quantifiably measurable skills based on performance on tasks that load on specific individual spatial factors (Buckley et al., 2018 ). Difficulties directly observing the cognitive processes and neural structures involved in spatial reasoning have, in practice, spurred substantive research focused on uncovering the nature of spatial ability and its subcomponents. Historically, scholars have used psychometric methods to identify a variety of specific spatial subcomponents, including closure flexibility/speed (Carroll, 1993 ), field dependence/independence (McGee, 1979 ; Witkin, 1950 ), spatial relations (Carroll, 1993 ; Lohman, 1979 ), spatial orientation (Guilford & Zimmerman, 1948 ), spatial visualization (Carroll, 1993 ; McGee, 1979 ), and speeded rotation (Lohman, 1988 ). However, attempts to dissociate subfactors were often met with difficulty due to differing factor analytic techniques and variations in the spatial ability tests that were used (D'Oliveira, 2004 ). The subsequent lack of cohesion in this field of study led to different camps of researchers adopting inconsistent names for spatial subcomponents (Cooper & Mumaw, 1985 ; McGee, 1979 ) and divergent factorial frameworks (Hegarty & Waller, 2005 ; Yilmaz, 2009 ). Such a lack of convergence is clearly problematic for the scientific study of spatial ability and its application to mathematics education research.

In the last few decades, several attempts have been made to dissociate subcomponents of spatial ability further. Yilmaz ( 2009 ) combined aspects of the models described above with studies identifying dynamic spatial abilities and environmental spatial abilities to divide spatial ability into eight factors, which acknowledge several spatial skills (e.g., environmental ability and spatiotemporal ability) needed in real-life situations. More recently, Buckley et al. ( 2018 ) proposed an extended model for spatial ability. This model combines many ideas from the previously described literature and the spatial factors identified in the Cattell-Horn-Carroll theory of intelligence (see Schneider & McGrew, 2012 ). It currently includes 25 factors that can also be divided into two broader categories of static and dynamic, with the authors acknowledging that additional factors may be added as research warrants. It is unclear how a dissociation of this many subfactors could be practically applied in empirical research, which we regard as an important goal for bridging theory and research practices.

Dissociation Between Spatial Orientation and Rotational Spatial Visualization

Though specific definitions vary, many authors of the models discussed above agree on making a dissociation between spatial orientation and visualization skills. While perspective-taking (a subfactor of spatial orientation) and rotational spatial visualization tasks both often involve a form of rotation, several studies have indicated that these skills are psychometrically separable. Measures for these skills often ask participants to anticipate the appearance of arrays of objects after either a rotation of the objects (visualization) or a change in the viewer’s perspective relative to the objects (perspective-taking). Findings show that visualization and perspective-taking tasks have different error patterns and activate different neural processes (e.g., Huttenlocher & Presson, 1979; Kozhevnikov & Hegarty, 2001; Wraga et al., 2000). Perspective rotation tasks often lead to egocentric errors such as reflection errors when trying to reorient perspectives, while object rotation task errors are not as systematic (Kozhevnikov & Hegarty, 2001; Zacks et al., 2000). For example, to solve a spatial orientation/perspective-taking task (Fig. 2A), participants may imagine their bodies moving to a new position or viewpoint with the objects of interest remaining stationary. In contrast, the objects in a spatial visualization task are often rotated in one’s imagination (Fig. 2B). Behavioral and neuroscience evidence is consistent with these findings, suggesting a dissociation between an object-to-object representational system and a self-to-object representational system (Hegarty & Waller, 2004; Kosslyn et al., 1998; Zacks et al., 1999). Thus, within the specific-factor structure of spatial ability, spatial orientation/perspective-taking can be considered a separate factor from spatial visualization/mental rotation (Thurstone, 1950).

Figure 2. Exemplars of spatial orientation, mental rotation, and non-rotational spatial visualization tasks. The spatial orientation task (A) is adapted from Hegarty and Waller’s (2004) Object Perspective/Spatial Orientation Test. The mental rotation task (B) is adapted from Vandenberg and Kuse’s (1978) Mental Rotation Test. The non-rotational spatial visualization task (C) is adapted from Ekstrom et al.’s (1976) Paper Folding Task.

Dissociation Between Mental Rotation and Non-rotational Spatial Visualization

The boundaries between specific factors of spatial ability are often blurred and context-dependent. To address this, Ramful and colleagues (2017) have created a three-factor framework that clarifies the distinctions between spatial visualization and spatial orientation (see the “Dissociation Between Spatial Orientation and Rotational Spatial Visualization” section) by treating mental rotation as a separate factor. Their framework is unique in that they used mathematics curricula, rather than solely basing their analysis on a factor analysis, to identify three sub-factors of spatial ability: (1) mental rotation, (2) spatial orientation, and (3) spatial visualization. Mental rotation describes imagining how a two-dimensional or three-dimensional object would appear after it has been turned (Fig. 2B). Mental rotation is a cognitive process that has received considerable attention from psychologists (Bruce & Hawes, 2015; Lombardi et al., 2019; Maeda & Yoon, 2013). Spatial orientation, in contrast, involves egocentric representations of objects and locations and includes the notion of perspective-taking (Fig. 2A). Spatial visualization in their classification system (previously an umbrella term for many spatial skills that included mental rotation) describes mental transformations that do not require mental rotation or spatial orientation (Linn & Peterson, 1985) and can be measured through tasks like those shown in Fig. 2C that involve operations such as paper folding and unfolding. Under this definition, spatial visualization may involve complex sequences in which intermediate steps may need to be stored in spatial working memory (Shah & Miyake, 1996). In mathematics, spatial visualization skills often correlate with symmetry, geometric translations, part-to-whole relationships, and geometric nets (Ramful et al., 2017).

Summary and Implications

As described above, decades of research on spatial ability have involved scholars using factor-analytic methods to identify and define various spatial sub-components. The results of these efforts have created a multitude of specific-factor structures, with models identifying anywhere from two to 25 different spatial subcomponents. However, there are two dissociations that may be particularly important for mathematics education research. The first is the dissociation between spatial orientation and spatial visualization abilities. Spatial orientation tasks typically involve rotating one’s perspective for viewing an object or scene, while spatial visualization tasks require imagining object rotation. The second dissociation is between mental rotation and non-rotational spatial visualization. While this distinction is relatively recent, it separates the larger spatial visualization sub-component into tasks that either involve rotating imagined objects or a sequence of visualization tasks that do not require mental rotation or spatial orientation. The historical focus on psychometric accounts of spatial ability strove to identify constructs that could apply generally to various forms of reasoning, yet it has contributed to a complex literature that may be difficult for scholars who are not steeped in the intricacies of spatial reasoning research to parse and effectively apply to mathematics education.

Studies of mathematical reasoning and learning that rely on specific-factor structures can yield different results and interpretations depending on their choices of factors. For example, Schenck et al. (2022) fit several models using different spatial sub-factors to predict undergraduates’ production of verbal mathematical insights. The authors demonstrated that combining mental rotation and non-rotational spatial visualization into a single factor (per McGee, 1979) rather than separating them (per Ramful et al., 2017) can lead to conflicting interpretations about the relevance of these skills for improving mathematics. Some scholars argue that a weakness of many traditional specific-factor structures of spatial ability is that they rely on exploratory factor analysis rather than confirmatory factor analyses informed by a clear theoretical basis of spatial ability (Uttal et al., 2013; Young et al., 2018b). That results can shift with small and reasonable analytic choices presents a serious problem for reaching convergence on the role of particular spatial abilities in particular mathematics concepts.
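As a concrete illustration of how such analytic choices can be compared, the sketch below contrasts a regression model that collapses two sub-factors into one composite with a model that keeps them separate, using information criteria. The data, variable names, and effect sizes are simulated; this is not the analysis reported by Schenck et al. (2022).

```python
# Illustrative sketch only: comparing a regression that collapses mental rotation
# and non-rotational visualization into one composite against one that keeps them
# separate. All data, names, and effect sizes are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
rotation = rng.normal(size=n)        # mental rotation score (z-scored)
visualization = rng.normal(size=n)   # non-rotational spatial visualization score (z-scored)
insight = 0.5 * rotation + 0.1 * visualization + rng.normal(size=n)  # math outcome

df = pd.DataFrame({
    "rotation": rotation,
    "visualization": visualization,
    "combined": (rotation + visualization) / 2,  # single collapsed factor
    "insight": insight,
})

combined_model = smf.ols("insight ~ combined", df).fit()
separated_model = smf.ols("insight ~ rotation + visualization", df).fit()

# Lower AIC indicates a better trade-off between fit and model complexity.
print("combined-factor model AIC: ", round(combined_model.aic, 1))
print("separate-factors model AIC:", round(separated_model.aic, 1))
```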

Broad-Factor Structures

Alternative approaches to factor-analytic methods rely on much broader distinctions between spatial ability subcomponents. We refer to these alternatives as broad-factor structure approaches since their categorizations align with theoretically motivated combinations of specific spatial ability subfactors. Some scholars who draw on broad-factor structures have argued for a partial dissociation between large-scale and small-scale spatial abilities (Ferguson et al., 2015; Hegarty et al., 2006, 2018; Jansen, 2009; Potter, 1995). Large-scale spatial abilities involve reasoning about larger-scale objects and space, such as physical navigation and environmental maps. Small-scale spatial abilities are defined as those that predominantly rely on mental transformations of shapes or objects (e.g., mental rotation tasks). A meta-analysis (Wang et al., 2014) examining the relationship between small- and large-scale abilities provided further evidence that these two factors should be defined separately. Hegarty et al. (2018) recommend measuring large-scale abilities through sense-of-direction measures and navigation activities. These scholars suggest that small-scale abilities, such as mental rotation, may be measured through typical spatial ability tasks like those discussed in the “Choosing Spatial Tasks in Mathematics Education Research” section of this paper.

Other lines of research that use broad-factor structures have drawn on linguistic, cognitive, and neuroscientific findings to develop a 2 × 2 classification system that distinguishes between intrinsic and extrinsic information along one dimension and between static and dynamic tasks along an orthogonal dimension (Newcombe & Shipley, 2015; Uttal et al., 2013). Intrinsic spatial skills involve attention to a single object's spatial properties, while extrinsic spatial skills predominantly rely on attention to the spatial relationships between objects. The second dimension in this classification system defines static tasks as those that involve recognizing and thinking about objects and their relations. In contrast, dynamic tasks often move beyond static coding of the spatial features of an object and its relations to imagining spatial transformations of one or more objects.

Uttal and colleagues (2013) describe how this 2 × 2 broad-factor classification framework can be mapped onto Linn and Peterson’s (1985) three-factor model, which breaks spatial ability into spatial perception, mental rotation, and spatial visualization sub-factors. Spatial visualization tasks fall into the intrinsic classification and can address static and dynamic reasoning depending on whether the objects are unchanged or require spatial transformations. The Embedded Figures Test (Fig. 3A; Witkin et al., 1971) is an example of the intrinsic-static classification, while Ekstrom and colleagues’ (1976) Form Board Test and Paper Folding Test (Fig. 3B) are two examples of spatial visualization tasks that measure the intrinsic-dynamic classification. Mental rotation tasks (e.g., the Mental Rotations Test of Vandenberg & Kuse, 1978) also represent the intrinsic-dynamic category. Spatial perception tasks (e.g., water level tasks; Fig. 3C; see Inhelder & Piaget, 1958) capture the extrinsic-static category in the 2 × 2 framework because they require coding the spatial positions of objects relative to one another or to gravity without manipulating them. Furthermore, Uttal et al. (2013) address a limitation of Linn and Peterson’s (1985) model by including the extrinsic-dynamic classification, which they note can be measured through spatial orientation and navigation instruments such as the Guilford-Zimmerman Spatial Orientation Task (Fig. 3D; Guilford & Zimmerman, 1948).

Figure 3. Exemplar tasks that map to Uttal and colleagues’ (2013) framework. The intrinsic-static task (A) is adapted from Witkin and colleagues’ (1971) Embedded Figures Test. The intrinsic-dynamic task (B) is adapted from Ekstrom and colleagues’ (1976) Paper Folding Task. The extrinsic-static task (C) is adapted from Piaget and Inhelder’s (1956) water level tasks. The extrinsic-dynamic task (D) is adapted from Guilford and Zimmerman’s (1948) Spatial Orientation Survey Test.
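For readers who find it useful to see this mapping in compact form, the following sketch records the exemplar tasks from Figure 3 as a simple lookup table. It is an organizational aid reflecting the classifications described in this section, not a validated coding scheme.

```python
# Compact lookup table of exemplar task-to-cell mappings for Uttal et al.'s (2013)
# 2 x 2 framework, as described in this section; organizational aid only.
TWO_BY_TWO_EXEMPLARS = {
    ("intrinsic", "static"): ["Embedded Figures Test (Witkin et al., 1971)"],
    ("intrinsic", "dynamic"): [
        "Paper Folding Test (Ekstrom et al., 1976)",
        "Form Board Test (Ekstrom et al., 1976)",
        "Mental Rotations Test (Vandenberg & Kuse, 1978)",
    ],
    ("extrinsic", "static"): ["Water level tasks (Piaget & Inhelder)"],
    ("extrinsic", "dynamic"): ["Guilford-Zimmerman Spatial Orientation Task (1948)"],
}

def classify(task_name: str):
    """Return the (intrinsic/extrinsic, static/dynamic) cell for a known exemplar task."""
    for cell, tasks in TWO_BY_TWO_EXEMPLARS.items():
        if any(task_name.lower() in task.lower() for task in tasks):
            return cell
    return None

print(classify("Paper Folding Test"))     # ('intrinsic', 'dynamic')
print(classify("Embedded Figures Test"))  # ('intrinsic', 'static')
```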

Though Uttal et al.’s ( 2013 ) classification provides a helpful framework for investigating spatial ability and its links to mathematics (Young et al., 2018b ), it faces several challenges. Some critics posit that spatial tasks often require a combination of spatial subcomponents and cannot be easily mapped onto one domain in the framework (Okamoto et al., 2015 ). For example, a think-aloud task might ask students to describe a different viewpoint of an object. The student may imagine a rotated object (intrinsic-dynamic), imagine moving their body to the new viewpoint (extrinsic-dynamic), use a combination of strategies, or employ a non-spatial strategy such as logical deduction. Additionally, an experimental study by Mix et al. ( 2018 ) testing the 2 × 2 classification framework using confirmatory factor analysis on data from children in kindergarten, 3rd, and 6th grades failed to find evidence for the static-dynamic dimension at any age or for the overall 2 × 2 classification framework. This study demonstrates that there are limitations to this framework in practice. It suggests that other frameworks with less dimensionality may be more appropriate for understanding children's spatial abilities.

Even in light of these challenges, broad-factor taxonomies can benefit researchers who do not expect specific sub-factors of spatial ability to be relevant for their data or those controlling for spatial ability as part of an investigation of a related construct. Currently, no validated and reliable instruments have been explicitly designed to assess these broad-factor taxonomies. Instead, the scholars proposing these broad-factor taxonomies suggest mapping existing spatial tasks, which are usually tied to specific sub-factors of spatial ability, to the broader categories.

Unitary-Factor Structure

Many scholars understand spatial ability to be composed of a set of specific or broad factors. Neuroimaging studies have even provided preliminary evidence of a distinction between object-based abilities such as mental rotation and orientation skills (e.g., Kosslyn & Thompson, 2003). However, there is also empirical support for considering spatial ability as a unitary construct. Early studies (Spearman, 1927; Thurstone, 1938) identified spatial ability as one factor, separate from general intelligence, that mentally operates on spatial or visual images. Evidence for a unitary model of spatial ability comes from twin studies proposing a common genetic network that supports all spatial abilities (Malanchini et al., 2020; Rimfeld et al., 2017). When a battery of 10 gamified measures of spatial abilities was given to 1,367 twin pairs, results indicated that the tests assessed a single spatial ability factor and that the one-factor model of spatial ability fit better than the two-factor model, even when controlling for a common genetic factor (Rimfeld et al., 2017). In another study, Malanchini et al. (2020) administered 16 spatial tests clustered into three main sub-components: Visualization, Navigation, and Object Manipulation. They then conducted a series of confirmatory factor analyses to fit one-factor (Spatial Ability), two-factor (Spatial Orientation and Object Manipulation), and three-factor models (Visualization, Navigation, and Object Manipulation). The one-factor model gave the best model fit, even when controlling for general intelligence.
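The sketch below conveys the spirit of such model comparisons with a rough exploratory analogue: it fits 1-, 2-, and 3-factor models to a simulated battery of test scores and compares their held-out log-likelihoods. This is not the confirmatory factor analysis used in the cited twin studies, and all data are simulated.

```python
# Rough exploratory analogue of comparing factor structures (not the confirmatory
# factor analyses used by Rimfeld et al., 2017, or Malanchini et al., 2020).
# A simulated battery generated from one latent factor is fit with 1-, 2-, and
# 3-factor models; a higher held-out log-likelihood indicates a better fit.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_people, n_tests = 500, 9
latent = rng.normal(size=(n_people, 1))              # a single latent spatial factor
loadings = rng.uniform(0.5, 0.9, size=(1, n_tests))
battery = latent @ loadings + rng.normal(scale=0.6, size=(n_people, n_tests))

for k in (1, 2, 3):
    model = FactorAnalysis(n_components=k)
    mean_ll = cross_val_score(model, battery).mean()  # mean held-out log-likelihood
    print(f"{k}-factor model: mean CV log-likelihood = {mean_ll:.2f}")
```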

A unitary structure is beneficial for researchers interested in questions about general associations between mathematics and spatial ability or for those using spatial ability as a moderator in their analyses. However, to date, no valid and reliable instruments have been created to fit the unitary taxonomy, that is, single batteries whose items sample across the range of spatial sub-components. Instead, researchers who discuss spatial ability as a unitary construct often choose one or multiple well-known spatial measures based on a particular sub-factor of spatial ability (e.g., Boonen et al., 2013; Burte et al., 2017). This issue motivates the need for an evidence-based, theory-grounded task selection procedure as well as the need to develop a unitary spatial cognition measure. In the absence of a single spatial cognition measure designed to assess spatial ability from a unitary perspective, researchers will need to think critically about selecting measures and analytic frameworks for their studies to cover a range of spatial ability sub-factors and address the limitations of such decisions.

This section reviewed ways spatial abilities have been historically defined and subdivided, with a focus on three of the most widely reported taxonomies: specific-factor structure, broad-factor structure, and unitary structure. The specific-factor structure taxonomy includes subcomponents, such as spatial orientation and rotational and non-rotational spatial visualization, that primarily arise from factor-analytic methods such as exploratory factor analysis. However, discrepancies in factor analytic techniques and test variations led to divergent nomenclature and factorial frameworks. Nonetheless, a few well-supported dissociations in spatial skills arose from these methods, such as the distinction between spatial orientation/perspective-taking and spatial visualization. The broad-factor structures taxonomy dissociates spatial abilities based on theoretically motivated categories, such as large-scale and small-scale spatial abilities. While these classifications may be helpful for investigating the links between spatial abilities and mathematics, there is currently little empirical evidence to support using these frameworks in practice. The unitary structure taxonomy is based on factor-analytic evidence for a single, overarching spatial ability factor that is separate from general intelligence. Despite the potential advantages of simplicity, there are currently no valid and reliable instruments for measuring a single spatial factor, so unitary scores must be derived from instruments that measure specific factors or be composited across multiple instruments. An additional complexity of directly applying existing measures to mathematics education research is that mathematical task performance often involves the use of a variety of spatial and non-spatial skills.

Choosing Spatial Tasks in Mathematics Education Research

The context of mathematical reasoning and learning often leads to scenarios where the choice of spatial sub-components influences interpretations. Given the complex nature of spatial ability and the reliance on exploratory rather than confirmatory analyses, there is a need for dissociation approaches with clearer theoretical foundations. Due to the absence of comprehensive spatial cognition measures that address the possible broad-factor and unitary structure of spatial ability, researchers often resort to well-established spatial measures focusing on specific sub-factors, necessitating critical consideration in task selection and analytical frameworks. Thus, there is a need for evidence-based, theory-grounded task selection procedures to help address the current limitations of spatial ability research as it relates to mathematics education.

With so many spatial ability taxonomies to choose from, education researchers must carefully select tasks and surveys that match their stated research goals and theoretical frameworks, the spatial ability skills of interest, and the populations under investigation. As mentioned, mathematics education researchers often select spatial tasks based on practical motivations, such as access or familiarity, rather than theoretical ones. These decisions can be complicated by the vast number of spatial tasks, with little guidance for which ones best align with the various spatial taxonomies. In recent years, there has been a concerted effort by groups such as the Spatial Intelligence and Learning Center (spatiallearning.org) to collect and organize a variety of spatial measures in one place. However, there is still work to be done to create a list of spatial instruments that researchers can easily navigate. To help guide researchers with these decisions, we have compiled a list of spatial instruments referenced in this paper and matched them with their associated spatial sub-components and intended populations (Table 1). These instruments primarily consist of psychometric tests initially designed to determine suitability for occupations, such as military roles, before being adapted for use with university and high school students (Hegarty & Waller, 2005). As such, the majority of instruments are intended to test specific spatial sub-components derived from factor-analytic methods and were created by psychologists for use in controlled laboratory-based studies rather than in classroom contexts (Atit et al., 2020; Lowrie et al., 2020). Therefore, we have organized Table 1 by the specific spatial sub-components described in the “Specific-Factor Structures” section that overlap with skills found in mathematics curricula as proposed by Ramful and colleagues (2017).

Comparing the instruments in these ways reveals several important gaps that must be addressed in order to measure spatial cognition, and its relation to mathematics, across the lifespan. In particular, this analysis reveals an over-representation of certain spatial sub-components, such as mental rotation and spatial visualization, which also map to quadrants of the 2 × 2 (intrinsic-extrinsic/static-dynamic) classification system described in the “Broad-Factor Structures” section. It shows a pressing need for more tasks explicitly designed for other broad sub-components, such as the extrinsic-static classification. It also reveals that the slate of available instruments is dominated by tasks that have only been tested on adults and includes few measures that test more than one subcomponent. These gaps have important implications for education and are taken up in the final section.

Due to the sheer number of spatial tasks, the observations that these tasks may not load consistently on distinct spatial ability factors and the lack of tasks that address broad and unitary factor structures, it is not possible in the scope of this review to discuss every task-factor relationship. As a practical alternative, we have grouped spatial ability tasks into three aggregated categories based on their specific-factor dissociations, as discussed in the previous section: Spatial orientation tasks, non-rotational spatial visualization tasks, and mental rotation tasks (for examples, see Fig.  2 ). We have chosen these three categories for two reasons: (1) there is empirical evidence linking these spatial sub-categories to mathematical thinking outcomes, and (2) these categories align with Ramful et al.’s ( 2017 ) three-factor framework, which is one of the only spatial frameworks that was designed with links to mathematical thinking in mind. We acknowledge that other scholars may continue to identify different aggregations of spatial reasoning tasks, including those used with mechanical reasoning and abstract reasoning tasks (e.g., Tversky, 2019 ; Wai et al., 2009 ). In our aggregated categories, mechanical reasoning tasks would align with either mental rotation or non-rotational tasks depending on the specific task demands. In contrast, abstract reasoning tasks would align most closely with non-rotational spatial visualization tasks.

As there are no universally accepted measures of spatial ability for each spatial factor, we have narrowed our discussion to include exemplars of validated, cognitive, paper-and-pencil spatial ability tasks. These tasks have been historically associated with various spatial ability factors rather than merely serving as measures of general intelligence or visuospatial working memory (Carroll, 1993) and are easily implemented and scored by educators and researchers without specialized software or statistical knowledge. Notably, this discussion of spatial ability tasks and instruments excludes self-report questionnaires such as the Navigational Strategy Questionnaire (Zhong & Kozhevnikov, 2016) and the Santa Barbara Sense of Direction Scale (Hegarty et al., 2002); navigation simulations such as the Virtual SILC Test of Navigation (Weisberg et al., 2014) and SOIVET-Maze (da Costa et al., 2018); and tasks that involve physical manipulation such as the Test of Spatial Ability (Verdine et al., 2014). As such, we were unable to find any published, validated instruments for large-scale spatial orientation, a sub-factor of spatial orientation, that meet our inclusion criteria.

Additionally, we would like to highlight one instrument that does not fit into the categories presented in the following sections but may be of use to researchers. The Spatial Reasoning Instrument (SRI; Ramful et al., 2017 ) is a multiple-choice test that consists of three spatial subscales (spatial orientation, spatial visualization, and mental rotation). Notably, the questions that measure spatial visualization are specifically designed not to require mental rotation or spatial orientation. Unlike previously mentioned instruments, the SRI is not a speed test, though students are given a total time limit. This instrument targets middle school students and was designed to align more closely with students’ mathematical curricular experiences rather than a traditional psychological orientation. Mathematical connections in the SRI include visualizing lines of symmetry, using two-dimensional nets to answer questions about corresponding three-dimensional shapes, and reflecting objects.

In the next sections, we detail the types of tasks and instruments commonly used to measure spatial orientation, non-rotational spatial visualization, and mental rotation. Ultimately, these help form a guide for navigating and selecting among the various instruments for assessing spatial skills in relation to mathematical reasoning.

Spatial Orientation Tasks

Much like spatial ability more generally, spatial orientation skills fit into the broad distinctions of large-scale (e.g., wayfinding, navigation, and scaling abilities) and small-scale (e.g., perspective-taking and directional sense) skills, with small-scale spatial orientation skills being shown to be correlated with larger-scale spatial orientation skills (Hegarty & Waller, 2004 ; Hegarty et al., 2002 ). Aspects of mathematical thinking that may involve spatial orientation include scaling, reading maps and graphs, identifying orthogonal views of objects, and determining position and location. Although few empirical studies have attempted to determine statistical associations between spatial orientation and mathematics, spatial orientation has been correlated with some forms of scholastic mathematical reasoning. One area of inquiry showed associations between spatial orientation and early arithmetic and number line estimation (Cornu et al., 2017 ; Zhang & Lin, 2015 ). In another, spatial orientation skills were statistically associated with problem-solving strategies and flexible strategy use during high school-level geometric and non-geometric tasks (Tartre, 1990 ). Studies of disoriented children as young as three years old show that they reorient themselves based on the Euclidean geometric properties of distance and direction, which may contribute to children's developing abstract geometric intuitions (Izard et al., 2011 ; Lee et al., 2012 ; Newcombe et al., 2009 ).

Historically, the Guilford-Zimmerman (GZ) Spatial Orientation Test (1948) was used to measure spatial orientation. Critics have shown that this test may be too complicated and confusing for participants (Kyritsis & Gulliver, 2009) and that the task involves both spatial orientation and spatial visualization (Lohman, 1979; Schultz, 1991). To address these problems, Kozhevnikov and Hegarty (2001) developed the Object Perspective Taking Test, which was later revised into the Object Perspective/Spatial Orientation Test (see Fig. 2A; Hegarty & Waller, 2004). Test takers are prevented from physically moving the test booklet, and all items involve an imagined perspective change of at least 90°. Unlike previous instruments, results from the Object Perspective/Spatial Orientation Test showed a dissociation between spatial orientation and spatial visualization factors (though they were highly correlated) and correlated with self-reported judgments of large-scale spatial cognition. A similar instrument, the Perspective Taking Test for Children, has been developed for younger children (Frick et al., 2014a, 2014b). Additionally, simpler versions of these tasks that ask participants to match an object to one that has been drawn from an alternative point of view have also been used, such as those in the Spatial Reasoning Instrument (Ramful et al., 2017).

Non-Rotational Spatial Visualization Tasks

With differing definitions of spatial visualization, measures of this spatial ability sub-component often include tasks that evaluate other spatial ability skills, such as cross-sectioning tasks (e.g., the Mental Cutting Test; CEEB, 1939, and the Santa Barbara Solids Test; Cohen & Hegarty, 2012), that may require elements of spatial orientation or mental rotation. Though these tasks may be relevant for mathematical thinking, this section focuses on tasks that do not overtly require mental rotation. Non-rotational spatial visualization may be involved in several aspects of mathematical thinking, including reflections and visualizing symmetry (Ramful et al., 2015), visual-spatial geometry (Hawes et al., 2017; Lowrie et al., 2019), symbolic comparison (Hawes et al., 2017), and imagining problem spaces (Fennema & Tartre, 1985). A recent study by Lowrie and Logan (2023) posits that developing students’ non-rotational spatial visualization abilities may be related to better mathematics scores through improvements in students’ generalized mathematical reasoning skills and spatial working memory.

The three tests for non-rotational spatial visualization discussed here come from the Kit of Factor-Referenced Cognitive Tests developed by Educational Testing Service (Ekstrom et al., 1976). These instruments were developed for research on cognitive factors in adult populations. The first instrument is the Paper Folding Test (PFT), one of the most commonly used tests for measuring spatial visualization (see Fig. 2C). In this test, participants view diagrams of a square sheet of paper being folded and then punched with a hole. They are asked to select the picture that correctly shows the resulting holes after the paper is unfolded. Though this task assumes participants imagine unfolding the paper without the need to rotate, studies have shown that problem attributes (e.g., number and type of folds and fold occlusions) impact PFT accuracy and strategy use (Burte et al., 2019a).

The second instrument is the Form Board Test. Participants are shown an outline of a complete geometric figure with a row of five shaded pieces. The task is to decide which of the shaded pieces will make the complete figure when put together. During the task, participants are told that the pieces can be turned but not flipped and can sketch how they may fit together.

The third instrument, the Surface Development Test, asks participants to match the sides of a net of a figure to the sides of a drawing of a three-dimensional figure. As with the PFT, strategy use may also impact accuracy on these two measures. This led to the development of the similar Make-A-Dice test (Burte et al., 2019b), which relies on the number of squares in a row and consecutive folding in different directions, rather than simply increasing the number of folds, to increase difficulty. Additionally, none of these three instruments were explicitly designed to test non-rotational spatial visualization but rather a broader definition of spatial visualization that includes mental rotation. Thus, it is possible that some participants’ strategies may include mental rotation or spatial orientation.

Other common types of spatial visualization tasks include embedded figures adapted from the Gottschaldt Figures Test (Gottschaldt, 1926 ). These tasks measure spatial perception, field-independence, and the ability to disembed shapes from a background, which may be a necessary problem-solving skill (Witkin et al., 1977 ). One instrument, the Embedded Figures Test, originally consisted of 24 trials during which a participant is presented with a complex figure, then a simple figure, and then shown the complex figure again with instructions to locate the simple figure within it (Witkin, 1950 ). Others have used Witkin’s ( 1950 ) stimuli as a basis to develop various embedded figures tests, including the Children’s Embedded Figures Test (Karp & Konstadt, 1963 ) and the Group Embedded Figure Test (Oltman et al., 1971 ).

Mental Rotation Tasks

Mental rotation can be broadly defined as a cognitive operation in which a mental image is formed and rotated in space. Though mental rotation skills are often subsumed under spatial visualization or spatial relations sub-components, they can be treated as a separate skill from spatial orientation and spatial visualization (Linn & Peterson, 1985 ; Shepard & Metzler, 1971 ). As many definitions of general spatial ability include a “rotation” aspect, several studies have investigated the links between mental rotation and mathematics. For young children, cross-sectional studies have shown mixed results. In some studies, significant correlations were found between mental rotation and both calculation and arithmetic skills (Bates et al., 2021 ; Cheng & Mix, 2014 ; Gunderson et al., 2012 ; Hawes et al., 2015 ). Conversely, Carr et al. ( 2008 ) found no significant associations between mental rotation and standardized mathematics performances in similar populations. In middle school-aged children (11–13 years), mental rotation skill was positively associated with geometry knowledge (Battista, 1990 ; Casey et al., 1999 ) and problem-solving (Delgado & Prieto, 2004 ; Hegarty & Kozhevnikov, 1999 ). Studies of high school students and adults have indicated that mental rotation is associated with increased accuracy on mental arithmetic problems (Geary et al., 2000 ; Kyttälä & Lehto, 2008 ; Reuhkala, 2001 ).

Behavioral and imaging evidence suggests that mental rotation tasks invoke visuospatial representations that correspond to object rotation in the physical world (Carpenter et al., 1999 ; Shepard & Metzler, 1971 ). This process develops from 3 to 5 years of age with large individual differences (Estes, 1998 ) and shows varying performance across individuals irrespective of other intelligence measures (Borst et al., 2011 ). Several studies have also demonstrated significant gender differences, with males typically outperforming females (e.g., Voyer et al., 1995 ). However, this gap may be decreasing across generations (Richardson, 1994 ), suggesting it is due at least in part to sociocultural factors such as educational experiences rather than exclusively based on genetic factors. Historically, three-dimensional mental rotation ability has fallen under the spatial visualization skill, while two-dimensional mental rotation occasionally falls under a separate spatial relations skill (e.g., Carroll, 1993 ; Lohman, 1979 ). Thus, mental rotation measures often include either three-dimensional or two-dimensional stimuli rather than a mixture of both.

Three-Dimensional Mental Rotation Tasks

In one of the earliest studies of three-dimensional mental rotation, Shepard and Metzler (1971) presented participants with pictures of pairs of objects and asked them to answer as quickly as possible whether the two objects were the same or different, regardless of differences in orientation. The stimuli showed objects that were either the same, differing in orientation, or mirror images of those objects. This design provided a useful control, since the mirror images had comparable visual complexity but could not be rotated to match the original. Results revealed a positive linear association between reaction time and the angular difference in the orientation of the objects. In combination with post-task interviews, this finding illustrated that, in order to make an accurate comparison, participants first imagined one object rotated into the same orientation as the target object, and that they perceived the two-dimensional pictures as three-dimensional objects in order to complete the imagined rotation. Additional studies have replicated these findings over the last four decades (Uttal et al., 2013). Shepard and Metzler-type stimuli have been used in many different instruments, including the Purdue Spatial Visualization Test: Rotations (Guay, 1976) and the Mental Rotation Test (see Fig. 2B; Vandenberg & Kuse, 1978). However, recent studies have shown that some items on the Mental Rotation Test can be solved using analytic strategies, such as a global-shape strategy to eliminate answer choices, rather than relying on mental rotation (Hegarty, 2018).
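The signature finding, a roughly linear increase in response time with angular disparity, can be summarized with a simple slope estimate. The sketch below fits such a line to simulated response times; the numbers are illustrative and are not the original Shepard and Metzler data.

```python
# Illustration of the linear reaction-time/angular-disparity relationship using
# simulated data (not Shepard and Metzler's 1971 measurements).
import numpy as np

rng = np.random.default_rng(3)
angles = np.arange(0, 181, 20)                                       # angular disparity (degrees)
rt = 1.0 + 0.018 * angles + rng.normal(scale=0.1, size=angles.size)  # simulated response times (s)

slope, intercept = np.polyfit(angles, rt, deg=1)
print(f"estimated slope: {slope * 1000:.1f} ms per degree of rotation")
print(f"estimated intercept: {intercept:.2f} s")
```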

One common critique of the Shepard and Metzler-type stimuli is that the classic cube configurations’ complex design is not appropriate for younger populations, leading to few mental rotation studies on this population. Studies have shown that children under 5 years of age have severe difficulties solving standard mental rotation tasks, with children between the ages of 5 and 9 solving such tasks at chance (Frick et al., 2014a , 2014b ). To combat this, studies with pre-school age children often lower task demands by reducing the number of answer choices, removing mirrored and incongruent stimuli, and using exclusively images of two-dimensional objects (Krüger, 2018 ; Krüger et al., 2013 ). In response, some scholars have begun developing appropriate three-dimensional mental rotation instruments for elementary school students, such as the Rotated Colour Cube Test (Lütke & Lange-Küttner, 2015 ). In this instrument, participants are presented with a stimulus consisting of a single cube with different colored sides and are asked to identify an identical cube that has been rotated. However, studies on both three-dimensional and two-dimensional rotation have found that cognitive load depends more on the stimulus angle orientation than the object’s complexity or dimensionality (Cooper, 1975 ; Jolicoeur et al., 1985 ).

Two-Dimensional Mental Rotation Tasks

To measure two-dimensional mental rotation, tasks for all populations feature similar stimuli. These tasks, often referred to as spatial relations or speeded rotation tasks, typically involve single-step mental rotation (Carroll, 1993 ). One common instrument for two-dimensional mental rotation is the Card Rotation Test (Ekstrom et al., 1976 ). This instrument presents an initial figure and asks participants to select the rotated but not reflected items. Importantly, these tasks can be modified for various populations (Krüger et al., 2013 ). One standardized instrument for pre-school and early primary school-age children, the Picture Rotation Test, demonstrates how easily these two-dimensional stimuli can be modified (Quaiser-Pohl, 2003 ).

This section aimed to provide an updated review of the various ways in which spatial ability has been historically measured and to critically evaluate these assessment tools. As the majority of these measures were designed based on the specific-factor structures outlined in the “Specific-Factor Structures” section, we chose to organize our discussion by grouping assessments based on the specific factor they were intended to capture. We also decided to focus on the spatial sub-components that have been suggested to be linked to mathematical thinking, including spatial orientation, spatial visualization, and mental rotation. Ultimately, we found that although there are many spatial measures that researchers can choose from, there is a need for additional measures that address gaps in population and include more than one spatial subcomponent. Additionally, there is a critical need for spatial assessments that can be used in contexts outside of controlled laboratory and one-on-one settings to more deeply understand the complex connections between spatial ability and mathematics education in more authentic learning settings such as classrooms.

A Guiding Framework

We contend that the decisions made regarding the choice of spatial subdivisions, analytical frameworks, and spatial measures will impact both the results and interpretations of findings from studies on the nature of mathematical reasoning in controlled studies. One way these decisions affect the outcomes of a study is that they may change the specific spatial ability sub-components that reliably predict mathematics performance. This is because some factors of spatial ability have been shown to be more strongly associated with certain sub-domains of mathematics than with others (Delgado & Prieto, 2004; Schenck & Nathan, 2020), but it is unclear how generalizable these findings are, as students may use a variety of spatial and non-spatial strategies. Additionally, some models and classifications of spatial ability, such as Uttal et al.’s (2013) classification and the unitary model of spatial ability, currently do not have validated instruments. Thus, selecting a spatial skills instrument poorly suited to the mathematical skills or population under investigation may fail to show a suitable predictive value. This can lead to an overall weaker model of the dependent variable and to the conclusion that spatial reasoning is not relevant to the domain of mathematical reasoning of interest. These limitations are often not discussed in the publications we reviewed and, perhaps, may not even be realized by many education researchers. However, as noted, it can be difficult for education researchers to select an appropriate framework among the many alternatives that match their specific domains of study.

Due to the various spatial taxonomies and the assumptions and design decisions needed for choosing the accompanying analytical frameworks, we assert that it is beneficial for most education researchers who do not identify as spatial cognition researchers to avoid attempts to create a specific, universal taxonomy of spatial ability. How the evidence about the ways individuals interact with spatial information is organized into subcomponents often reflects a particular scholar’s perspective on spatial ability; researchers should therefore let their study goals inform their choices of spatial taxonomies, analytical frameworks, and measures.

To help education researchers who may be unfamiliar with the vast literature on spatial ability navigate this large and potentially confusing landscape in service of their study objectives, we have designed a guide in the form of a flowchart that enables them to match spatial taxonomies to analytic frameworks (Fig.  4 ). Our guide, understandably, does not include every possible spatial taxonomy or study aim. Instead, it offers a helpful starting point for incorporating spatial skills into an investigation of mathematical reasoning by focusing on how researchers can draw on specific factor taxonomies and current validated measures of spatial ability in controlled studies.

Figure 4. Flowchart for selecting the appropriate spatial taxonomy and analytic framework for one’s investigation.

The first question in the flowchart, Q1, asks researchers to decide how spatial ability will be used in their investigation: either as a covariate or as the main variable of interest. If spatial ability is a covariate, the most appropriate taxonomy would be the unitary model to capture the many possible ways participants could utilize spatial thinking during mathematical reasoning. However, as mentioned in the above section, this model has no validated measure. Thus, we recommend researchers select several measures that cover a variety of specific spatial subcomponents, or a measure designed to test more than one spatial subcomponent, such as the Spatial Reasoning Instrument (Ramful et al., 2017 ). We would then suggest using an analytical framework with a single composite score across multiple tasks to combat issues such as task-related biases (Moreu & Weibels, 2021 ).
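A minimal sketch of this composite-score recommendation is shown below: each spatial measure is standardized and the standardized scores are averaged into a single covariate. The column names and raw scores are hypothetical placeholders for whichever tasks a study actually administers.

```python
# Hypothetical scores from three spatial tasks for four participants; the task
# names are placeholders, not a prescribed battery.
import pandas as pd

scores = pd.DataFrame({
    "mental_rotation": [12, 18, 9, 15],
    "paper_folding": [7, 10, 5, 9],
    "perspective_taking": [22, 30, 18, 26],
})

z = (scores - scores.mean()) / scores.std(ddof=0)   # z-score each task
scores["spatial_composite"] = z.mean(axis=1)        # single composite covariate
print(scores)
```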

If spatial ability is the main variable of interest, answering Q2 in the flowchart directs the researcher to consider whether they are interested in investigating the role of spatial ability as a general concept or as one or more specific sub-components. For example, if the researcher is interested in understanding links between overall spatial ability and a specific mathematics domain, we recommend using the unitary model of spatial ability and following the recommendations outlined above for using spatial ability as a covariate. Casey et al. (2015), for instance, found that children’s early spatial skills were long-term predictors of later math reasoning skills. In their analysis, the authors identified two key spatial skills, mental rotation and spatial visualization, that previous work by Mix and Cheng (2012) found to be highly associated with mathematics performance. To measure these constructs, Casey and colleagues administered three spatial tasks to participants: a spatial visualization measure, a 2-D mental rotation measure, and a 3-D mental rotation measure. The authors were interested in the impact of overall spatial ability on analytical math reasoning and in partially replicating previous findings, rather than in whether these two factors separately impacted mathematics performance. Thus, they combined these three spatial scores into a single composite score.

For investigations centering around one or more specific spatial sub-components, we recommend novice researchers use sub-components from specific factor taxonomies (e.g., mental rotation, spatial visualization, spatial orientation). Specific-factor taxonomies are used in a variety of lines of research, including mathematics education. Studies exploring the association between spatial ability and mathematics often focus on a particular sub-factor. For example, some studies have focused on the association between mental rotation and numerical representations (e.g., Rutherford et al., 2018 ; Thompson et al., 2013 ), while others have focused on spatial orientation and mathematical problem solving (e.g., Tartre, 1990 ). Similarly, scholars investigating spatial training efficacy often use spatial tasks based on a single factor or a set of factors as pre- and post-test measures and in intervention designs (e.g., Bruce & Hawes, 2015 ; Gilligan et al., 2019 ; Lowrie et al., 2019 ; Mix et al., 2021 ).

The third question in the flowchart, Q3, asks researchers to select whether their investigation will focus on one particular spatial sub-component or several to provide guidance for analytic frames. In new studies, the sub-components of interest may be selected based on prior studies for confirmatory analyses or on a theoretical basis for exploratory studies. If only a single spatial sub-component is of interest to the investigation, we suggest an analytic approach that includes a single score from one task. If multiple spatial sub-components are relevant to the investigation, we recommend using a single score from one task for each sub-component of interest.
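For the multiple sub-component case, a minimal analytic sketch is one standardized score per sub-component entered as a separate predictor of the mathematics outcome, as below. The data, variable names, and effect sizes are simulated for illustration.

```python
# One score per spatial sub-component entered as separate predictors of a math
# outcome; simulated data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 150
df = pd.DataFrame({
    "mental_rotation": rng.normal(size=n),
    "spatial_orientation": rng.normal(size=n),
    "spatial_visualization": rng.normal(size=n),
})
df["geometry_score"] = (
    0.4 * df["mental_rotation"] + 0.2 * df["spatial_visualization"] + rng.normal(size=n)
)

model = smf.ols(
    "geometry_score ~ mental_rotation + spatial_orientation + spatial_visualization", df
).fit()
print(model.params.round(2))      # one coefficient per sub-component
print(model.conf_int().round(2))  # 95% confidence intervals
```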

Task selection, the final step in the flow chart, will depend on practical considerations such as which spatial sub-components are relevant, population age, and time constraints. Though thousands of spatial tasks are available, the tasks listed in Table  1 , which also identifies corresponding broad and specific spatial sub-components, can be a useful starting point for designing a study. We recommend that researchers acknowledge that students may solve mathematical problems in various spatial and non-spatial ways and, thus, their results may not generalize to all students or all mathematical tasks and domains. We also remind researchers that the majority of the measures described in the “Choosing Spatial Tasks in Mathematics Education Research” section are designed as psychometric instruments for use in tightly controlled studies. The guidance above is not intended for studies that involve investigating spatial ability in classrooms or other in situ contexts.

Conclusions and Lingering Questions

Researchers largely agree that spatial ability is essential for mathematical reasoning and success in STEM fields (National Research Council, 2006). The two goals of this review were, first, to summarize the relevant spatial ability literature, including the various factor structures and measures, in an attempt to more clearly understand the elements of spatial ability that may relate most closely to mathematics education; and second, to provide recommendations for education researchers and practitioners for selecting appropriate theoretical taxonomies, analytical frameworks, and specific instruments for measuring, interpreting, and improving spatial reasoning for mathematics education. Our review exposed a wide array of spatial taxonomies and analytical frameworks developed by spatial ability scholars for understanding and measuring spatial reasoning. However, this review shows no convergence on a definition of spatial ability or agreement regarding its sub-components, no universally accepted set of standardized measures to assess spatial skills, and, most importantly, no consensus on the nature of the link between mathematical reasoning and spatial ability. Thus, this review exposes several challenges to understanding the relationship between spatial skills and performance in mathematics. One is that the connections between mathematical reasoning and spatial skills, while supported, are complicated by the divergent descriptions of spatial taxonomies and analytical frameworks, the sheer volume of spatial measures one encounters as a potential consumer, and the lack of a universally accepted means of mapping spatial measures to mathematical reasoning processes. These challenges should be seen as the responsibility of the educational psychology research communities. The lack of resolution on these issues impedes progress in designing effective spatial skills interventions for improving mathematics thinking and learning based on clear causal principles, selecting appropriate metrics for documenting change, and analyzing and interpreting student outcome data.

Our primary contribution in the context of these challenges is a guide, well situated in the research literature, for navigating and selecting among the major spatial taxonomies and validated instruments for assessing spatial skills in mathematics education research and instructional design. To anchor our recommendations, we first summarized much of the history and major findings of spatial ability research as it relates to education (“Selecting a Spatial Taxonomy” section). In this summary, we identified three major types of spatial taxonomies (specific, broad, and unitary) and provided recommendations for associated analytical frameworks. We then discussed the plethora of spatial ability tasks that investigators and educators must navigate (“Choosing Spatial Tasks in Mathematics Education Research” section). To make the landscape more tractable, we divided these tasks into three categories shown to be relevant to mathematics education, namely spatial orientation, mental rotation, and non-rotational spatial visualization (see Table 1), and mapped these tasks to their intended populations. We acknowledge that researchers and educators often select spatial tasks and analytic frameworks for practical rather than theoretical reasons, which can undermine the validity of their research and assessment efforts. To provide educators with a stronger evidence-based foundation, we then offered a guiding framework (“A Guiding Framework” section) in the form of a flowchart to assist investigators in selecting appropriate spatial taxonomies and analytic frameworks as a precursor to making well-suited task selections for their particular needs. A guide of this sort offers a practical way forward for utilizing existing resources to understand and improve education through the lens of spatial abilities. We focused on providing a tool that guides the decision-making of investigators seeking to relate spatial skills to mathematics performance, grounded in existing resources, empirical findings, and the currently dominant theoretical frameworks.

Several limitations remain, however. One is that the vast majority of published studies administered spatial skills assessments using paper-and-pencil instruments. In recent years, testing has moved online, posing new challenges regarding the applicability and reliability of past instruments and findings; updating these assessments will take time as research using online instruments and new immersive technologies catches up (see Uttal et al., 2024 , for discussion). A second limitation is that studies investigating the associations between spatial ability and mathematics have often focused on a particular spatial ability or a particular mathematical skill, leaving many unknowns about which spatial abilities map to which areas of mathematics performance. This limitation can only be addressed through careful, systematic, large-scale studies. A third limitation is that many of the instruments in the published literature were developed for and tested on adult populations, which greatly limits their applicability to school-aged populations. Again, this can only be addressed through more research that extends this work across a broader developmental range. Fourth, many spatial ability instruments reported in the literature include tasks that can be solved using various strategies, some of which are non-spatial, calling into question their construct validity, that is, whether they measure the specific spatial skills they claim to measure. For example, some items on assessments such as the Paper Folding Test may be solved effectively through non-spatial methods such as logic or counting rather than through spatial visualization. There is therefore a pressing need for process-level data, such as immediate retrospective reports and eye tracking (cf. Just & Carpenter, 1985 ), to accurately describe the psychological processes involved and how they vary by age, individual differences, and assessment context. A fifth limitation relates to the 2 × 2 classification system that crosses intrinsic versus extrinsic information with static versus dynamic tasks (Newcombe & Shipley, 2015 ; Uttal et al., 2013 ). In mapping existing tasks to this system, it became clear that more extrinsic-static tasks and instruments need to be developed; we found no studies investigating the link between mathematical reasoning and extrinsic-static spatial abilities, perhaps because of the lack of appropriate assessments. The sixth, and arguably greatest, limitation is that scholarly research on spatial ability still lacks a convergent taxonomy and offers no clear picture of which aspects of spatial thinking are most relevant to STEM thinking and learning. More research is needed to test additional models of spatial ability, such as the unitary model, and to expand spatial ability assessment tools to capture the complex and multifaceted nature of spatial thinking needed in mathematics education environments.

The objectives of this paper were to provide researchers with an updated review of spatial ability and its measures and to offer a guide that helps researchers new to spatial cognition navigate this vast literature when making study design decisions. Overall, research aimed at understanding the structure of spatial ability more deeply is at a crossroads. Spatial ability is demonstrably relevant to the development of mathematical reasoning and is a malleable factor that may have a profound impact on the design of future educational interventions and assessments. Synthesizing these lines of research highlighted several areas that remain unexplored and in need of future research and development. STEM education and workforce development remain essential for scientific and economic advancement, and spatial skills are an important aspect of success and retention in technical fields. It is therefore critical to further understand the connections between spatial and mathematical abilities, both to advance the science of learning and to inform the design of future curricular interventions that transfer to science, technology, engineering, and mathematics.

Amalric, M., & Dehaene, S. (2016). Origins of the brain networks for advanced mathematics in expert mathematicians. Proceedings of the National Academy of Sciences, 113 (18), 4909–4917. https://doi.org/10.1073/pnas.1603205113

Atit, K., Power, J. R., Pigott, T., Lee, J., Geer, E. A., Uttal, D. H., Ganley, C. M., & Sorby, S. A. (2022). Examining the relations between spatial skills and mathematical performance: A meta-analysis. Psychonomic Bulletin & Review, 29 , 699–720. https://doi.org/10.3758/s13423-021-02012-w

Atit, K., Uttal, D. H., & Stieff, M. (2020). Situating space: Using a discipline-focused lens to examine spatial thinking skills. Cognitive Research: Principle and Implications, 5 (19), 1–16.

Bates, K. E., Gilligan-Lee, K., & Farran, E. K. (2021). Reimagining mathematics: The role of mental imagery in explaining mathematical calculation skills in childhood. Mind, Brain, and Education, 15 (2), 189–198.

Battista, M. T. (1990). Spatial visualization and gender differences in high school geometry. Journal for Research in Mathematics Education, 21 (1), 47–60. https://doi.org/10.2307/749456

Battista, M. T. (2007). The development of geometric and spatial thinking. In F. K. Lester (Ed.), Second handbook of research on mathematics teaching and learning (pp. 843–908). Information Age Publishing.

Battista, M.T., Frazee, L. M., & Winer, M. L. (2018). Analyzing the relation between spatial and geometric reasoning for elementary and middle school students. In K. S. Mix & M. T. Battista (Eds.), Visualizing Mathematics. Research in Mathematics Education (pp. 195 – 228). Springer, Cham. https://doi.org/10.1007/978-3-319-98767-5_10

Boonen, A. J. H., van der Schoot, M., van Wesel, F., de Vries, M. H., & Jolles, J. (2013). What underlies successful word problem solving? A path analysis in sixth grade students. Contemporary Educational Psychology, 38 , 271–279. https://doi.org/10.1016/j.cedpsych.2013.05.001

Borst, G., Ganis, G., Thompson, W. L., & Kosslyn, S. M. (2011). Representations in mental imagery and working memory: Evidence from different types of visual masks. Memory & Cognition, 40 (2), 204–217. https://doi.org/10.3758/s13421-011-0143-7

Bruce, C. D., & Hawes, Z. (2015). The role of 2D and 3D mental rotation in mathematics for young children: What is it? Why does it matter? And what can we do about it? ZDM, 47 (3), 331–343. https://doi.org/10.1007/s11858-014-0637-4

Buckley, J., Seery, N., & Canty, D. (2018). A heuristic framework of spatial ability: A review and synthesis of spatial factor literature to support its translation into STEM education. Educational Psychology Review, 30 , 947–972. https://doi.org/10.1007/s10648-018-9432-z

Buckley, J., Seery, N., & Canty, D. (2019). Investigating the use of spatial reasoning strategies in geometric problem solving. International Journal of Technology and Design Education, 29 , 341–362. https://doi.org/10.1007/s10798-018-9446-3

Burte, H., Gardony, A. L., Hutton, A., & Taylor, H. A. (2017). Think3d!: Improving mathematical learning through embodied spatial training. Cognitive Research: Principles and Implications, 2 (13), 1–8. https://doi.org/10.1186/s41235-017-0052-9

Burte, H., Gardony, A. L., Hutton, A., & Taylor, H. A. (2019). Knowing when to fold ‘em: Problem attributes and strategy differences in the Paper Folding Test. Personality and Individual Differences, 146 , 171–181.

Burte, H., Gardony, A. L., Hutton, A., & Taylor, H. A. (2019). Make-A-Dice test: Assessing the intersection of mathematical and spatial thinking. Behavior Research Methods, 51 (2), 602–638. https://doi.org/10.3758/s13428-018-01192-z

Carpenter, P. A., Just, M. A., Keller, T. A., Eddy, W., & Thulborn, K. (1999). Graded functional activation in the visuospatial system with the amount of task demand. Journal of Cognitive Neuroscience, 11 (1), 9–24. https://doi.org/10.1162/089892999563210

Carr, M., Steiner, H. H., Kyser, B., & Biddlecomb, B. (2008). A comparison of predictors of early emerging gender differences in mathematics competency. Learning and Individual Differences, 18 (1), 61–75. https://doi.org/10.1016/j.lindif.2007.04.005

Carroll, J. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press . https://doi.org/10.1017/CBO9780511571312

Case, R., Okamoto, Y., Griffin, S., McKeough, A., Bleiker, C., Henderson, B., Stephenson, K. M., Siegler, R. S., & Keating, D. P. (1996). The role of central conceptual structures in the development of children’s thought. Monographs of the Society for Research in Child Development, 61 (1/2), i–295. https://doi.org/10.2307/1166077

Casey, B. M., Nuttall, R. L., & Pezaris, E. (1999). Evidence in support of a model that predicts how biological and environmental factors interact to influence spatial skills. Developmental Psychology, 35 (5), 1237–1247. https://doi.org/10.1037/0012-1649.35.5.1237

Casey, B. M., Pezaris, E., Fineman, B., Pollock, A., Demers, L., & Dearing, E. (2015). A longitudinal analysis of early spatial skills compared to arithmetic and verbal skills as predictors of fifth-grade girls’ math reasoning. Learning and Individual Differences, 40 , 90–100. https://doi.org/10.1016/j.lindif.2015.03.028

Cheng, Y. L., & Mix, K. S. (2014). Spatial training improves children’s mathematics ability. Journal of Cognition and Development, 15 (1), 2–11. https://doi.org/10.1080/15248372.2012.725186

Cohen, C. A., & Hegarty, M. (2012). Inferring cross sections of 3D objects: A new spatial thinking test. Learning and Individual Differences, 22 (6), 868–874. https://doi.org/10.1177/1541931213601788

College Entrance Examination Board (CEEB). (1939).  Special aptitude test in spatial relations . College Entrance Examination Board, New York.

Cooper, L. A. (1975). Mental rotation of random two-dimensional shapes. Cognitive Psychology, 7 (1), 20–43. https://doi.org/10.1016/0010-0285(75)90003-1

Cooper, L. A., & Mumaw, R. J. (1985). Spatial aptitude. In R. F. Dillman (Ed.), Individual differences in cognition (2nd ed., pp. 67–94). Academic Press.

Cornu, V., Hornung, C., Schiltz, C., & Martin, R. (2017). How do different aspects of spatial skills relate to early arithmetic and number line estimation? Journal of Numerical Cognition, 3 (2), 309–343. https://doi.org/10.5964/jnc.v3i2.36

da Costa, R., Pompeu, J. E., de Mello, D. D., Moretto, E., Rodrigues, F. Z., Dos Santos, M. D., Nitrini, R., Morganti, F., & Brucki, S. (2018). Two new virtual reality tasks for the assessment of spatial orientation: Preliminary results of tolerability, sense of presence and usability. Dementia & Neuropsychologia, 12 (2), 196–204. https://doi.org/10.1590/1980-57642018dn12-020013

Davis, B. (2015). Spatial reasoning in the early years: Principles, assertions, and speculations . Routledge.

Delgado, A. R., & Prieto, G. (2004). Cognitive mediators and sex-related differences in mathematics. Intelligence, 32 , 25–32. https://doi.org/10.1016/S0160-2896(03)00061-8

D’Oliveira, T. (2004). Dynamic spatial ability: An exploratory analysis and a confirmatory study. The International Journal of Aviation Psychology, 14 (1), 19–38. https://doi.org/10.1207/s15327108ijap1401_2

Ekstrom, R. B., French, J. W., & Harmon, H. H. (1976). Manual for kit of factor-referenced cognitive tests . Educational Testing Service.

Estes, D. (1998). Young children’s awareness of their mental activity: The case of mental rotation. Child Development, 69 (5), 1345–1360. https://doi.org/10.2307/1132270

Fennema, E., & Tartre, L. A. (1985). The use of spatial visualization in mathematics by girls and boys. Journal for Research in Mathematics Education, 16 (3), 184–206.

Ferguson, A. M., Maloney, E. A., Fugelsang, J., & Risko, E. F. (2015). On the relation between math and spatial ability: The case of math anxiety. Learning and Individual Differences, 39 , 1–12.

Frick, A., Hanson, M. A., & Newcombe, N. S. (2014). Development of mental rotation in 3- to 5-year-old children. Cognitive Development, 28 (4), 386–399. https://doi.org/10.1016/j.cogdev.2013.06.002

Frick, A., Möhring, W., & Newcombe, N. S. (2014). Picturing perspectives: Development of perspective-taking abilities in 4- to 8-year-olds. Frontiers in Psychology, 5 , 386. https://doi.org/10.3389/fpsyg.2014.00386

Gallistel, C. R., & Gelman, R. (1992). Preverbal and verbal counting and computation. Cognition, 44 (1/2), 43–74. https://doi.org/10.1016/0010-0277(92)90050-R

Galton, F. (1879). Generic Images. The Nineteenth Century, 6 (1), 157–169.

Gaughran, W. (2002). Cognitive modelling for engineers. In 2002 American Society for Engineering Education annual conference and exposition . Montréal, Canada: American Society for Engineering Education.

Geary, D. C., Saults, S. J., Liu, F., & Hoard, M. K. (2000). Sex differences in spatial cognition, computational fluency, and arithmetical reasoning. Journal of Experimental Child Psychology, 77 (4), 337–353. https://doi.org/10.1006/jecp.2000.2594

Gilligan, K. A., Thomas, M. S. C., & Farran, E. K. (2019). First demonstration of effective spatial training for near transfer to spatial performance and far transfer to a range of mathematics skills at 8 years. Developmental Science, 23 (4), e12909. https://doi.org/10.1111/desc.12909

Gottschaldt, K. (1926). Über den Einfluss der Erfahrung auf die Wahrnehmung von Figuren. Psychologische Forschung, 8 , 261–318. https://doi.org/10.1007/BF02411523

Guay, R. B. (1976). Purdue spatial visualization test . Purdue Research Foundation.

Guilford, J. P., & Zimmerman, W. S. (1948). The Guilford-Zimmerman aptitude survey. Journal of Applied Psychology, 32 (1), 24–34. https://doi.org/10.1037/h0063610

Gunderson, E. A., Ramirez, G., Beilock, S. L., & Levine, S. C. (2012). The relation between spatial skill and early number knowledge: The role of the linear number line. Developmental Psychology, 48 (5), 1229–1241. https://doi.org/10.1037/a0027433

Hambrick, D. Z., Libarkin, J. C., Petcovic, H. L., Baker, K. M., Elkins, J., Callahan, C. N., Turner, S. P., Rench, T. A., & LaDue, N. D. (2012). A test of the circumvention-of-limits hypothesis in scientific problem solving: The case of geological bedrock mapping. Journal of Experimental Psychology: General, 141 (3), 397–403. https://doi.org/10.1037/a0025927

Hannafin, R. D., Truxaw, M. P., Vermillion, J. R., & Liu, Y. (2008). Effects of spatial ability and instructional program on geometry achievement. Journal of Educational Research, 101 (3), 148–157. https://doi.org/10.3200/JOER.101.3.148-157

Harris, J., Hirsh-Pasek, K., & Newcombe, N. S. (2013). A new twist on studying the development of dynamic spatial transformations: Mental paper folding in young children. Mind, Brain, and Education, 7 (1), 49–55. https://doi.org/10.1111/mbe.12007

Hawes, Z., & Ansari, D. (2020). What explains the relationship between spatial and mathematical skills? A review of the evidence from brain and behavior. Psychonomic Bulletin & Review, 27 , 465–482. https://doi.org/10.3758/s13423-019-01694-7

Hawes, Z. C. K., Moss, J., Caswell, B., Naqvi, S., & MacKinnon, S. (2017). Enhancing children’s spatial and numerical skills through a dynamic spatial approach to early geometry instruction: Effects of a 32-week intervention. Cognition and Instruction, 35 , 236–264.

Hawes, Z., Moss, J., Caswell, B., & Poliszczuk, D. (2015). Effects of mental rotation training on children’s spatial and mathematics performance: A randomized controlled study. Trends in Neuroscience and Education, 4 (3), 60–68. https://doi.org/10.1016/j.tine.2015.05.001

Hegarty, M. (2018). Ability and sex differences in spatial thinking: What does the mental rotation test really measure? Psychonomic Bulletin & Review, 25 , 1212–1219.

Hegarty, M., & Kozhevnikov, M. (1999). Types of visual-spatial representations and mathematical problem solving. Journal of Educational Psychology, 91 (4), 684–689. https://doi.org/10.1037/0022-0663.91.4.684

Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial layout learning. Intelligence, 34 (2), 151–176. https://doi.org/10.1016/j.intell.2005.09.005

Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K., & Subbiah, I. (2002). Development of a self-report measure of environmental spatial ability. Intelligence, 30 , 425–447. https://doi.org/10.1016/S0160-2896(02)00116-2

Hegarty, M., & Waller, D. (2004). A dissociation between mental rotation and perspective-taking spatial abilities. Intelligence, 32 (2), 175–191. https://doi.org/10.1016/j.intell.2003.12.001

Hegarty, M. & Waller, D. A. (2005). Individual differences in spatial abilities. In P. Shah, & A. Miyake (Eds.), The Cambridge handbook of visuospatial thinking (pp. 121–169). Cambridge University Press. https://doi.org/10.1017/CBO9780511610448.005

Hubbard, E. M., Piazza, M., Pinel, P., & Dehaene, S. (2005). Interactions between number and space in parietal cortex. Nature Reviews Neuroscience, 6 (6), 435–448. https://doi.org/10.1038/nrn1684

Huttenlocher, J., & Presson, C. C. (1979). The coding and transformation of spatial information. Cognitive Psychology, 11 (3), 375–394. https://doi.org/10.1016/0010-0285(79)90017-3

Inhelder, B. & Piaget, J. (1958). The growth of logical thinking: From childhood to adolescence. Basic Books. https://doi.org/10.1037/10034-000

Izard, V., Pica, P., Spelke, E. S., & Dehaene, S. (2011). Flexible intuitions of Euclidean geometry in an Amazonian indigene group. Proceedings of the National Academy of Sciences, 108 (24), 9782–9787. https://doi.org/10.1073/pnas.1016686108

Jansen, P. (2009). The dissociation of small-and large-scale spatial abilities in school-age children. Perceptual and Motor Skills, 109 (2), 357–361.

Jolicoeur, P., Regehr, S., Smith, L. B. J. P., & Smith, G. N. (1985). Mental rotation of representations of two-dimensional and three-dimensional objects. Canadian Journal of Psychology, 39 (1), 100–129.

Just, M. A., & Carpenter, P. A. (1985). Cognitive coordinate systems: Accounts of mental rotation and individual differences in spatial ability. Psychological Review, 92 (2), 137.

Karp, S. A., & Konstadt, N. L. (1963). Manual for the children’s embedded figures test . Cognitive Tests.

Kosslyn, S. M., Koenig, O., Barrett, A., Cave, C. B., Tang, J., & Gabrieli, J. D. E. (1989). Evidence for two types of spatial representations: Hemispheric specialization for categorical and coordinate relations. Journal of Experimental Psychology: Human Perception and Performance, 15 (4), 723–735. https://doi.org/10.1037/0096-1523.15.4.723

Kosslyn, S. M., & Thompson, W. L. (2003). When is early visual cortex activated during visual mental imagery? Psychological Bulletin, 129 (5), 723–746. https://doi.org/10.1037/0033-2909.129.5.723

Kozhevnikov, M., & Hegarty, M. (2001). A dissociation between object manipulation spatial ability and spatial orientation ability. Memory & Cognition, 29 (5), 745–756. https://doi.org/10.3758/BF03200477

Kozhevnikov, M., & Thornton, R. (2006). Real-time data display, spatial visualization ability, and learning force and motion concepts. Journal of Science Education and Technology, 15 (1), 111–132. https://doi.org/10.1007/s10956-006-0361-0

Krüger, M. (2018). Mental rotation and the human body: Children’s inflexible use of embodiment mirrors that of adults. British Journal of Developmental Psychology, 23 (3), 418–437. https://doi.org/10.1111/bjdp.12228

Krüger, M., Kaiser, M., Mahler, K., Bartels, W., & Krist, H. (2013). Analogue mental transformations in 3-year-olds: Introducing a new mental rotation paradigm suitable for young children. Infant and Child Development, 23 , 123–138. https://doi.org/10.1002/icd.1815

Kyritsis, M., & Gulliver, S.R. (2009). Gilford Zimmerman orientation survey: A validation. 2009 7th International Conference on Information, Communications and Signal Processing (ICICS) (pp. 1-4).

Kyttälä, M., & Lehto, J. E. (2008). Some factors underlying mathematical performance: The role of visuospatial working memory and non-verbal intelligence. European Journal of Psychology of Education, 23 (1), 77–94. https://doi.org/10.1007/BF03173141

Laski, E. V., Casey, B. M., Yu, Q., Dulaney, A., Heyman, M., & Dearing, E. (2013). Spatial skills as a predictor of first grade girls’ use of higher level arithmetic strategies. Learning and Individual Differences, 23 , 123–130. https://doi.org/10.1016/j.lindif.2012.08.001

Lee, S. A., Sovrano, V. A., & Spelke, E. S. (2012). Navigation as a source of geometric knowledge: Young children’s use of length, angle, distance, and direction in a reorientation task. Cognition, 123 (1), 144–161. https://doi.org/10.1016/j.cognition.2011.12.015

Linn, M., & Petersen, A. C. (1985). Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development, 56 (6), 1479–1498. https://doi.org/10.2307/1130467

Lohman, D. F. (1979).  Spatial ability: A review and re-analysis of the correlational literature (Technical Report No. 8). Stanford, CA: Aptitudes Research Project, School of Education, Stanford University.

Lohman, D. F. (1988). Spatial abilities as traits, processes, and knowledge. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (pp. 181–248). Lawrence Erlbaum.

Lombardi, C. M., Casey, B. M., Pezaris, E., Shadmehr, M., & Jong, M. (2019). Longitudinal analysis of associations between 3-d mental rotation and mathematics reasoning skills during middle school: Across and within genders. Journal of Cognition and Development, 20 (4), 487–509. https://doi.org/10.1080/15248372.2019.1614592

Lowrie, T., & Logan, T. (2023). Spatial visualization supports students’ math: Mechanisms for spatial transfer. Journal of Intelligence, 11 (6), 127. https://doi.org/10.3390/jintelligence11060127

Lowrie, T., Logan, T., & Hegarty, M. (2019). The influence of spatial visualization training on students’ spatial reasoning and mathematics performance. Journal of Cognition and Development, 20 (5), 729–751. https://doi.org/10.1080/15248372.2019.1653298

Lowrie, T., Resnick, I., Harris, D., & Logan, T. (2020). In search of the mechanisms that enable transfer from spatial reasoning to mathematics understanding. Mathematics Educational Research Journal, 32 , 175–188. https://doi.org/10.1007/s13394-020-00336-9

Lütke, N., & Lange-Küttner, C. (2015). Keeping it in three dimensions: Measuring the development of mental rotation in children with the Rotated Colour Cube Test (RCCT). International Journal of Developmental Science, 9 (2), 95–114. https://doi.org/10.3233/DEV-14154

Maeda, Y., & Yoon, S. Y. (2013). A meta-analysis on gender differences in mental rotation ability measured by the Purdue spatial visualization tests: Visualization of rotations (PSVT: R). Educational Psychology Review, 25 (1), 69–94. https://doi.org/10.1007/s10648-012-9215-x

Malanchini, M., Rimfeld, K., Shakeshaft, N. G., McMilan, A., Schofield, K. L., Rodic, M., Rossi, V., Kovas, Y., Dale, P. S., Tucker-Drob, E. M., & Plomin, R. (2020). Evidence for a unitary structure of spatial cognition beyond general intelligence. Science of Learning, 5 (1), 9. https://doi.org/10.1038/s41539-020-0067-8

McGee, M. (1979). Human spatial abilities: Psychometric studies and environmental, genetic, hormonal, and neurological influences. Psychological Bulletin, 86 (5), 889–918. https://doi.org/10.1037/0033-2909.86.5.889

Michael, W. B., Guilford, J. P., Fruchter, B., & Zimmerman, W. S. (1957). The description of spatial-visualization abilities. Educational and Psychological Measurement, 17 , 185–199. https://doi.org/10.1177/001316445701700202

Mix, K. S., & Cheng, Y. L. (2012). The relation between space and math: Developmental and educational implications. Advances in Child Development and Behavior, 42 , 197–243. https://doi.org/10.1016/b978-0-12-394388-0.00006-x

Mix, K. S., Hambrick, D. Z., Satyam, V. R., Burgoyne, A. P., & Levine, S. C. (2018). The latent structure of spatial skill: A test of the 2 x 2 typology. Cognition, 180 , 268–278. https://doi.org/10.1016/j.cognition.2018.07.012

Mix, K. S., Levine, S. C., Cheng, Y. L., Stockton, J. D., & Bower, C. (2021). Effects of spatial training on mathematics in first and sixth grade children. Journal of Educational Psychology, 113 (2), 304–314. https://doi.org/10.1037/edu0000494

Mix, K. S., Levine, S. C., Cheng, Y. L., Young, C., Hambrick, D. Z., Ping, R., & Konstantopoulos, S. (2016). Separate but correlated: The latent structure of space and mathematics across development. Journal of Experimental Psychology: General, 145 (9), 1206–1227. https://doi.org/10.1037/xge0000182

Moreau, D., & Wiebels, K. (2021). Assessing change in intervention research: The benefits of composite outcomes. Advances in Methods and Practice in Psychological Science, 4 (1), 2515245920931930. https://doi.org/10.1177/2515245920931930

Newcombe, N. S. (2013). Seeing relationships: Using spatial thinking to teach science, mathematics, and social studies. American Educator, 37 , 26–40.

Newcombe, N. S., Ratliff, K. R., Shallcross, W. L., & Twyman, A. D. (2009). Young children’s use of features to reorient is more than just associative: Further evidence against a modular view of spatial processing. Developmental Science, 13 (1), 213–220. https://doi.org/10.1111/j.1467-7687.2009.00877.x

Newcombe, N. S. & Shipley, T. F. (2015). Thinking about spatial thinking: new typology, new assignments. In J. S. Gero (Ed), Studying Visual and Spatial Reasoning for Design Creativity (pp. 179–192). Springer, Dordrecht . https://doi.org/10.1007/978-94-017-9297-4_10

Okamoto, Y., Kotsopoulos, D., McGarvey, L., & Hallowell, D. (2015). The development of spatial reasoning in young children. In B. Davis (Ed.), Spatial reasoning in the early years: Principles, assertions, and speculations (pp. 25–38). Routledge.

Oltman, P. K., Raskin, E., & Witkin, H. A. (1971). Group embedded figure test . Consulting Psychologists Press.

Oostermeijer, M., Boonen, A. J. H., & Jolles, J. (2014). The relation between children’s constructive play activities, spatial ability, and mathematical work problem-solving performance: A mediation analysis in sixth-grade students. Frontiers in Psychology, 5 , 782. https://doi.org/10.3389/fpsyg.2014.00782

Piaget, J., & Inhelder, B. (1956). The child’s conception of space. London: Routledge & Kegan Paul.

Potter, L. E. (1995). Small-scale versus large-scale spatial reasoning: Educational implications for children who are visually impaired. Journal of Visual Impairment & Blindness, 89 (2), 142–152.

National Research Council. (2006). Learning to think spatially . Washington, DC: The National Academies Press. https://doi.org/10.17226/11019

Quaiser-Pohl, C. (2003). The mental cutting test “Schnitte” and the Picture Rotation Test – Two new measures to assess spatial ability. International Journal of Testing, 3 (3), 219–231. https://doi.org/10.1207/S15327574IJT0303_2

Ramful, A., Ho, S. Y., & Lowrie, T. (2015). Visual and analytical strategies in spatial visualisation: Perspectives from bilateral symmetry and reflection. Mathematics Education Research Journal, 27 , 443–470. https://doi.org/10.1007/s13394-015-0144-0

Ramful, A., Lowrie, T., & Logan, T. (2017). Measurement of spatial ability: Construction and validation of the Spatial Reasoning Instrument for Middle School Students. Journal of Psychoeducational Assessment, 35 (7), 709–727. https://doi.org/10.1177/0734282916659207

Reuhkala, M. (2001). Mathematical skills in ninth-graders: Relationship with visuo-spatial abilities and working memory. Educational Psychology, 21 (4), 387–399. https://doi.org/10.1080/01443410120090786

Richardson, J. T. E. (1994). Gender differences in mental rotation. Perceptual and Motor Skills, 78 (2), 435–448. https://doi.org/10.2466/pms.1994.78.2.435

Rimfeld, K., Shakeshaft, N. G., Malanchini, M., Rodic, M., Selzam, S., Schofield, K., Dale, P. S., Kovas, Y., & Plomin, R. (2017). Phenotypic and genetic evidence for a unifactorial structure of spatial abilities. Proceedings of the National Academy of Sciences, 114 (10), 2777–2782. https://doi.org/10.1073/pnas.1607883114

Rutherford, T., Karamarkovich, S. M., & Lee, D. S. (2018). Is the spatial/math connection unique? Associations between mental rotation and elementary mathematics and English achievement. Learning and Individual Differences, 62 , 180–199. https://doi.org/10.1016/j.lindif.2018.01.014

Schenck, K. E., Kim, D., Swart, M. I., & Nathan, M. J. (2022, April). With no universal consensus, spatial system perspective affects model fitting and interpretation for mathematics . [Paper Presentation]. American Educational Research Association Conference, San Diego, CA.

Schenck, K. E. & Nathan, M. J. (2020, April). Connecting mathematics, spatial ability, and spatial anxiety . [Paper Presentation]. American Educational Research Association Conference, San Francisco, CA.

Schneider, J., & McGrew, K. (2012). The Cattell–Horn–Carroll model of intelligence. In D. Flanagan & P. Harrison (Eds .), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 99–144). Guilford Press. https://doi.org/10.1177/0734282916651360

Schultz, K. (1991). The contribution of solution strategy to spatial performance. Canadian Journal of Psychology, 45 (4), 474–491. https://doi.org/10.1037/h0084301

Shah, P., & Miyake, A. (1996). The separability of working memory resources for spatial thinking and language processing: An individual differences approach. Journal of Experimental Psychology: General, 125 (1), 4–27. https://doi.org/10.1037/0096-3445.125.1.4

Shea, D. L., Lubinski, D., & Benbow, C. P. (2001). Importance of assessing spatial ability in intellectually talented young adolescents: A 20-year longitudinal study. Journal of Educational Psychology, 93 (3), 604–614. https://doi.org/10.1037/0022-0663.93.3.604

Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171 (3972), 701–703. https://doi.org/10.1126/science.171.3972.701

Sorby, S. (1999). Developing 3-D spatial visualization skills. Engineering Design Graphics Journal, 63 (2), 21–32.

Sorby, S. A. (2009). Educational research in developing 3-D spatial skills for engineering students. International Journal of Science Education, 31 (3), 459–480. https://doi.org/10.1080/09500690802595839

Sorby, S., Casey, B., Veurink, N., & Dulaney, A. (2013). The role of spatial training in improving spatial and calculus performance in engineering students. Learning and Individual Differences, 26 , 20–29. https://doi.org/10.1016/j.lindif.2013.03.010

Sorby, S. A., & Panther, G. C. (2020). Is the key to better PISA math scores improving spatial skills? Mathematics Education Research Journal, 32 (2), 213–233. https://doi.org/10.1007/s13394-020-00328-9

Spearman, C. (1927). The abilities of man, their nature and measurement . Macmillan.

Stieff, M. (2007). Mental rotation and diagrammatic reasoning in science. Learning and Instruction, 17 (2), 219–234.

Stieff, M., & Uttal, D. (2015). How much can spatial training improve STEM achievement? Educational Psychology Review, 27 (4), 607–615.

Tam, Y. P., Wong, T. T., & Chan, W. W. L. (2019). The relation between spatial skills and mathematical abilities: The mediating role of mental number line representation. Contemporary Educational Psychology, 56 , 14–24. https://doi.org/10.1016/j.cedpsych.2018.10.007

Tartre, L. A. (1990). Spatial orientation skill and mathematical problem solving. Journal for Research in Mathematics Education, 21 (3), 216–229.

Thompson, J. M., Nurek, H. C., Moeller, K., & Kardosh, R. C. (2013). The link between mental rotation ability and basic numerical representations. Acta Psychologica, 144 , 324–331. https://doi.org/10.1016/j.actpsy.2013.05.009

Thurstone, L. L. (1938). Primary mental abilities . Chicago University Press.

Thurstone, L. L. (1950). Some primary abilities in visual thinking. Proceedings of the American Philosophical Society, 94 (6), 517–521.

Tufte, E. R. (2001). The visual display of quantitative information (2nd ed.). Graphics Press.

Tversky, B. (2019). Transforming thought. In Mind in motion (pp. 85 – 106). Basic Books.

Uttal, D. H., & Cohen, C. A. (2012). Spatial thinking and STEM education: When, why, and how? In B. Ross (Ed.), Psychology of learning and motivation (Vol. 57, pp. 147–181). Academic Press. https://doi.org/10.1016/B978-0-12-394293-7.00004-2

Uttal, D. H., McKee, K., Simms, N., Hegarty, M., & Newcombe, N. S. (2024). How can we best assess spatial skills? Practical and conceptual challenges. Journal of Intelligence, 12 (1), 8.

Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N. S. (2013). The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin, 139 (2), 352–402. https://doi.org/10.1037/a0028446

Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotations, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47 , 599–604. https://doi.org/10.2466/pms.1978.47.2.599

Verdine, B. N., Golinkoff, R. M., Hirsh-Pasek, K., Newcombe, N. S., Filipowicz, A. T., & Chang, A. (2014). Deconstructing building blocks: Preschoolers’ spatial assembly performance relates to early mathematical skills. Child Development, 85 (3), 1062–1076. https://doi.org/10.1111/cdev.12165

Voyer, D., Voyer, S., & Bryden, M. P. (1995). Magnitude of sex differences in spatial abilities: A meta-analysis and consideration of critical variables. Psychological Bulletin, 117 (2), 250–270. https://doi.org/10.1037/0033-2909.117.2.250

Wai, J., Lubinski, D., & Benbow, C. P. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101 (4), 817. https://doi.org/10.1037/a0016127

Walsh, V. (2003). A theory of magnitude: Common cortical metrics of time, space, and quantity. Trends in Cognitive Sciences, 7 , 483–488. https://doi.org/10.1016/j.tics.2003.09.002

Wang, L., Cohen, A. S., & Carr, M. (2014). Spatial ability at two scales of representation: A meta-analysis. Learning and Individual Differences, 36 , 140–144. https://doi.org/10.1016/j.lindif.2014.10.006

Weisberg, S. M., Schinazi, V. R., Newcombe, N. S., Shipley, T. F., & Epstein, R. A. (2014). Variations in cognitive maps: Understanding individual differences in navigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40 (3), 669–682. https://doi.org/10.1037/a0035261

Witkin, H. A. (1950). Individual differences in ease of perception of embedded figures. Journal of Personality, 19 , 1–15. https://doi.org/10.1111/j.1467-6494.1950.tb01084.x

Witkin, H. A., Moore, C. A., Goodenough, D. R., & Cox, P. W. (1977). Field-dependent and field-independent cognitive styles and their educational implications. Review of Educational Research, 47 (1), 1–64. https://doi.org/10.3102/00346543047001001

Witkin, H. A., Oltman, P. K., Raskin, E., & Karp, S. A. (1971). A manual for the embedded figures test. Consulting Psychologist Press . https://doi.org/10.1007/978-0-387-79948-3_1361

Wolfgang, C., Stannard, L., & Jones, I. (2003). Advanced constructional play with LEGOs among preschoolers as a predictor of later school achievement in mathematics. Early Child Development and Care, 173 (5), 467–475. https://doi.org/10.1080/0300443032000088212

Wraga, M., Creem, S. H., & Proffitt, D. R. (2000). Updating displays after imagined object and viewer rotations. Journal of Experimental Psychology: Learning, Memory, & Cognition, 26 (1), 151–168. https://doi.org/10.1037/0278-7393.26.1.151

Xie, F., Zhang, L., Chen, X., & Xin, Z. (2020). Is spatial ability related to mathematical ability: A meta-analysis. Educational Psychology Review, 32 , 113–155.

Yilmaz, B. (2009). On the development and measurement of spatial ability. International Electronic Journal of Elementary Education, 1 (2), 1–14.

Young, C. J., Levine, S. C., & Mix, K. S. (2018). The connection between spatial and mathematical ability across development. Frontiers in Psychology, 9 , 755. https://doi.org/10.3389/fpsyg.2018.00755

Young, C. J., Levine, S. C., & Mix, K. S. (2018). What processes underlie the relation between spatial skill and mathematics? In K. S. Mix & M. T. Battista (Eds.), Visualizing Mathematics. Research in Mathematics Education (pp. 195–228). Springer, Cham. https://doi.org/10.1007/978-3-319-98767-5_10

Zacks, J. M., Mires, J., Tversky, B., & Hazeltine, E. (2000). Mental spatial transformations of objects and perspective. Spatial Cognition and Computation, 2 , 315–332. https://doi.org/10.1023/A:1015584100204

Zacks, J., Rypma, B., Gabrieli, J. D. E., Tversky, B., & Glover, G. H. (1999). Imagined transformations of bodies: An fMRI investigation. Neuropsychologia, 37 (9), 1029–1040. https://doi.org/10.1016/S0028-3932(99)00012-3

Zhang, X., & Lin, D. (2015). Pathways to arithmetic: The role of visual-spatial and language skills in written arithmetic, arithmetic word problems, and nonsymbolic arithmetic. Contemporary Educational Psychology, 41 , 188–197. https://doi.org/10.1016/j.cedpsych.2015.01.005

Zhong, J. Y., & Kozhevnikov, M. (2016). Relating allocentric and egocentric survey-based representations to the self-reported use of a navigation strategy of egocentric spatial updating. Journal of Environmental Psychology, 46 , 154–175. https://doi.org/10.1016/j.jenvp.2016.04.0

Acknowledgements

We thank Dr. Martha W. Alibali and Dr. Edward M. Hubbard for their extensive and valuable feedback as part of the preliminary examination committee. We also thank Dr. Michael I. Swart for his feedback on initial and subsequent drafts and for lending graphic design knowledge. Last, we thank Dr. Mary Hegarty and Monica Mendoza for their feedback on the initial drafts of this work.

Open access funding provided by SCELC, Statewide California Electronic Library Consortium. No funding, grants, or other support was received for the submitted work.

Author information

Authors and Affiliations

Department of Teaching and Learning, Southern Methodist University, Dallas, TX, USA

Kelsey E. Schenck

Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI, USA

Mitchell J. Nathan

Contributions

This work is primarily based on Kelsey E. Schenck’s preliminary examination thesis working under her advisor, Mitchell J. Nathan. The idea for the article was Kelsey E. Schenck’s under the guidance of Mitchell J. Nathan. Kelsey E. Schenck performed the initial literature search and drafted the initial work. Mitchell J. Nathan critically revised and contributed to subsequent drafts.

Corresponding author

Correspondence to Kelsey E. Schenck.

Ethics declarations

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Schenck, K.E., Nathan, M.J. Navigating Spatial Ability for Mathematics Education: a Review and Roadmap. Educ Psychol Rev 36 , 90 (2024). https://doi.org/10.1007/s10648-024-09935-5

Accepted: 05 August 2024

Published: 17 August 2024

DOI: https://doi.org/10.1007/s10648-024-09935-5

Keywords: Spatial Ability, Mathematics Education, Student Cognition, Factor Analysis