Multi-thread Programming: Basic Concepts


Has your system admin ever told you to upgrade your computer to boost its performance? If you’re no computer guru, you probably wondered why. Whether you are involved in proofreading, article writing, or internet marketing, you need access to a computer whose performance is more than just capable. In the past, a microprocessor’s clock speed was the main indicator of a computer’s capability. This has changed over the past few years, for several reasons:

  • A notable increase in energy requirements.

  • Difficulty dissipating the heat emitted by a microprocessor.

  • Current leakage caused by the manufacture of increasingly smaller transistors.

Owing to the needs of current applications, manufacturers are instead increasing the number of cores in a computer’s microprocessor (or CPU). What does this mean? A single core used to be sufficient to meet the computer's needs, but manufacturers now fit several cores that work hand in hand. As a result, we see phrases such as 'dual-core' or 'quad-core' to describe CPUs. Each core runs at a slower pace, but they work side by side.

The idea of 4 or 8 cores in your computer sounds great, right? Unfortunately, having 8 cores does not necessarily mean performance will improve 8 times. Certain tasks cannot be sped up simply by running them on different cores simultaneously, because parts of them must be executed in series.

Read through this article to understand how to make the most out of multithreaded programming.

Principal Concepts  

We will look at the two principal methods of maximizing a computer’s performance using multithreaded programming. They are:

  • Task Parallelism, which executes multiple instruction streams on multiple data streams (also known as MIMD).

  • Data Parallelism, which executes a single instruction on multiple data, or SIMD.

Task Parallelism

Task parallelism involves distributing the execution of unrelated tasks over different cores so that the tasks run side by side.

For instance, you might be editing a document while you surf the web and stream music on your computer. If your computer has two or three cores, chances are that different cores are performing the different tasks.

In theory, your computer can run as many tasks simultaneously as it has cores. This is quite useful. To manage all the assigned tasks, however, your operating system has to work hard behind the scenes: it continually re-examines the priority of each task, oversees the time allotted to each one, handles interruptions to their execution, and supervises competing access to the system’s limited resources. In doing so, the operating system governs how much of the potential gain from parallelism is actually realized. It acts as your system admin.

It is also possible to parallelize tasks within a single application. For instance, one thread can handle graphics and user-interface interactions while another saves the document being worked on, all within the same application. These roles are often completely independent of one another. In some instances, however, the two actions must be coordinated, which requires synchronization. Synchronization helps avoid a race condition: in this scenario, for example, the application must not be allowed to shut down before the save has completed, or data could be lost.
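A minimal sketch of this pattern using Python's `threading` module (the document name and save routine are made up for illustration): the main thread stays free for the user interface while a worker thread saves, and `join()` ensures the program does not shut down before the save completes.

```python
import threading
import time

saved = []  # records completed saves; a stand-in for a real write to disk

def save_document(name):
    # Hypothetical save routine: simulate a slow write, then record it.
    time.sleep(0.1)
    saved.append(name)

# Start the save on a background thread so the main (UI) thread stays responsive.
saver = threading.Thread(target=save_document, args=("report.txt",))
saver.start()

# ... the main thread keeps handling user-interface events here ...

# Before shutting down, wait for the save to finish to avoid data loss.
saver.join()
```

Without the final `join()`, the program could exit mid-save, which is exactly the data-loss scenario described above.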

Data Parallelism: Single Instruction, Multiple Data (SIMD)

This method involves distributing a data set over different cores so they can all perform the same calculation at the same time, each handling its own portion of the data. Below are a few examples.

Example 1:

Using a processor fitted with four cores, suppose you need to calculate the average performance of, say, 300 students. The students are subdivided into four equal groups of 75, and each of the four cores processes its 75 students in parallel. This can speed up the calculation by up to four times compared with a computer running on a single-core microprocessor.
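This splitting can be sketched with Python's `concurrent.futures` module; the student scores here are made-up data. Threads illustrate the structure of the split; for CPU-bound work in CPython, `ProcessPoolExecutor` would be used instead to get true multi-core parallelism.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def average_chunk(students):
    # Each worker computes the mean score of every student in its chunk.
    return [mean(scores) for scores in students]

# Made-up data: 300 students, each with three scores.
students = [[50 + i % 40, 60 + i % 30, 70 + i % 20] for i in range(300)]

# Split the class into four equal groups of 75, one group per worker.
chunks = [students[i:i + 75] for i in range(0, 300, 75)]

# Four workers average their groups side by side, then the results
# are stitched back together in the original order.
with ThreadPoolExecutor(max_workers=4) as pool:
    averages = [a for chunk in pool.map(average_chunk, chunks) for a in chunk]
```

The key property is that each student's average depends only on that student's own scores, so the four groups never need to communicate.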

That’s simple, right? Well, let’s dig a little deeper and see why it is not necessarily as easy as it sounds.

Example 2:

The second task requires you to determine the percentile ranking of each student. In simple terms, a percentile ranking is the percentage of scores that are equal to or below the individual student's score. This problem is more challenging, because we must have the values for all students before we can determine any single student's percentile ranking.
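The definition above can be written as a small function (my own illustration, not from the original article): count the scores at or below a given score, and express that as a percentage of the whole class.

```python
def percentile_rank(score, all_scores):
    # Percentage of scores that are equal to or below the given score.
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100.0 * at_or_below / len(all_scores)

scores = [55, 60, 60, 70, 85]
print(percentile_rank(60, scores))  # 3 of 5 scores are <= 60, i.e. 60.0
```

Note that `all_scores` must contain every student's score, which is precisely why this calculation cannot start until all the averages exist.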

The first example was easy to split up because calculating an average only requires the performance results of a single student.

In the second example, separating the tasks is not so simple, although algorithms exist to maximize the parallelism. A simple breakdown looks like this:

  • Determine the mean performance of each student, as in the first example.

  • Sort the average results using a sorting algorithm.

  • Distribute the percentile-ranking calculations into four groups (of 75 students each, in this case), making the following available to every group in read-only form:

      ◦ the mean performance of each student in the group's subset of 75;

      ◦ the complete ranked list of average performances for all 300 students.
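The steps above can be sketched as a pipeline in which each step acts as a barrier (a sketch with made-up data; in a real program, steps 1 and 3 would each be split across the four cores as described):

```python
import bisect
from statistics import mean

# Made-up data: 300 students, each with three scores.
students = [[50 + i % 40, 60 + i % 30, 70 + i % 20] for i in range(300)]

# Step 1: per-student averages (parallelizable: four chunks of 75).
averages = [mean(scores) for scores in students]

# Step 2: sort the averages. This is a serial step, and it acts as a
# barrier: no percentile work may begin until the full sorted list exists.
ranked = sorted(averages)

# Step 3: percentile ranks (parallelizable again: each core reads the
# shared sorted list without modifying it).
percentiles = [100.0 * bisect.bisect_right(ranked, a) / len(ranked)
               for a in averages]
```

Because step 3 only reads `ranked` and never modifies it, the four groups can share the one sorted list safely without further synchronization.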

As shown above, the calculation involves several steps that must execute in sequence, in a given order, even though each step may run in parallel internally. What does this mean? It means that if one core finishes calculating its averages before the others, it cannot proceed to the percentile calculation until all the other cores have finished. If it moved on using the results of just its own group, a concurrency bug known as a 'race condition' would occur: the results would differ from one run to the next, depending on the order in which the cores finished their first calculation.

The steps that cannot be split up, such as the sort, form the serial component of the algorithm. It is this serial component that limits the performance benefit of parallelism.

Multithreaded Programming Ceilings

As seen above, it may sound reasonable to state that if we multiply the number of cores at work, we speed up performance by the same factor. In reality, things are very different.

According to Amdahl's law, if 70% of an algorithm can be executed in parallel, the highest benefit derivable from distributing it among 8 cores is a speedup of about 2.58 times compared with one core. This shows the importance of rearranging the algorithm so as to minimize the section that cannot be parallelized.
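The 2.58 figure follows from Amdahl's law, S = 1 / ((1 - p) + p/n), where p is the parallel fraction and n the number of cores; a quick check of the arithmetic:

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part (1 - p) always runs on one core,
    # while the parallel part p is divided among all the cores.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

print(round(amdahl_speedup(0.70, 8), 2))     # about 2.58x with 8 cores
print(round(amdahl_speedup(0.70, 1000), 2))  # even with 1000 cores, about 3.33x
```

The second line shows the ceiling in action: with 70% parallelism, no number of cores can ever beat 1 / 0.30 ≈ 3.33 times.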

Amdahl’s law states that the acceleration gained by increasing the number of cores applied to a task has a limit. Beyond a certain point, adding cores no longer improves the process.


It is possible to make the most of multiple cores only if we know how the whole system works. It is not enough to have multiple cores in your computer; it is how the work is sequenced across them that provides results. If you need assistance with any computer task, you can find it here: we have a pool of qualified resources at your disposal. Be on the lookout for more articles from us highlighting the major challenges of multi-thread programming, along with information on how to maintain these applications.

Have any thoughts or ideas you would like to share? You are welcome to leave them in the comments section below.


Published 18 November 2017


Software Developer

Lucy is the Development & Programming Correspondent. She is currently based in Sydney.

Next article

What Is Technical Debt, And How Do We Control It?