A Guide to Parallel Execution in C# ASP.NET: Harness the Power of the Task Parallel Library (TPL)

Mohamed Hendawy
8 min read · Jul 24, 2023


Introduction:

As software developers, we are often faced with the challenge of optimizing the performance of our applications to take advantage of modern multi-core processors. One powerful tool at our disposal is the Task Parallel Library (TPL) in C#, introduced in .NET Framework 4.0 and carried forward into every later version of .NET. The TPL simplifies parallel programming by providing a higher-level abstraction for working with tasks, making it easier to execute asynchronous and parallel operations efficiently.

In this article, we will explore the key components of the Task Parallel Library and demonstrate how to achieve parallel execution in C# using various examples.

The Task Parallel Library (TPL):

The Task Parallel Library enables developers to implement parallel and asynchronous operations seamlessly. It revolves around the concept of tasks, which represent units of work that can be executed concurrently. The TPL abstracts away the complexity of thread management and provides built-in support for task scheduling, synchronization, and error handling.

Task and Task<TResult>:

The Task class represents an asynchronous operation that can run concurrently with other tasks. It is used for work that does not produce a result. On the other hand, the Task<TResult> class represents an asynchronous operation that returns a result of type TResult when it completes.

Example:

using System;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        // Task without a result
        Task taskWithoutResult = Task.Run(() => DoWork());

        // Task with a result
        Task<int> taskWithResult = Task.Run(() => CalculateResult());

        // Asynchronously wait for both tasks to complete
        await taskWithoutResult;
        int result = await taskWithResult;

        Console.WriteLine($"Result: {result}");
    }

    public static void DoWork()
    {
        // Simulate some time-consuming work
        Task.Delay(2000).Wait();
        Console.WriteLine("Task without result completed.");
    }

    public static int CalculateResult()
    {
        // Simulate some computation
        Task.Delay(1000).Wait();
        Console.WriteLine("Task with result completed.");
        return 42;
    }
}

Explanation:

  • In this example, both tasks (taskWithoutResult and taskWithResult) are started before either one is awaited, so they can execute concurrently; the async/await keywords let us wait for their completion without blocking the calling thread.
  • The Task.Run method is used to start the tasks, which queues them to the ThreadPool for execution.
  • The DoWork method simulates time-consuming work with a 2-second delay using Task.Delay(2000).
  • The CalculateResult method simulates some computation with a 1-second delay using Task.Delay(1000) and returns the result 42.
  • The await taskWithoutResult and await taskWithResult statements asynchronously wait for the tasks to complete, allowing other work to continue in the meantime.
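
A more idiomatic variant of this example makes the worker methods themselves asynchronous, so the delays do not block ThreadPool threads the way Task.Delay(...).Wait() does. The sketch below assumes hypothetical DoWorkAsync and CalculateResultAsync methods standing in for the originals:

public static async Task Main()
{
    // Start both operations; neither blocks a thread while it waits
    Task taskWithoutResult = DoWorkAsync();
    Task<int> taskWithResult = CalculateResultAsync();

    await taskWithoutResult;
    int result = await taskWithResult;

    Console.WriteLine($"Result: {result}");
}

public static async Task DoWorkAsync()
{
    // await Task.Delay releases the thread for the duration of the delay
    await Task.Delay(2000);
    Console.WriteLine("Task without result completed.");
}

public static async Task<int> CalculateResultAsync()
{
    await Task.Delay(1000);
    Console.WriteLine("Task with result completed.");
    return 42;
}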

Parallel Execution using Parallel.For and Parallel.ForEach:

The Parallel class provides methods to execute loop iterations and independent tasks in parallel, effectively distributing work across multiple threads.

Example:

using System;
using System.Threading.Tasks;

public class Program
{
    public static void Main()
    {
        // Parallel For loop
        Parallel.For(0, 10, i =>
        {
            Console.WriteLine($"Task {i} started.");
            Task.Delay(1000).Wait();
            Console.WriteLine($"Task {i} completed.");
        });

        // Parallel ForEach loop
        var data = new[] { "apple", "banana", "orange", "grape" };
        Parallel.ForEach(data, item =>
        {
            Console.WriteLine($"Processing {item} on thread {Environment.CurrentManagedThreadId}");
            Task.Delay(1000).Wait();
            Console.WriteLine($"Processed {item} on thread {Environment.CurrentManagedThreadId}");
        });
    }
}

Explanation:

  • In this example, we demonstrate how to utilize the Parallel.For and Parallel.ForEach methods to achieve parallel execution in loops.
  • The Parallel.For method executes the delegate action in parallel for the values from 0 to 9. Each task performs a time-consuming operation (1-second delay) and prints the corresponding messages.
  • The Parallel.ForEach method processes each item in the data array in parallel. Each task simulates a time-consuming operation (1-second delay) and prints the corresponding messages along with the managed thread ID obtained from Environment.CurrentManagedThreadId.
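
If you need to cap how many iterations run at once (for example, to avoid saturating a shared resource), both Parallel.For and Parallel.ForEach accept a ParallelOptions instance. A minimal sketch, reusing the data array from the example above:

var data = new[] { "apple", "banana", "orange", "grape" };

var options = new ParallelOptions
{
    // Allow at most two iterations to run concurrently
    MaxDegreeOfParallelism = 2
};

Parallel.ForEach(data, options, item =>
{
    Console.WriteLine($"Processing {item} on thread {Environment.CurrentManagedThreadId}");
    Task.Delay(1000).Wait();
});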

Is Parallel.ForEach always faster than a normal foreach?
No; it depends on the specific scenario and the nature of the work performed inside the loop. Let's explore the factors that influence the performance of each approach:

  1. Task Parallelism: Parallel.ForEach executes the iterations of the loop concurrently using multiple threads, taking advantage of multi-core processors. This can significantly improve performance when the tasks inside the loop are computationally intensive or I/O-bound, as it allows them to run concurrently.
  2. Overhead: However, Parallel.ForEach introduces some overhead due to thread management, data partitioning, and synchronization. This overhead might be noticeable for very lightweight or short-lived tasks, where the benefits of parallelism might not outweigh the added complexity.
  3. Thread Safety: If the loop body modifies shared data or has side effects, it is essential to ensure proper synchronization in both regular and parallel loops to avoid race conditions. Parallel execution might require additional effort to handle shared data safely.
  4. Task Granularity: The size and granularity of the tasks play a role. If the loop body contains very fine-grained tasks, the overhead of creating and managing threads might outweigh the performance gains. In such cases, a regular foreach loop might be more efficient.
  5. Load Balancing: The TPL attempts to balance the workload among threads, but if the tasks have different execution times, it can lead to load imbalance and affect performance.

In summary, Parallel.ForEach can be faster than a regular foreach loop when:

  • The tasks inside the loop are computationally intensive or I/O-bound.
  • The loop body can be parallelized without introducing excessive overhead.
  • The tasks are well-balanced in terms of execution time.

On the other hand, a regular foreach loop might be faster when:

  • The tasks are very lightweight or have minimal execution time.
  • The loop body has shared data and requires explicit synchronization (handling thread safety).
  • The tasks cannot be efficiently parallelized due to granularity issues.

In practice, it’s essential to measure and profile the performance of both approaches for a specific scenario to determine which one is more suitable. Factors such as the number of iterations, the complexity of the tasks, the number of available cores, and the nature of the data being processed all play a role in deciding the optimal approach.
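
Factor 3 above (thread safety) deserves a concrete illustration. The following sketch uses a hypothetical counter to show a race condition inside a Parallel.ForEach body and the usual lightweight fix with Interlocked.Increment instead of a lock:

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class Program
{
    public static void Main()
    {
        int unsafeCount = 0;
        int safeCount = 0;

        Parallel.ForEach(Enumerable.Range(0, 100000), _ =>
        {
            // Race condition: ++ is a non-atomic read-modify-write on shared state
            unsafeCount++;

            // Atomic increment avoids the race without taking a lock
            Interlocked.Increment(ref safeCount);
        });

        Console.WriteLine($"Unsafe count: {unsafeCount}"); // often less than 100000
        Console.WriteLine($"Safe count: {safeCount}");     // always 100000
    }
}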

Example where foreach is faster than Parallel.ForEach:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

public class Program
{
    public static void Main()
    {
        List<int> numbers = new List<int>(Enumerable.Range(1, 1000000));

        // Calculate sum of squares using a regular foreach loop
        var stopwatchRegular = Stopwatch.StartNew();
        long sumOfSquaresRegular = CalculateSumOfSquaresRegular(numbers);
        stopwatchRegular.Stop();

        // Calculate sum of squares using Parallel.ForEach
        var stopwatchParallel = Stopwatch.StartNew();
        long sumOfSquaresParallel = CalculateSumOfSquaresParallel(numbers);
        stopwatchParallel.Stop();

        Console.WriteLine($"Sum of squares (Regular): {sumOfSquaresRegular}");
        Console.WriteLine($"Time consumed (Regular): {stopwatchRegular.Elapsed}");

        Console.WriteLine($"Sum of squares (Parallel): {sumOfSquaresParallel}");
        Console.WriteLine($"Time consumed (Parallel): {stopwatchParallel.Elapsed}");
    }

    public static long CalculateSumOfSquaresRegular(List<int> numbers)
    {
        long sum = 0;
        foreach (var num in numbers)
        {
            // Cast to long before multiplying to avoid int overflow
            sum += (long)num * num;
        }
        return sum;
    }

    public static long CalculateSumOfSquaresParallel(List<int> numbers)
    {
        object sumLock = new object();
        long sum = 0;

        // Parallel.ForEach to calculate sum of squares
        Parallel.ForEach(numbers, num =>
        {
            // Cast to long before multiplying to avoid int overflow
            long square = (long)num * num;

            // Synchronize access to the shared 'sum' variable
            lock (sumLock)
            {
                sum += square;
            }
        });

        return sum;
    }
}

Explanation:

  • We use the Stopwatch class to measure the time consumed by each approach.
  • Before calling the methods CalculateSumOfSquaresRegular and CalculateSumOfSquaresParallel, we start the respective stopwatches using Stopwatch.StartNew().
  • After each method call, we stop the corresponding stopwatch using stopwatchRegular.Stop() and stopwatchParallel.Stop().
  • We then print the sum of squares and the time consumed by each approach using stopwatchRegular.Elapsed and stopwatchParallel.Elapsed.

Now, when you run the code, it will display the sum of squares and the time consumed by both the regular foreach loop and the Parallel.ForEach approach. In this particular workload the parallel version is usually slower: each iteration performs only a trivial multiplication, and every addition contends for the same lock, so the thread-management and synchronization overhead outweighs any gain from running iterations concurrently.
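
If the work per item were heavier and parallelism still worthwhile, the usual way to avoid the per-iteration lock is the Parallel.ForEach overload that keeps a partition-local accumulator and merges it only once per partition. A minimal sketch, reusing the numbers list from the example above (the method name is illustrative, and Interlocked requires using System.Threading):

public static long CalculateSumOfSquaresPartitioned(List<int> numbers)
{
    long sum = 0;

    Parallel.ForEach(
        numbers,
        // localInit: each partition starts with its own local sum
        () => 0L,
        // body: accumulate into the partition-local value, no locking needed here
        (num, loopState, localSum) => localSum + (long)num * num,
        // localFinally: merge each partition's result into the shared total once
        localSum => Interlocked.Add(ref sum, localSum));

    return sum;
}

This reduces contention to one Interlocked.Add per partition instead of one lock per element; whether it actually beats the sequential loop still has to be measured, as discussed earlier.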

Explicitly Ensuring Parallel Execution:

To make it more likely that tasks run in parallel on separate threads, you can pass the TaskCreationOptions.LongRunning flag to Task.Factory.StartNew, or simply use Task.Run.

Example:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        List<Task> tasks = new List<Task>();

        // Explicit parallel execution with Task.Factory.StartNew and the LongRunning option
        tasks.Add(Task.Factory.StartNew(() => DoWork(), TaskCreationOptions.LongRunning));
        tasks.Add(Task.Factory.StartNew(() => CalculateResult(), TaskCreationOptions.LongRunning));

        await Task.WhenAll(tasks);

        Console.WriteLine("All tasks completed.");
    }

    public static void DoWork()
    {
        Task.Delay(2000).Wait();
        Console.WriteLine("DoWork completed.");
    }

    public static void CalculateResult()
    {
        Task.Delay(1000).Wait();
        Console.WriteLine("CalculateResult completed.");
    }
}

Explanation:

  • In this example, we use Task.Factory.StartNew with TaskCreationOptions.LongRunning to explicitly ensure that tasks run in parallel.
  • The TaskCreationOptions.LongRunning flag hints that a task may be long-running; the default scheduler then typically runs it on a dedicated thread instead of a ThreadPool thread (see the sketch after this list).
  • We use the Task.WhenAll method to asynchronously wait for all tasks in the tasks list to complete.
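
To see the effect of the flag for yourself, you can check whether the task is running on a ThreadPool thread. A small sketch, meant to be placed inside an async method with using System.Threading added:

Task onDedicatedThread = Task.Factory.StartNew(() =>
{
    // With LongRunning this typically prints False: a dedicated thread is used
    Console.WriteLine($"On ThreadPool thread: {Thread.CurrentThread.IsThreadPoolThread}");
}, TaskCreationOptions.LongRunning);

Task onPoolThread = Task.Run(() =>
{
    // Task.Run queues the work to the ThreadPool, so this typically prints True
    Console.WriteLine($"On ThreadPool thread: {Thread.CurrentThread.IsThreadPoolThread}");
});

await Task.WhenAll(onDedicatedThread, onPoolThread);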

Task.Run:

  • Task.Run is a shorthand method to start a task on the ThreadPool using the default TaskScheduler.
  • It is optimized for running short-lived and CPU-bound tasks.
  • The tasks started with Task.Run are queued to the ThreadPool, and if there are available ThreadPool threads, they will run concurrently.
  • If you start multiple Task.Run tasks, they have a higher chance of running in parallel because ThreadPool threads are designed to handle multiple short-lived tasks concurrently.

Task.Factory.StartNew:

  • Task.Factory.StartNew is a more general method that allows you to specify the TaskCreationOptions and TaskScheduler for the task.
  • By default, it schedules the task on TaskScheduler.Current, the scheduler associated with whatever task is currently executing; if there is none, it falls back to TaskScheduler.Default, which is backed by the ThreadPool.
  • Tasks started with Task.Factory.StartNew can therefore end up on different schedulers depending on the context and the options specified, which is a common source of subtle bugs; Task.Run always uses TaskScheduler.Default.
  • The TaskCreationOptions.LongRunning flag hints that the task may be long-running; in that case the default scheduler typically creates a dedicated thread for it rather than borrowing one from the ThreadPool.
  • In the code example above, Task.Factory.StartNew specifies TaskCreationOptions.LongRunning but no custom TaskScheduler, so each task gets its own dedicated thread from the default scheduler.

When neither a custom scheduler nor special options are specified, both Task.Run and Task.Factory.StartNew queue work to the ThreadPool, which is designed to handle multiple short-lived tasks concurrently. In most cases both tasks will therefore execute concurrently and you might observe parallelism, but there is no guarantee; it depends on factors such as the number of available ThreadPool threads and the operating system's scheduling.

To make parallel execution more likely, you can use the TaskCreationOptions.LongRunning flag with Task.Factory.StartNew, or simply use Task.Run, which is generally well suited to short, CPU-bound work.
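
One more practical difference worth knowing: when the delegate itself is async, Task.Run unwraps the inner task for you, while Task.Factory.StartNew returns a Task<Task> that must be unwrapped explicitly. A short sketch of both, intended to run inside an async method:

// Task.Run understands async delegates: awaiting it waits for the inner work
Task viaRun = Task.Run(async () => await Task.Delay(1000));

// Task.Factory.StartNew returns Task<Task>; Unwrap() is needed to await the inner work
Task viaStartNew = Task.Factory.StartNew(async () => await Task.Delay(1000)).Unwrap();

await Task.WhenAll(viaRun, viaStartNew);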

Conclusion:

The Task Parallel Library (TPL) in C# ASP.NET empowers developers to achieve parallel execution, making their applications faster and more scalable. By leveraging tasks and parallel constructs provided by the TPL, C# ASP.NET developers can create concurrent and asynchronous code with ease. The correct usage of async/await and Parallel constructs ensures efficient parallel execution, allowing applications to take full advantage of multi-core processors.

In this article, we explored the key components of the TPL and provided detailed code explanations for each example. Whether you are handling CPU-bound or I/O-bound tasks, the Task Parallel Library is a valuable tool to master for optimizing the performance of your C# ASP.NET applications.
