Multithreading and concurrency

Multithreading is a powerful concept in programming that allows multiple threads of execution to run concurrently within a single process. In the context of C programming, multithreading enables developers to create applications that can perform multiple tasks simultaneously, thereby improving performance and responsiveness.

Basic Concepts

  • Thread: A thread is the smallest unit of execution within a process. Multiple threads within a process share the same memory space and resources.
  • Concurrency: Concurrency is the ability of an application to make progress on multiple tasks at the same time; their execution may be interleaved on a single core rather than truly simultaneous.
  • Parallelism: Parallelism means actually executing multiple threads at the same instant on multiple CPU cores to achieve performance improvements.

Benefits of Multithreading:

  • Improved Responsiveness: Multithreading allows applications to remain responsive even when performing intensive tasks.
  • Utilization of Multicore Processors: Multithreading enables efficient utilization of multicore processors, leading to better performance.
  • Simplified Program Structure: Multithreading can simplify the design of complex applications by breaking work into smaller, independently scheduled threads.

Creating Threads in C

In C programming, multithreading is typically implemented using libraries such as POSIX Threads (pthreads) or Windows Threads (Win32 threads). Here, we’ll focus on pthreads, the standard threading API on POSIX systems (Linux, macOS, and other Unix-like platforms).

#include <pthread.h>
#include <stdio.h>

void *thread_function(void *arg) {
    printf("Thread function is running\n");
    return NULL;
}

int main() {
    pthread_t thread_id;
    pthread_create(&thread_id, NULL, thread_function, NULL);
    pthread_join(thread_id, NULL);
    printf("Thread has terminated\n");
    return 0;
}

// output //
Thread function is running
Thread has terminated


Explanation:

  • In the main function, we create a new thread using pthread_create.
  • The thread_function is the function that will be executed by the new thread.
  • After creating the thread, we wait for it to terminate using pthread_join.
  • Finally, we print a message indicating that the thread has terminated.

Synchronization and Mutexes

When multiple threads access shared resources concurrently, it can lead to data inconsistency or race conditions. Mutexes (mutual exclusion) are used to synchronize access to shared resources and prevent such issues.

#include <pthread.h>
#include <stdio.h>

int counter = 0;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *thread_function(void *arg) {
    pthread_mutex_lock(&mutex);
    counter++;
    printf("Counter value: %d\n", counter);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main() {
    pthread_t thread_id[5];
    for (int i = 0; i < 5; i++) {
        pthread_create(&thread_id[i], NULL, thread_function, NULL);
    }
    for (int i = 0; i < 5; i++) {
        pthread_join(thread_id[i], NULL);
    }
    return 0;
}

// output //
Counter value: 1
Counter value: 2
Counter value: 3
Counter value: 4
Counter value: 5


Explanation:

  • We have a shared variable counter that multiple threads increment.
  • We use a mutex (pthread_mutex_t) to ensure that only one thread can access the counter at a time.
  • Each thread locks the mutex before incrementing the counter and unlocks it afterward to allow other threads to access.

Thread Safety and Atomic Operations

In addition to mutexes, atomic operations provide another mechanism for ensuring thread safety by guaranteeing that certain operations are executed indivisibly.

#include <pthread.h>
#include <stdio.h>
#include <stdatomic.h>

_Atomic int counter = 0;

void *thread_function(void *arg) {
    for (int i = 0; i < 100000; i++) {
        counter++;
    }
    return NULL;
}

int main() {
    pthread_t thread_id[5];
    for (int i = 0; i < 5; i++) {
        pthread_create(&thread_id[i], NULL, thread_function, NULL);
    }
    for (int i = 0; i < 5; i++) {
        pthread_join(thread_id[i], NULL);
    }
    printf("Counter value: %d\n", counter);
    return 0;
}

// output //
Counter value: 500000

Explanation:

  • We declare counter as an atomic integer using the _Atomic qualifier, available since C11 via <stdatomic.h>.
  • Atomic operations ensure that increments to counter are performed atomically, without interference from other threads.

Deadlocks and Avoiding Them

Deadlocks occur when two or more threads are blocked indefinitely, waiting for each other to release resources. They can arise when threads acquire locks in different orders.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h> /* for sleep() */

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;

void *thread1_function(void *arg) {
    pthread_mutex_lock(&mutex1);
    printf("Thread 1 acquired mutex1\n");
    sleep(1);
    pthread_mutex_lock(&mutex2);
    printf("Thread 1 acquired mutex2\n");
    pthread_mutex_unlock(&mutex2);
    pthread_mutex_unlock(&mutex1);
    return NULL;
}

void *thread2_function(void *arg) {
    pthread_mutex_lock(&mutex2);
    printf("Thread 2 acquired mutex2\n");
    sleep(1);
    pthread_mutex_lock(&mutex1);
    printf("Thread 2 acquired mutex1\n");
    pthread_mutex_unlock(&mutex1);
    pthread_mutex_unlock(&mutex2);
    return NULL;
}

int main() {
    pthread_t thread1, thread2;
    pthread_create(&thread1, NULL, thread1_function, NULL);
    pthread_create(&thread2, NULL, thread2_function, NULL);
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
    return 0;
}

// output (the program then hangs in a deadlock) //
Thread 1 acquired mutex1
Thread 2 acquired mutex2

Explanation:

  • In this example, Thread 1 locks mutex1 first and then mutex2, while Thread 2 does the opposite.
  • This can lead to a deadlock scenario where both threads are waiting for each other to release the mutex they hold.

Thread Safety and Data Races

Thread safety is essential to ensure that shared data is accessed in a consistent and reliable manner by multiple threads. Data races occur when two or more threads concurrently access shared data without proper synchronization, leading to unpredictable behavior.

#include <pthread.h>
#include <stdio.h>

int shared_data = 0;

void *thread_function(void *arg) {
    for (int i = 0; i < 100000; i++) {
        shared_data++;
    }
    return NULL;
}

int main() {
    pthread_t thread_id[5];
    for (int i = 0; i < 5; i++) {
        pthread_create(&thread_id[i], NULL, thread_function, NULL);
    }
    for (int i = 0; i < 5; i++) {
        pthread_join(thread_id[i], NULL);
    }
    printf("Shared data value: %d\n", shared_data);
    return 0;
}

// output (may vary) //
Shared data value: 287445

Explanation:

  • Multiple threads increment the shared_data variable concurrently without synchronization.
  • This can result in a data race, where the final value of shared_data is unpredictable.

Condition Variables for Synchronization

Condition variables provide a way for threads to wait for a particular condition to become true before proceeding. They are often used in conjunction with mutexes to implement thread synchronization.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h> /* for sleep() */

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int shared_data = 0;

void *producer(void *arg) {
    for (int i = 0; i < 10; i++) {
        pthread_mutex_lock(&mutex);
        shared_data = i;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&mutex);
        sleep(1);
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        pthread_mutex_lock(&mutex);
        while (shared_data != i) {
            pthread_cond_wait(&cond, &mutex);
        }
        printf("Consumer: %d\n", shared_data);
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

int main() {
    pthread_t producer_thread, consumer_thread;
    pthread_create(&producer_thread, NULL, producer, NULL);
    pthread_create(&consumer_thread, NULL, consumer, NULL);
    pthread_join(producer_thread, NULL);
    pthread_join(consumer_thread, NULL);
    return 0;
}

// output //
Consumer: 0
Consumer: 1
Consumer: 2
Consumer: 3
Consumer: 4
Consumer: 5
Consumer: 6
Consumer: 7
Consumer: 8
Consumer: 9

Explanation:

  • In this example, a producer thread sets the shared_data variable and signals the consumer thread using a condition variable.
  • The consumer thread waits for the condition to become true before proceeding.
  • This ensures that the consumer consumes the data produced by the producer in a synchronized manner.

Thread Pooling

Thread pooling is a technique used to manage a group of threads that are created once and reused multiple times to execute tasks concurrently.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define THREAD_POOL_SIZE 5

void *task(void *arg) {
    int task_id = *((int *)arg);
    printf("Task %d is executing\n", task_id);
    return NULL;
}

int main() {
    pthread_t thread_pool[THREAD_POOL_SIZE];
    int task_ids[THREAD_POOL_SIZE];

    for (int i = 0; i < THREAD_POOL_SIZE; i++) {
        task_ids[i] = i + 1;
        pthread_create(&thread_pool[i], NULL, task, &task_ids[i]);
    }

    for (int i = 0; i < THREAD_POOL_SIZE; i++) {
        pthread_join(thread_pool[i], NULL);
    }

    return 0;
}

// output (order may vary) //
Task 1 is executing
Task 2 is executing
Task 3 is executing
Task 4 is executing
Task 5 is executing

Explanation:

  • This simplified example creates a fixed group of THREAD_POOL_SIZE threads up front; a production thread pool would also keep these workers alive and feed them tasks from a queue, rather than running exactly one task per thread.
  • Each thread in the pool executes a common task function with a unique task ID.
  • After dispatching the tasks, the main thread waits for all threads in the pool to terminate using pthread_join.

Inter-thread Communication

Inter-thread communication allows threads to exchange data or signals to coordinate their activities effectively. This is essential for building complex multithreaded applications where threads need to work together to accomplish tasks.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t full = PTHREAD_COND_INITIALIZER;
pthread_cond_t empty = PTHREAD_COND_INITIALIZER;

void produce(int item) {
    pthread_mutex_lock(&mutex);
    while (((in + 1) % BUFFER_SIZE) == out) {
        pthread_cond_wait(&full, &mutex);
    }
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    printf("Produced: %d\n", item);
    pthread_cond_signal(&empty);
    pthread_mutex_unlock(&mutex);
}

int consume() {
    int item;
    pthread_mutex_lock(&mutex);
    while (in == out) {
        pthread_cond_wait(&empty, &mutex);
    }
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    printf("Consumed: %d\n", item);
    pthread_cond_signal(&full);
    pthread_mutex_unlock(&mutex);
    return item;
}

void *producer_thread(void *arg) {
    for (int i = 0; i < 10; i++) {
        produce(i);
        sleep(1);
    }
    return NULL;
}

void *consumer_thread(void *arg) {
    for (int i = 0; i < 10; i++) {
        consume();
        sleep(1);
    }
    return NULL;
}

int main() {
    pthread_t producer, consumer;
    pthread_create(&producer, NULL, producer_thread, NULL);
    pthread_create(&consumer, NULL, consumer_thread, NULL);
    pthread_join(producer, NULL);
    pthread_join(consumer, NULL);
    return 0;
}

// output (interleaving may vary) //
Produced: 0
Consumed: 0
Produced: 1
Consumed: 1
Produced: 2
Consumed: 2
Produced: 3
Consumed: 3
Produced: 4
Consumed: 4
Produced: 5
Consumed: 5
Produced: 6
Consumed: 6
Produced: 7
Consumed: 7
Produced: 8
Consumed: 8
Produced: 9
Consumed: 9

Explanation:

  • This example demonstrates a producer-consumer scenario using a shared buffer.
  • The produce function adds items to the buffer, while the consume function retrieves items from the buffer.
  • Mutexes and condition variables are used to ensure that the buffer is accessed safely and that producers and consumers wait appropriately when the buffer is full or empty.

Performance Considerations

While multithreading can improve application performance by leveraging multiple CPU cores, it also introduces overhead in terms of context switching, synchronization, and coordination among threads. Understanding these performance considerations is crucial for optimizing multithreaded applications.

Performance Optimization Techniques:

  • Minimize Synchronization: Reduce the use of synchronization primitives like mutexes and condition variables wherever possible to minimize contention and overhead.
  • Avoid Excessive Context Switching: Limit the number of threads and avoid unnecessary thread creation and destruction to reduce context switching overhead.
  • Optimize Data Access: Use data locality techniques to minimize cache misses and optimize memory access patterns for better performance.
  • Profile and Tune: Profile multithreaded applications using performance analysis tools to identify bottlenecks and areas for optimization. Tune thread affinity, scheduling policies, and other parameters based on profiling results.

Advanced Topics in Multithreading

Beyond the basics, there are several advanced topics in multithreading that are worth exploring to gain a deeper understanding of concurrency in C programming.

1. Thread Safety in Library Functions: Many standard library functions are not inherently thread-safe, meaning they may produce unexpected results when called concurrently by multiple threads. Understanding which library functions are thread-safe and which require synchronization is essential for writing robust multithreaded code.

2. Thread Local Storage (TLS): Thread-local storage allows each thread to have its own unique instance of a variable. This is useful when global variables need to be accessed and modified independently by different threads without synchronization overhead.

3. Lock-Free Data Structures: Lock-free data structures provide a way to perform concurrent operations without using traditional locking mechanisms like mutexes. Instead, they use atomic operations to ensure thread safety, improving scalability and reducing contention.

4. Asynchronous I/O: Asynchronous I/O operations allow threads to perform non-blocking I/O operations, enabling better utilization of system resources and improved responsiveness. Libraries like libuv provide asynchronous I/O support in C, facilitating the development of highly scalable network applications.

5. Thread Scheduling and Priorities: Understanding how thread scheduling works and how thread priorities are assigned by the operating system can help optimize performance and responsiveness in multithreaded applications. Techniques like thread affinity and priority-based scheduling can be used to control the execution behavior of threads.

6. Thread Safety in Custom Data Structures: When working with custom data structures, ensuring thread safety requires careful design and implementation. Techniques such as fine-grained locking, read-write locks, and lock-free algorithms can be employed to achieve thread safety while minimizing contention.

7. Debugging and Testing Multithreaded Code: Debugging multithreaded code can be challenging due to the non-deterministic nature of concurrency bugs. Tools like Valgrind and Helgrind provide support for detecting memory leaks, race conditions, and other threading errors. Additionally, writing comprehensive unit tests and performing stress testing can help uncover concurrency issues early in the development process.

Multithreading and concurrency in C offer powerful capabilities for building efficient and responsive applications. By mastering concepts like thread creation, synchronization mechanisms, deadlock avoidance, and advanced techniques like thread pooling, developers can harness the full potential of multithreading to develop high-performance software solutions. However, it's crucial to understand the complexities involved and apply best practices to ensure thread safety and avoid common pitfalls. With practice and experimentation, programmers can leverage multithreading effectively to tackle challenging problems and create robust, scalable applications. Happy coding! ❤️


Copyright © 2025 Diginode
