sorting_algorithms

Sorting algorithms & Big O in C


Sorting algorithms are procedures for rearranging a list or array of items into a specific order, typically ascending or descending according to some criterion such as numerical value or lexicographic order. Sorting is a fundamental operation in computer science and underpins many applications, including searching, data analysis, and more.

There are numerous sorting algorithms, each with its advantages and disadvantages. Some of the most commonly used sorting algorithms include:

Bubble Sort: This algorithm repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted.
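A minimal C sketch of the idea (the function name, prototype, and test driver below are illustrative, not necessarily those used in this repository):

```c
#include <stdio.h>

/* Bubble sort: repeatedly sweep the array, swapping adjacent
 * out-of-order pairs, until a full pass makes no swaps. */
void bubble_sort(int *a, size_t n)
{
    size_t i, j;
    int swapped, tmp;

    for (i = 0; i + 1 < n; i++)
    {
        swapped = 0;
        for (j = 0; j + 1 < n - i; j++)
        {
            if (a[j] > a[j + 1])
            {
                tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)   /* no swaps: already sorted, stop early */
            break;
    }
}

int main(void)
{
    int a[] = {19, 48, 99, 71, 13, 52, 96, 73, 86, 7};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    bubble_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```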

Selection Sort: Selection sort divides the list into a sorted and an unsorted region. It repeatedly selects the minimum element from the unsorted region and moves it to the sorted region.
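Sketched in C along the same lines (again, an illustrative prototype and driver):

```c
#include <stdio.h>

/* Selection sort: grow a sorted prefix by repeatedly finding the
 * minimum of the unsorted suffix and swapping it into place. */
void selection_sort(int *a, size_t n)
{
    size_t i, j, min;
    int tmp;

    for (i = 0; i + 1 < n; i++)
    {
        min = i;
        for (j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        if (min != i)
        {
            tmp = a[i];
            a[i] = a[min];
            a[min] = tmp;
        }
    }
}

int main(void)
{
    int a[] = {64, 25, 12, 22, 11};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    selection_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```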

Insertion Sort: Insertion sort builds the final sorted array one item at a time. It takes each element from the input list and inserts it into its correct position within the sorted array.
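A possible C version (the prototype is illustrative):

```c
#include <stdio.h>

/* Insertion sort: take each element in turn and slide it left
 * into its correct position within the already-sorted prefix. */
void insertion_sort(int *a, size_t n)
{
    size_t i, j;
    int key;

    for (i = 1; i < n; i++)
    {
        key = a[i];
        j = i;
        while (j > 0 && a[j - 1] > key)
        {
            a[j] = a[j - 1];    /* shift larger elements right */
            j--;
        }
        a[j] = key;
    }
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 6, 3};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    insertion_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```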

Merge Sort: Merge sort is a divide-and-conquer algorithm that divides the input list into smaller lists, sorts those lists, and then merges them back together.
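A rough C implementation using a single scratch buffer allocated up front; the helper names and prototype are illustrative. The buffer keeps each merge step linear at the cost of O(n) extra memory:

```c
#include <stdio.h>
#include <stdlib.h>

/* Merge the sorted halves a[lo..mid) and a[mid..hi) using buf. */
static void merge(int *a, int *buf, size_t lo, size_t mid, size_t hi)
{
    size_t i = lo, j = mid, k = lo;

    while (i < mid && j < hi)
        buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid)
        buf[k++] = a[i++];
    while (j < hi)
        buf[k++] = a[j++];
    for (k = lo; k < hi; k++)
        a[k] = buf[k];
}

/* Recursively split [lo, hi) in half, sort each half, then merge. */
static void merge_sort_rec(int *a, int *buf, size_t lo, size_t hi)
{
    size_t mid;

    if (hi - lo < 2)
        return;
    mid = lo + (hi - lo) / 2;
    merge_sort_rec(a, buf, lo, mid);
    merge_sort_rec(a, buf, mid, hi);
    merge(a, buf, lo, mid, hi);
}

void merge_sort(int *a, size_t n)
{
    int *buf = malloc(n * sizeof(*buf));

    if (buf == NULL)
        return;
    merge_sort_rec(a, buf, 0, n);
    free(buf);
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    merge_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```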

Quick Sort: Quick sort is also a divide-and-conquer algorithm. It selects a 'pivot' element and partitions the list into two sublists: elements less than the pivot and elements greater than the pivot. It then recursively sorts the sublists.
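An illustrative C sketch using the Lomuto partition scheme with the last element as pivot, one common choice among several; the prototype is not necessarily the one used in this repository:

```c
#include <stdio.h>

static void swap(int *x, int *y)
{
    int tmp = *x;

    *x = *y;
    *y = tmp;
}

/* Lomuto partition: move everything <= pivot (the last element)
 * to the left, then return the pivot's final index. */
static size_t partition(int *a, size_t lo, size_t hi)
{
    int pivot = a[hi];
    size_t i = lo, j;

    for (j = lo; j < hi; j++)
        if (a[j] <= pivot)
            swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);
    return (i);
}

/* Quick sort on the inclusive range [lo, hi]. */
static void quick_sort_rec(int *a, size_t lo, size_t hi)
{
    size_t p;

    if (lo >= hi)
        return;
    p = partition(a, lo, hi);
    if (p > lo)
        quick_sort_rec(a, lo, p - 1);
    quick_sort_rec(a, p + 1, hi);
}

void quick_sort(int *a, size_t n)
{
    if (n > 1)
        quick_sort_rec(a, 0, n - 1);
}

int main(void)
{
    int a[] = {10, 80, 30, 90, 40, 50, 70};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    quick_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```

Pivot choice matters: always taking the last element degrades to quadratic time on already-sorted input, which is why randomized or median-of-three pivots are common in practice.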

Heap Sort: Heap sort uses a binary max-heap to sort elements in place. It first builds a heap from the array, then repeatedly swaps the maximum element (the root) with the last element of the heap, shrinking the heap and growing a sorted region at the end of the array.
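A C sketch using an in-place max-heap with an iterative sift-down (names and prototype are illustrative):

```c
#include <stdio.h>

static void swap(int *x, int *y)
{
    int tmp = *x;

    *x = *y;
    *y = tmp;
}

/* Sift the element at index i down until the subtree rooted there
 * satisfies the max-heap property (children <= parent). */
static void sift_down(int *a, size_t n, size_t i)
{
    size_t largest, left, right;

    while (1)
    {
        largest = i;
        left = 2 * i + 1;
        right = 2 * i + 2;
        if (left < n && a[left] > a[largest])
            largest = left;
        if (right < n && a[right] > a[largest])
            largest = right;
        if (largest == i)
            break;
        swap(&a[i], &a[largest]);
        i = largest;
    }
}

/* Heap sort: build a max-heap, then repeatedly move the maximum
 * (the root) to the end of the shrinking unsorted region. */
void heap_sort(int *a, size_t n)
{
    size_t i;

    if (n < 2)
        return;
    for (i = n / 2; i-- > 0; )   /* heapify from the last parent up */
        sift_down(a, n, i);
    for (i = n - 1; i > 0; i--)
    {
        swap(&a[0], &a[i]);      /* max goes to its final slot */
        sift_down(a, i, 0);
    }
}

int main(void)
{
    int a[] = {12, 11, 13, 5, 6, 7};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    heap_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```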

Radix Sort: Radix sort is a non-comparative integer sorting algorithm that sorts data with integer keys by grouping the keys by individual digits or bytes.
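A least-significant-digit (LSD) radix sort sketch in C, assuming non-negative integers and a decimal radix (both assumptions are for illustration); each pass is a stable counting sort on one digit:

```c
#include <stdio.h>
#include <stdlib.h>

/* One stable counting pass on the digit selected by exp (1, 10, 100, ...). */
static void counting_pass(int *a, int *out, size_t n, long exp)
{
    size_t i;
    int count[10] = {0};

    for (i = 0; i < n; i++)
        count[(a[i] / exp) % 10]++;
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];            /* prefix sums -> end positions */
    for (i = n; i-- > 0; )                    /* backwards to keep stability */
        out[--count[(a[i] / exp) % 10]] = a[i];
    for (i = 0; i < n; i++)
        a[i] = out[i];
}

/* LSD radix sort for non-negative integers: sort by ones digit,
 * then tens, then hundreds, until the maximum value is exhausted. */
void radix_sort(int *a, size_t n)
{
    int *out, max;
    long exp;
    size_t i;

    if (n < 2)
        return;
    out = malloc(n * sizeof(*out));
    if (out == NULL)
        return;
    max = a[0];
    for (i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    for (exp = 1; max / exp > 0; exp *= 10)
        counting_pass(a, out, n, exp);
    free(out);
}

int main(void)
{
    int a[] = {170, 45, 75, 90, 802, 24, 2, 66};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    radix_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```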

The efficiency of sorting algorithms is often analyzed in terms of their time complexity, which is expressed using Big O notation. Big O notation describes an upper bound on how an algorithm's running time grows with the input size, usually quoted for the worst case. It helps us understand how an algorithm's performance scales as the input data grows. Some common Big O complexities for sorting algorithms are:

O(n^2): Quadratic time, typical of the simple algorithms above (Bubble Sort, Selection Sort, Insertion Sort) in the worst case.

O(n log n): Merge Sort and Heap Sort run in this time even in the worst case. This is asymptotically optimal for comparison-based sorting, more efficient than the O(n^2) algorithms but not as fast as linear-time sorting on the inputs that linear-time methods can handle.

O(n): Non-comparative algorithms such as Counting Sort and Radix Sort reach (near-)linear time for specific kinds of keys, for example integers drawn from a limited range (see the counting sort sketch after this list).

Quick Sort also averages O(n log n), but it can degrade to O(n^2) in the worst case, for instance with poor pivot choices on already-sorted input.
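To make the linear-time case above concrete, here is a minimal counting sort sketch in C. It assumes integer keys in a known range [0, k] and runs in O(n + k) time; the prototype is illustrative:

```c
#include <stdio.h>
#include <stdlib.h>

/* Counting sort for integers in [0, k]: tally how many times each
 * value occurs, then rewrite the array in increasing order. */
void counting_sort(int *a, size_t n, int k)
{
    size_t *count, i;
    int v;

    count = calloc((size_t)k + 1, sizeof(*count));
    if (count == NULL)
        return;
    for (i = 0; i < n; i++)
        count[a[i]]++;
    i = 0;
    for (v = 0; v <= k; v++)
        while (count[v] > 0)
        {
            a[i++] = v;
            count[v]--;
        }
    free(count);
}

int main(void)
{
    int a[] = {4, 2, 2, 8, 3, 3, 1};
    size_t i, n = sizeof(a) / sizeof(a[0]);

    counting_sort(a, n, 8);   /* 8 is the largest value present */
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return (0);
}
```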

The choice of sorting algorithm depends on the requirements of the task at hand and on the characteristics of the data to be sorted; different algorithms suit different situations depending on their time and space complexity.