Sorting Algorithms in Python

Sorting refers to the algorithms and strategies used to arrange a collection of elements in a specific order. Sorting is a fundamental operation in computer science and is widely used across many applications.

Properties  

Here are a few properties commonly associated with sorting algorithms:

  • Time Complexity: Time complexity measures how an algorithm's running time grows with the input size; algorithms like bubble sort and selection sort have different complexities. Understanding this helps assess an algorithm's efficiency for different input sizes.
  • Space Complexity: Space complexity measures the memory an algorithm needs; some sorting algorithms work within the input itself, while others require auxiliary data structures or temporary space. Considering memory requirements is crucial for large datasets or memory-constrained environments.
  • Stability: A stable sorting algorithm ensures that elements with equal keys keep their relative order in the sorted output, which matters when sorting records with multiple keys or when the original order must be preserved (see the sketch after this list).
  • Adaptiveness: An adaptive sorting algorithm exploits existing order in its input, improving performance on partially or nearly sorted lists.
  • In-Place Sorting: In-place sorting algorithms rearrange the input data without additional memory proportional to its size, which is useful when memory is limited.
  • Comparisons and Swaps: Sorting algorithms are often characterized by the number of comparisons and swaps they perform, which largely determines their efficiency.
  • Worst-Case and Average-Case Performance: A sorting algorithm's performance varies with the input distribution; understanding both the worst case and the average case helps evaluate its suitability for specific datasets.
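
To make stability concrete, here is a quick illustration using Python's built-in sorted(), which is guaranteed stable (the records and field layout are invented for the example):

students = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]

# sorted() is stable: students with the same grade keep their
# original relative order (alice before carol, bob before dave).
by_grade = sorted(students, key=lambda s: s[1])
print(by_grade)
# [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]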

Applications  

Here are a few common applications of sorting:

  • Data Organization: Sorting is crucial for organizing structured data in databases and file systems, and for keeping records in alphabetical or chronological order for efficient indexing and searching.
  • Search Algorithms: Sorting enables efficient search techniques like binary search, which discards half of the remaining search space at every step (see the sketch after this list).
  • Ordering and Ranking: Sorting orders items by criteria such as price, popularity, or customer rating in e-commerce applications and recommendation systems.
  • Data Analysis: Sorting supports data analysis by exposing patterns, trends, and outliers, and it makes visualization easier in fields like finance.
  • Computational Geometry: Sorting is a key technique in computational geometry algorithms, for example when determining the relative order or visibility of geometric shapes, or when solving problems like closest point pairs and line segment intersections.
  • Task Scheduling: Sorting is used to schedule tasks in operating systems and project management, prioritizing work by deadline, priority, or resource availability.
  • Data Deduplication: Sorting places duplicate records next to each other, simplifying data cleaning, integration, and deduplication tasks.
  • Ranking Sports Teams: Sorting algorithms rank teams by performance metrics, determining league standings and playoff qualification. Each algorithm has advantages and disadvantages depending on the data being handled.
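
To show why sorted data matters for search, here is a minimal binary search sketch (the function name and sample list are illustrative):

def binary_search(sorted_list, target):
    # Each iteration halves the remaining search space,
    # giving O(log n) lookups on sorted data.
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # Not found

print(binary_search([11, 12, 22, 25, 34, 64, 90], 25))  # 3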

Sorting Algorithms in Python  

1. Bubble Sort

Bubble sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. This process repeats until the whole list is sorted.

Algorithm 

  • Start at the beginning of the list.  
  • Compare the first and second elements. If they are in the wrong order, swap them.
  • Move to the next pair of elements (second and third) and compare them. Swap if necessary.
  • Continue comparing and swapping adjacent elements until you reach the end of the list.
  • Repeat these steps for each pass through the list until no more swaps are needed, indicating that the list is sorted.

Code  

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # After each pass, the last i elements are already in place,
        # so the inner loop traverses the array from 0 to n-i-1
        for j in range(0, n - i - 1):
            # Swap if the current element is greater than the next element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

arr = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(arr)
print("Sorted array:", arr)

Output  

Sorted array: [11, 12, 22, 25, 34, 64, 90]

Complexities  

  • The time complexity of the Bubble Sort algorithm is O(n^2) in both the worst and average cases, where n is the number of elements in the list. The algorithm performs up to n-1 passes, and in each pass it compares and swaps adjacent elements. The worst case occurs when the input list is in reverse order, which requires the maximum number of swaps.
  • The space complexity is O(1) because Bubble Sort sorts the elements in place without requiring extra memory.

Properties  

  • Time Complexity: The worst-case and average-case time complexity of bubble sort is O(n^2), where n is the number of elements in the list. The algorithm iterates over the list multiple times, comparing and swapping adjacent elements.
  • Space Complexity: Bubble sort has a space complexity of O(1) because it sorts the elements in place. It does not require extra memory proportional to the input size.
  • Stability: Bubble sort is a stable sorting algorithm. It preserves the relative order of elements with equal keys: if two elements have identical values, they retain their relative order in the sorted output.
  • Adaptive: Bubble sort can be made adaptive. By checking whether any swaps occurred during a pass and stopping early when none did, it finishes an already (or nearly) sorted list in fewer iterations; a sketch of this variant follows below.
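
The implementation shown earlier always runs every pass. A minimal early-exit variant (a common optimization, not part of the code above) could look like this:

def bubble_sort_adaptive(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # A full pass with no swaps means the list is already sorted,
        # so an already-sorted input finishes in O(n) time.
        if not swapped:
            break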

Applications  

  • Educational Purposes: Bubble sort is often used in introductory computer science and programming courses to illustrate the idea of sorting algorithms. It is easy to understand and implement, making it an ideal starting point for learning sorting techniques.
  • Small Input Sizes: Bubble sort can be adequate for sorting small lists or arrays. Its simplicity and ease of implementation make it appropriate for simple applications where performance is not a major concern.
  • Benchmarking: Bubble sort can serve as a baseline for comparing the performance of other sorting algorithms. Measuring more efficient algorithms against bubble sort shows the improvement gained from optimized sorting techniques.
  • Nearly Sorted Data: With the early-exit optimization, bubble sort can be effective on data that is already almost sorted or has only a few out-of-place elements, since only a small number of swaps is needed to move those elements into position.

2. Selection Sort

Selection sort is a straightforward sorting algorithm that works by repeatedly finding the minimum element in the unsorted part of the list and placing it at the beginning of the sorted part.

Algorithm 

  • Find the minimum element in the unsorted part of the list.
  • Swap the minimum element with the first element of the unsorted part, effectively appending it to the sorted part.
  • Move the boundary of the sorted portion one element to the right.
  • Repeat steps 1-3 until the entire list is sorted.

Code  

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Move it to the end of the sorted part
        arr[i], arr[min_index] = arr[min_index], arr[i]

arr = [64, 25, 12, 22, 11]
selection_sort(arr)
print("Sorted array:", arr)

Output  

Sorted array: [11, 12, 22, 25, 64]  

Complexities  

  • The time complexity of the Selection Sort algorithm is O(n^2) in every case, where n is the number of elements in the list. The algorithm performs n-1 passes through the list, and each pass finds the minimum by performing n-i comparisons, where i is the current pass number.
  • The space complexity is O(1) because Selection Sort sorts the elements in place without requiring extra memory.

Properties  

  • Time Complexity: Selection sort's worst-case and average-case time complexity is O(n^2), where n is the number of elements in the list. The algorithm performs n-1 passes, and in each pass it performs n-i comparisons, where i is the current pass number.
  • Space Complexity: Selection sort has a space complexity of O(1), since it sorts the elements in place without requiring additional memory proportional to the input size.
  • Stability: Selection sort is not a stable sorting algorithm. If there are equal elements, their relative order may change after sorting.
  • In-Place Sorting: Selection sort sorts in place, meaning it requires no additional memory beyond the original input array. This is an advantage when memory usage is a concern or when working with large datasets.

Applications  

  • Small Data Sets: Selection sort can be used to sort small lists or arrays with few elements. Its simplicity and ease of implementation make it appropriate for simple applications where performance is not a major concern.
  • Educational Purposes: Selection sort is often used in academic settings to introduce the idea of sorting algorithms. It is easy to understand and implement, making it a good starting point for studying sorting techniques.
  • Partial Sorting: Selection sort can be useful when only a partial sort is needed. For example, to find the k smallest or largest elements of a larger list, selection sort can simply be stopped after k passes (see the sketch after this list).
  • Benchmarking: Selection sort can serve as a baseline for comparing the performance of other sorting algorithms.
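
As an illustration of partial sorting, this sketch stops after k passes to extract the k smallest elements (the function name is invented for the example):

def k_smallest(arr, k):
    # Run only k passes of selection sort; afterwards the first
    # k positions hold the k smallest elements in ascending order.
    arr = list(arr)  # Work on a copy
    n = len(arr)
    for i in range(min(k, n)):
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr[:k]

print(k_smallest([64, 25, 12, 22, 11], 3))  # [11, 12, 22]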

3. Insertion Sort

Insertion sort is a simple sorting algorithm that builds the final sorted array one element at a time. It iterates through the input list and repeatedly inserts each element into its proper position within the already sorted part of the list.

Algorithm  

  • Start with the second element (index 1) and consider it the key.
  • Compare the key with the elements in the sorted part of the list (to its left).
  • Shift the elements in the sorted part that are greater than the key one position to the right.
  • Insert the key into its proper position in the sorted part.
  • Repeat steps 2-4 for the remaining unsorted elements until the entire list is sorted.

Code  

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements of the sorted part that are greater
        # than the key one position to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        # Insert the key into its correct position
        arr[j + 1] = key

arr = [64, 25, 12, 22, 11]
insertion_sort(arr)
print("Sorted array:", arr)

Output  

Sorted array: [11, 12, 22, 25, 64]

Complexities  

  • The time complexity of the Insertion Sort algorithm is O(n^2) in the worst and average cases, where n is the number of elements in the list. The algorithm performs n-1 passes through the list, and in each pass it may perform comparisons and shifts all the way back to the start of the sorted part.
  • The space complexity is O(1) because Insertion Sort sorts the elements in place without requiring extra memory.

Properties  

  • Time Complexity: Insertion sort's worst-case and average-case time complexity is O(n^2), where n is the number of elements in the list. The worst case occurs when the input list is in reverse order. For partially or nearly sorted input, however, insertion sort has a best-case time complexity of O(n), making it efficient in those scenarios.
  • Space Complexity: Insertion sort has a space complexity of O(1) because it sorts the elements in place without requiring additional memory proportional to the input size.
  • Stability: Insertion sort is a stable sorting algorithm. It preserves the relative order of elements with equal keys: if two elements have identical values, they keep their relative order in the sorted output.
  • Adaptive: Insertion sort is an adaptive sorting algorithm. If the input list is already partially sorted, insertion sort exploits that order and avoids unnecessary comparisons and shifts. This makes it efficient for nearly sorted or partially sorted data.

Applications  

  • Small Data Sets: Insertion sort is useful for sorting small lists or arrays with few elements. Its simplicity and ease of implementation make it suitable for simple applications where performance is not a major concern.
  • Partial Sorting: Insertion sort can efficiently sort a partially or nearly sorted list. It requires fewer comparisons and shifts in such situations, making it a good choice when the input data already has some degree of order.
  • Online Sorting: Insertion sort is well suited to scenarios where new elements continuously arrive and must be kept in sorted order in real time. Each new element can be inserted directly into its proper position in the existing sorted part (see the sketch after this list).
  • Sorting Small Subarrays: In some sorting algorithms, such as quicksort or merge sort, insertion sort is used to sort small subarrays. When a subarray's size falls below a threshold, the algorithm switches to insertion sort, which is faster for small inputs.
  • Teaching Tool: Insertion sort is often used as a teaching tool to introduce sorting algorithms and to illustrate the ideas of comparisons, shifts, and in-place sorting.
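
A minimal sketch of the online-sorting idea using the standard-library bisect module, whose insort() performs the same insert-into-sorted-position step (a binary search locates the slot, though the insertion itself still shifts elements):

import bisect

stream = [64, 25, 12, 22, 11]  # Values arriving one at a time
sorted_so_far = []
for value in stream:
    # Insert each arriving value into its correct position,
    # keeping the list sorted after every arrival.
    bisect.insort(sorted_so_far, value)
    print(sorted_so_far)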

4. Merge Sort

Merge sort is a divide-and-conquer sorting algorithm that divides the input list into halves, sorts them recursively, and then merges them back together to obtain the final sorted result.

Algorithm  

  • Divide the unsorted list into two halves.
  • Recursively sort each half by applying the merge sort algorithm.
  • Merge the two sorted halves back together to obtain the final sorted list.

Code  

def merge_sort(arr):
    if len(arr) > 1:
        # Divide the list into two halves
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]

        # Recursively sort each half
        merge_sort(left_half)
        merge_sort(right_half)

        # Merge the two sorted halves back into arr; taking from the
        # left half on ties (<=) keeps the sort stable
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] <= right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1

        # Copy any remaining elements of the left half
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1

        # Copy any remaining elements of the right half
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1

arr = [64, 34, 25, 12, 22, 11, 90]
merge_sort(arr)
print("Sorted array:", arr)

Output  

Sorted array: [11, 12, 22, 25, 34, 64, 90]  

Complexities  

  • The time complexity of the Merge Sort algorithm is O(n log n) in the worst, average, and best cases, where n is the number of elements in the list. The algorithm divides the input list into halves recursively, and at each level of recursion it performs a merge operation that takes linear time.
  • The space complexity is O(n) due to the additional memory required to store the subarrays being merged during sorting.

Properties  

  • Time Complexity: The worst-case, average-case, and best-case time complexity of merge sort is O(n log n), where n is the number of elements in the list. Merge sort achieves this by dividing the list into halves and performing a linear-time merge at each level.
  • Space Complexity: Merge sort has a space complexity of O(n) because it requires extra memory to store the subarrays being merged. This extra memory is proportional to the size of the input.
  • Stability: Merge sort is a stable sorting algorithm (provided the merge takes from the left half on ties, as in the code above). It preserves the relative order of elements with equal keys.
  • Divide-and-Conquer: Merge sort follows the divide-and-conquer paradigm, dividing the input into smaller subproblems, solving them recursively, and then combining the solutions to obtain the final result. This makes it a highly modular and efficient sorting algorithm.

Applications  

  • Sorting Large Data Sets: Merge sort is well suited to sorting large data sets or arrays. Its O(n log n) time complexity guarantees good performance even for large input sizes.
  • External Sorting: Merge sort is commonly used for external sorting, where the data set cannot fit into memory. The data is divided into smaller chunks, each chunk is sorted individually, and the sorted chunks are then merged (see the sketch after this list).
  • Sorting Linked Lists: Merge sort is well suited to sorting linked lists, as its divide-and-conquer approach works efficiently with linked structures. It does not require random access to elements, which is an advantage over other sorting algorithms.
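
To illustrate the merge step at the heart of external sorting, the standard-library heapq.merge() lazily combines already-sorted sequences (shown here in memory; in real external sorting each sequence would stream from disk):

import heapq

chunk_a = [11, 25, 64]       # Pretend these sorted chunks were
chunk_b = [12, 22, 34, 90]   # produced separately and stored on disk

# heapq.merge yields the merged output lazily, so the combined
# sequence never has to fit in memory all at once.
print(list(heapq.merge(chunk_a, chunk_b)))
# [11, 12, 22, 25, 34, 64, 90]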

5. Quick Sort

Quick sort is a widely used sorting algorithm that follows the divide-and-conquer approach. It selects a pivot element and partitions the array into two subarrays: one with elements less than the pivot and another with elements greater than it. It then recursively applies the same procedure to the subarrays until the whole array is sorted.

Algorithm  

  • Select a pivot element from the array (here, the last element).
  • Partition the array into two subarrays: one with elements smaller than the pivot and the other with elements larger than it.
  • Recursively apply quick sort to the subarrays.
  • Combine the sorted subarrays to obtain the final sorted array.

Code  

def partition(arr, low, high):
    # Use the last element of the range as the pivot
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        # Move elements less than or equal to the pivot to the left side
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place the pivot in its final sorted position
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high):
    if low < high:
        # Partition, then recursively sort the two sides of the pivot
        pivot_index = partition(arr, low, high)
        quick_sort(arr, low, pivot_index - 1)
        quick_sort(arr, pivot_index + 1, high)

arr = [64, 34, 25, 12, 22, 11, 90]
quick_sort(arr, 0, len(arr) - 1)
print("Sorted array:", arr)

Output  

Sorted array: [11, 12, 22, 25, 34, 64, 90] 

Complexities  

  • The time complexity of the Quick Sort algorithm is O(n log n) in the average and best cases and O(n^2) in the worst case, where n is the number of elements in the list.
  • The space complexity is O(log n) in the average case due to the recursion stack. In the worst case, however, the space complexity can be O(n) if the recursion depth grows linearly with the input size.

Properties  

  • Time Complexity: The average-case and best-case time complexity of quick sort is O(n log n), where n is the number of elements in the list. This efficiency comes from its divide-and-conquer approach. The worst-case time complexity, however, is O(n^2), which occurs when the pivot choice is unbalanced and leads to highly uneven partitions, for example a last-element pivot on an already sorted array. Randomized pivot selection makes this worst case very unlikely (see the sketch after this list).
  • Space Complexity: Quick sort has a space complexity of O(log n) in the average case because of the memory needed for the recursion stack. In-place partitioning keeps the auxiliary memory for the data itself at O(1), but the recursion stack still contributes.
  • Unstable Sorting: Quick sort is typically an unstable sorting algorithm, meaning it may not preserve the relative order of equal elements. It can be made stable with extra bookkeeping during partitioning, at a cost in memory or time.
  • Divide-and-Conquer: Quick sort follows the divide-and-conquer paradigm by dividing the input into smaller subproblems, solving them recursively, and then combining the solutions to obtain the final result. This makes it a highly modular and efficient sorting algorithm.
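
A common mitigation for the worst case is to choose the pivot at random. A minimal sketch reusing the partition() function above:

import random

def randomized_quick_sort(arr, low, high):
    if low < high:
        # Swap a random element into the pivot position so no fixed
        # input pattern can reliably trigger the O(n^2) worst case.
        rand_index = random.randint(low, high)
        arr[rand_index], arr[high] = arr[high], arr[rand_index]
        pivot_index = partition(arr, low, high)
        randomized_quick_sort(arr, low, pivot_index - 1)
        randomized_quick_sort(arr, pivot_index + 1, high)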

Applications  

  • Sorting Large Data Sets: Quick sort is highly efficient for sorting large data sets because of its average-case time complexity of O(n log n). It is widely used in practice for sorting large arrays or lists.
  • General-Purpose Sorting: Quick sort is a versatile sorting algorithm that performs well on a wide range of data sets and input sizes. It can handle both small and large data sets effectively.
  • In-Place Sorting: Quick sort can be implemented as an in-place sorting algorithm, meaning it requires minimal extra memory beyond the original input array. This is an advantage when memory usage is a concern or when working with large datasets.

6. Heap Sort

Heap sort is a comparison-based sorting algorithm that uses a binary heap data structure to sort elements. It involves two main steps:

  • Building a max heap from the input array.
  • Repeatedly extracting the maximum element from the heap to build the sorted output array.

Algorithm  

  • Build a max heap from the input array. This arranges the elements in a binary heap structure where each parent node is greater than or equal to its children.
  • Swap the root element (the maximum) with the last element of the heap and reduce the heap size by one.
  • Heapify the reduced heap to restore the heap property.
  • Repeat steps 2-3 until the heap is empty.
  • Because each extracted maximum is placed at the end of the array, the result is sorted in ascending order.

Code  

def heapify(arr, n, i):
    largest = i  # Initialize the largest element as the root
    left = 2 * i + 1
    right = 2 * i + 2

    # Check if the left child exists and is greater than the root
    if left < n and arr[left] > arr[largest]:
        largest = left

    # Check if the right child exists and is greater than the current largest
    if right < n and arr[right] > arr[largest]:
        largest = right

    # Swap and continue heapifying if the root was not the largest
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)

    # Build a max heap
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)

    # Extract elements from the heap one by one
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]
        heapify(arr, i, 0)

arr = [64, 34, 25, 12, 22, 11, 90]
heap_sort(arr)
print("Sorted array:", arr)

Output  

Sorted array: [11, 12, 22, 25, 34, 64, 90]

Complexities  

  • The time complexity of the Heap Sort algorithm is O(n log n) in the worst, average, and best cases, where n is the number of elements in the list. Building the heap takes O(n) time, and each of the n extractions of the maximum takes O(log n) time.
  • The space complexity is O(1) because Heap Sort sorts the elements in place without requiring additional memory proportional to the input size.

Properties  

  • Time Complexity: The time complexity of Heap Sort is O(n log n) in the worst, average, and best cases, where n is the number of elements in the list. This efficiency comes from the heapify operation and the repeated extraction of the maximum element from the heap.
  • Space Complexity: Heap Sort has a space complexity of O(1) because it sorts the elements in place without requiring additional memory proportional to the input size.
  • Unstable Sorting: By default, heap sort is an unstable sorting algorithm, meaning it may not preserve the relative order of equal elements. It can be modified to be stable with extra bookkeeping during the swapping step.
  • In-Place Sorting: Heap sort is an in-place sorting algorithm, meaning it does not require additional memory beyond the original input array. This is an advantage when memory usage is a concern or when working with large datasets.

Applications  

  • Sorting Large Data Sets: Heap sort is efficient for sorting large data sets because of its O(n log n) time complexity. It is frequently used in practice for sorting large arrays or lists.
  • Priority Queue: The binary heap underlying heap sort is commonly used to implement priority queues, where the element with the highest priority is always at the root of the heap. Extracting the elements in priority order is efficient thanks to the heap operations (see the sketch after this list).
  • In-Place Sorting: Heap sort is an in-place sorting algorithm, making it suitable for scenarios where memory usage is a concern or when working with big datasets.
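
Python's standard-library heapq module implements a binary min-heap. A minimal priority-queue sketch (the tasks and priorities are invented; tuples compare by their first field, so the number acts as the priority):

import heapq

tasks = [(3, "write report"), (1, "fix outage"), (2, "review PR")]
heapq.heapify(tasks)  # O(n) heap construction, as in heap sort

# heappop always returns the smallest item, i.e. the highest priority here
while tasks:
    priority, name = heapq.heappop(tasks)
    print(priority, name)  # 1 fix outage, then 2 review PR, then 3 write report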

7. Radix Sort 

Radix sort is a non-comparative sorting algorithm that sorts elements based on their digits or characters. It processes the input elements digit by digit, from the least significant digit to the most significant digit, using counting sort or another stable sorting algorithm as a subroutine.

Algorithm  

  • Find the maximum element of the input array to determine the number of digits.
  • For each digit position, starting from the least significant digit: use a stable sorting algorithm (e.g., counting sort) to sort the elements based on the current digit, reordering the elements according to that digit.
  • Repeat step 2 for each digit position, moving from the least significant to the most significant.
  • The elements are now sorted.

Code  

def counting_sort(arr, exp):
    # Stable counting sort on the digit selected by exp (1, 10, 100, ...)
    n = len(arr)
    output = [0] * n
    count = [0] * 10

    # Count occurrences of each digit
    for i in range(n):
        index = arr[i] // exp
        count[index % 10] += 1

    # Convert counts to prefix sums: count[d] now marks the position
    # just past the last element whose current digit is d
    for i in range(1, 10):
        count[i] += count[i - 1]

    # Build the output array; iterating from the end keeps the sort stable
    i = n - 1
    while i >= 0:
        index = arr[i] // exp
        output[count[index % 10] - 1] = arr[i]
        count[index % 10] -= 1
        i -= 1

    # Copy the sorted elements back into the input array
    for i in range(n):
        arr[i] = output[i]

def radix_sort(arr):
    max_element = max(arr)
    # Sort by each digit, least significant first
    exp = 1
    while max_element // exp > 0:
        counting_sort(arr, exp)
        exp *= 10

arr = [170, 45, 75, 90, 802, 24, 2, 66]
radix_sort(arr)
print("Sorted array:", arr)

Output  

Sorted array: [2, 24, 45, 66, 75, 90, 170, 802]  

Complexities  

  • The time complexity of the Radix Sort algorithm is O(d * (n + k)), where d is the number of digits, n is the number of elements in the input array, and k is the range of digit values (the base, 10 in the code above).
  • The space complexity is O(n + k) because of the auxiliary arrays used in the counting step. When the base k is a small constant, the time complexity simplifies to O(d * n).

Properties  

  • Time Complexity: The time complexity of radix sort is O(d * (n + k)), where d is the number of digits, n is the number of elements in the input array, and k is the range of digit values. It performs a counting sort (or another stable sorting algorithm) for each digit position, resulting in linear time per pass.
  • Space Complexity: Radix sort has a space complexity of O(n + k) because of the auxiliary arrays used in the counting sort step. The space required is proportional to the number of elements plus the range of digit values.
  • Stability: Radix sort is a stable sorting algorithm that preserves the relative order of elements with identical keys. This property is valuable when the original order must be maintained or when sorting records with multiple keys.
  • Non-Comparative Sorting: Radix sort is a non-comparative sorting algorithm that does not perform direct element comparisons. It exploits the positional information of the digits to sort the elements.

Applications  

  • Digital Image Processing: Radix sort can be applied in digital image processing tasks, such as image enhancement or filtering. It can efficiently sort pixel values by intensity or colour components.
  • Integer Sorting: Radix sort is commonly used for sorting integers with fixed-length keys. It handles large integer datasets efficiently and is often used as a subroutine in other sorting algorithms or database operations.
  • String Sorting: Radix sort can sort strings with fixed-length keys, or keys that can be mapped to integers. By treating each character position as a digit, radix sort can order strings by their characters' ASCII or Unicode values (see the sketch below).
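
A minimal sketch of this idea for fixed-length strings, using Python's stable built-in sort as the per-character subroutine (the function name and sample words are invented for the example):

def radix_sort_strings(words, width):
    # LSD radix sort: one stable pass per character position,
    # from the last character to the first.
    for pos in range(width - 1, -1, -1):
        # sorted() is stable, so the order established by earlier
        # passes is preserved among words that tie on this character.
        words = sorted(words, key=lambda w: w[pos])
    return words

print(radix_sort_strings(["bca", "abc", "cab", "bac"], 3))
# ['abc', 'bac', 'bca', 'cab']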