- 1-A: Construct large datasets by taking random numbers from a uniform distribution (UD). (sketch below)
- 1-B: Construct large datasets by taking random numbers from a normal distribution (ND).
- 2-A: Implement Merge Sort (MS) and check for correctness. (sketch below)
- 2-B: Implement Quick Sort (QS) and check for correctness. (sketch below)
- 3: Count the operations performed (comparisons and swaps) as the problem size increases in powers of 2, for both MS and QS, with both UD and ND as input data. (sketch below)
- 4: Experiment with randomized QS (RQS) on both UD and ND input data to arrive at its average complexity (count of operations performed) for both datasets. (sketch below)
- 5: Normalize both datasets to the range [0, 1], implement the Bucket Sort (BS) algorithm, and check it for correctness. (sketch below)
- 6: Experiment with BS to arrive at its average complexity for both the UD and ND datasets, and draw inferences.
- 7: Implement the worst-case linear median selection algorithm, taking the median of medians (MoM) as the pivot element, and check it for correctness. (sketch below)
- 8: Take different sizes for the trivial partitions (3/5/7, ...) and observe how the running time changes.
- 9: Perform experiments by rearranging the elements of the datasets (both UD and ND) and comment on the partition (split) obtained when the MoM is chosen as the pivot element. (sketch below)
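
For steps 1-A and 1-B, a minimal sketch of dataset construction, assuming NumPy is available; the sizes, distribution parameters, and file names are illustrative choices, not prescribed by the steps above.

```python
# Sketch for steps 1-A and 1-B (sizes, distribution parameters, and
# file names are illustrative assumptions).
import numpy as np

def make_datasets(n, seed=42):
    """Return one uniform (UD) and one normal (ND) dataset of length n."""
    rng = np.random.default_rng(seed)
    ud = rng.uniform(low=0.0, high=1000.0, size=n)   # uniform distribution
    nd = rng.normal(loc=500.0, scale=100.0, size=n)  # normal distribution
    return ud, nd

if __name__ == "__main__":
    ud, nd = make_datasets(2 ** 20)      # "large": ~1M values, as an example
    np.save("ud.npy", ud)                # persist for the later experiments
    np.save("nd.npy", nd)
    print(ud[:5], nd[:5])
```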
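
For step 2-A, a possible Merge Sort with a correctness check against Python's built-in sorted(); a sketch only, with an illustrative input size.

```python
# Sketch for step 2-A: top-down Merge Sort plus a correctness check.
import random

def merge_sort(a):
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):     # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

if __name__ == "__main__":
    data = [random.random() for _ in range(10000)]
    assert merge_sort(data) == sorted(data)      # correctness check
    print("merge sort OK")
```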
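
For step 2-B, one possible Quick Sort (in place, Lomuto partition with the last element as pivot), again checked against sorted(). The pivot rule is an assumption; any deterministic rule would serve as the baseline.

```python
# Sketch for step 2-B: in-place Quick Sort with a Lomuto partition.
import random

def quick_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]                        # deterministic last-element pivot
    i = lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]    # place the pivot
    quick_sort(a, lo, i)
    quick_sort(a, i + 2, hi)

if __name__ == "__main__":
    data = [random.random() for _ in range(10000)]
    expected = sorted(data)
    quick_sort(data)
    assert data == expected              # correctness check
    print("quick sort OK")
```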
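
For step 3, a rough counting harness: MS and QS instrumented with operation counters and run on UD and ND inputs whose sizes grow in powers of 2. What exactly is counted (comparisons and element moves for MS, comparisons and swaps for QS) and the size range are assumptions.

```python
# Sketch for step 3: counting comparisons/moves/swaps for MS and QS
# on UD and ND inputs of size 2^k (the size range is illustrative).
import numpy as np

def merge_sort_counted(a, counts):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort_counted(a[:mid], counts)
    right = merge_sort_counted(a[mid:], counts)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counts["comparisons"] += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
        counts["moves"] += 1                      # each element moved into the output
    merged.extend(left[i:]); merged.extend(right[j:])
    counts["moves"] += (len(left) - i) + (len(right) - j)
    return merged

def quick_sort_counted(a, lo, hi, counts):
    if lo >= hi:
        return
    pivot, i = a[hi], lo - 1                      # last-element pivot, Lomuto partition
    for j in range(lo, hi):
        counts["comparisons"] += 1
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
            counts["swaps"] += 1
    a[i + 1], a[hi] = a[hi], a[i + 1]
    counts["swaps"] += 1
    quick_sort_counted(a, lo, i, counts)
    quick_sort_counted(a, i + 2, hi, counts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for k in range(10, 15):                       # sizes 2^10 .. 2^14
        n = 2 ** k
        for name, data in (("UD", rng.uniform(0, 1, n)),
                           ("ND", rng.normal(0, 1, n))):
            ms_counts = {"comparisons": 0, "moves": 0}
            merge_sort_counted(list(data), ms_counts)
            qs_counts = {"comparisons": 0, "swaps": 0}
            arr = list(data)
            quick_sort_counted(arr, 0, n - 1, qs_counts)
            print(f"n=2^{k:<2} {name}  MS {ms_counts}  QS {qs_counts}")
```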
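
For step 4, a sketch of randomized QS that picks a uniformly random pivot and averages the comparison count over a few trials per distribution; the trial count and input size are illustrative assumptions.

```python
# Sketch for step 4: randomized Quick Sort (random pivot) with a
# comparison counter, averaged over several trials for UD and ND inputs.
import numpy as np

def rqs(a, lo, hi, rng, counts):
    if lo >= hi:
        return
    p = rng.integers(lo, hi + 1)         # random pivot index in [lo, hi]
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo - 1
    for j in range(lo, hi):
        counts["comparisons"] += 1
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    rqs(a, lo, i, rng, counts)
    rqs(a, i + 2, hi, rng, counts)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, trials = 2 ** 14, 5
    for name, sample in (("UD", lambda: rng.uniform(0, 1, n)),
                         ("ND", lambda: rng.normal(0, 1, n))):
        total = 0
        for _ in range(trials):
            counts = {"comparisons": 0}
            rqs(list(sample()), 0, n - 1, rng, counts)
            total += counts["comparisons"]
        print(f"{name}: average comparisons over {trials} runs = {total / trials:.0f}")
```

Because the pivot is random, the average comparison count should come out close to the classical ~1.39 n log2 n regardless of whether the input is UD or ND.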
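
For steps 5 and 6, a sketch of min-max normalization into [0, 1] followed by Bucket Sort with n buckets and insertion sort inside each bucket, plus a correctness check and a simple bucket-occupancy printout; the choice of one bucket per element is an assumption.

```python
# Sketch for steps 5 and 6: normalize to [0, 1], then Bucket Sort with
# n buckets and insertion sort inside each bucket.
import numpy as np

def normalize(a):
    a = np.asarray(a, dtype=float)
    return (a - a.min()) / (a.max() - a.min())       # maps the data into [0, 1]

def bucket_sort(a):
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:
        idx = min(int(x * n), n - 1)                 # clamp x == 1.0 into the last bucket
        buckets[idx].append(x)
    out = []
    for b in buckets:
        for k in range(1, len(b)):                   # insertion sort per bucket
            key, i = b[k], k - 1
            while i >= 0 and b[i] > key:
                b[i + 1] = b[i]
                i -= 1
            b[i + 1] = key
        out.extend(b)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    for name, raw in (("UD", rng.uniform(0, 1000, 2 ** 14)),
                      ("ND", rng.normal(500, 100, 2 ** 14))):
        data = normalize(raw)
        result = bucket_sort(list(data))
        assert result == sorted(data)                # correctness check
        occupancy = np.bincount(np.minimum((data * len(data)).astype(int), len(data) - 1))
        print(f"{name}: sorted OK, largest bucket holds {occupancy.max()} elements")
```

With UD data the elements spread fairly evenly over the buckets, while ND data (even after normalization) concentrates in the middle buckets, so the per-bucket insertion sorts would be expected to do noticeably more work in that case.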
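
For steps 7 and 8, a sketch of worst-case linear selection with the median of medians as pivot, where the trivial-partition (group) size is a parameter so that 3/5/7/... can be timed against each other; the input size and the timing loop are illustrative assumptions.

```python
# Sketch for steps 7 and 8: MoM-based selection with a configurable
# group size, plus a correctness check and a simple timing comparison.
import random, time

def mom_select(a, k, group=5):
    """Return the k-th smallest element (0-based) of list a."""
    if len(a) <= group:
        return sorted(a)[k]
    # median of each group of `group` consecutive elements
    medians = [sorted(a[i:i + group])[len(a[i:i + group]) // 2]
               for i in range(0, len(a), group)]
    pivot = mom_select(medians, len(medians) // 2, group)
    lows  = [x for x in a if x < pivot]
    highs = [x for x in a if x > pivot]
    pivots = len(a) - len(lows) - len(highs)
    if k < len(lows):
        return mom_select(lows, k, group)
    if k < len(lows) + pivots:
        return pivot
    return mom_select(highs, k - len(lows) - pivots, group)

if __name__ == "__main__":
    random.seed(3)
    data = [random.random() for _ in range(2 ** 16)]
    expect = sorted(data)[len(data) // 2]
    assert mom_select(data, len(data) // 2) == expect    # correctness check
    for g in (3, 5, 7, 9):
        t0 = time.perf_counter()
        mom_select(data, len(data) // 2, group=g)
        print(f"group size {g}: {time.perf_counter() - t0:.3f} s")
```

As a design note, the textbook analysis yields a linear worst-case bound only for group sizes of 5 and above; with groups of 3 the recurrence no longer solves to linear time.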
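
For step 9, a small experiment sketch: rearrange each dataset a few ways (shuffled, ascending, descending), choose the pivot via the recursive median-of-medians rule, and report what fraction of the elements falls to the left of the split; the arrangements and the group size of 5 are assumptions.

```python
# Sketch for step 9: how balanced is the split when the pivot is the MoM,
# under different rearrangements of the UD and ND datasets.
import numpy as np

def median_of_medians(a, group=5):
    if len(a) <= group:
        return sorted(a)[len(a) // 2]
    medians = [sorted(a[i:i + group])[len(a[i:i + group]) // 2]
               for i in range(0, len(a), group)]
    return median_of_medians(medians, group)

def split_ratio(a):
    pivot = median_of_medians(list(a))
    smaller = sum(1 for x in a if x < pivot)
    return smaller / len(a)              # fraction of elements left of the pivot

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    for name, data in (("UD", rng.uniform(0, 1, 2 ** 14)),
                       ("ND", rng.normal(0, 1, 2 ** 14))):
        for label, arranged in (("shuffled", rng.permutation(data)),
                                ("ascending", np.sort(data)),
                                ("descending", np.sort(data)[::-1])):
            print(f"{name} {label}: left fraction = {split_ratio(arranged):.3f}")
```

Because the MoM pivot is guaranteed to leave a constant fraction of the elements on each side, the reported fraction should stay in a moderate band (roughly 0.3 to 0.7) no matter how the data are rearranged.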