\chapter{Sorting}
In this chapter we survey the classic sorting algorithms, from the simple quadratic-time methods to the log-linear algorithms used in practice.
\section{Quadratic-Time Algorithms}
\subsection{Bubble Sort}
\subsection{Selection Sort}
Unlike Bubble Sort, Selection Sort has an actual use case.
While the number of comparisons is always $O(n^2)$, the number of exchanges is only $O(n)$.
That means we perform at most a single swap for each position in the array.
Sorting analysis usually assumes that comparisons are the expensive operation, but if the actual exchange of items is what is expensive, you should definitely consider Selection Sort.
This could be the case if we are moving large records in memory, or writing to a medium where each write is costly.
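A minimal Python sketch makes the swap count visible (the function name and in-place convention here are our own choices, not a fixed standard):

```python
def selection_sort(items):
    """Sort items in place, performing at most n - 1 swaps."""
    n = len(items)
    for i in range(n - 1):
        # Scan the unsorted suffix for the smallest remaining element.
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # At most one exchange per position: O(n) swaps total,
        # even though the scans cost O(n^2) comparisons.
        if smallest != i:
            items[i], items[smallest] = items[smallest], items[i]
    return items
```

Note that the inner loop only reads; the single write per outer iteration is what keeps the exchange count linear.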
\subsection{Insertion Sort}
\section{Log-Linear Sorting Algorithms}
The most commonly used sorting algorithms take $ O(n \lg(n)) $ time.
This is provably optimal for comparison-based sorting: any algorithm that sorts by comparing pairs of items must make $ \Omega(n \lg(n)) $ comparisons in the worst case. %TODO Cite the decision-tree lower bound
\subsection{Tree Sort}
Tree Sort is the simplest algorithm we will cover. Performing a Tree Sort is a matter of three simple steps:
\begin{enumerate}
\item Create a tree.
\item Load the items you want to sort into the tree.
\item Perform an inorder traversal of the tree.
\end{enumerate}
The performance of this algorithm depends completely on the type of tree we create. Using a self-balancing binary search tree, adding $ n $ items to the tree takes $ O(n\lg(n)) $ time and an inorder traversal takes $ O(n) $ steps, for a grand total of $ O(n\lg(n)) $ runtime. Using a binary search tree that does not self-balance means there is a worst-case scenario of $ O(n^{2}) $ for adding all $ n $ items.
Using a tree also means we use extra space, since all the data has to be copied into the tree: $ O(n) $ additional space.
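The three steps above can be sketched in Python. For brevity this sketch uses a plain (non-self-balancing) binary search tree, so it exhibits the $ O(n^{2}) $ worst case noted above; the node layout and helper names are our own:

```python
def tree_sort(items):
    """Insert every item into a BST, then read it back with an inorder walk."""
    # Each node is a list: [value, left_subtree, right_subtree].
    root = None

    def insert(node, value):
        if node is None:
            return [value, None, None]
        if value < node[0]:
            node[1] = insert(node[1], value)
        else:  # duplicates go right, keeping the sort stable enough for values
            node[2] = insert(node[2], value)
        return node

    def inorder(node, out):
        if node is not None:
            inorder(node[1], out)   # left subtree first...
            out.append(node[0])     # ...then the node itself...
            inorder(node[2], out)   # ...then the right subtree.

    for item in items:              # step 2: load the items
        root = insert(root, item)
    result = []
    inorder(root, result)           # step 3: inorder traversal
    return result
```

Swapping the insert routine for one on a balanced tree (e.g. an AVL or red-black tree) would restore the $ O(n\lg(n)) $ guarantee without changing the traversal.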
\subsection{Heap Sort}
You might expect that Heap Sort deserves the same treatment as Tree Sort.
After all, a heap has the same shape as a binary tree, and both structures are built to perform their core operations in $ O(\lg(n)) $ time.
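The heap-based analogue of Tree Sort is short when we lean on Python's standard \texttt{heapq} module (a binary min-heap); this is a sketch of the idea rather than the classic in-place Heap Sort:

```python
import heapq

def heap_sort(items):
    """Build a min-heap from all items, then pop them off in sorted order."""
    heap = list(items)
    heapq.heapify(heap)  # bottom-up heap construction runs in O(n)
    # Each pop costs O(lg n), so the extraction phase is O(n lg n) overall.
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

The textbook version instead heapifies the array in place with a max-heap and swaps the root to the end repeatedly, achieving $ O(1) $ extra space; the asymptotic runtime is the same.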
\subsection{Heapify}
\subsection{Quick Sort}
\subsection{Merge Sort}
\section{Unique Sorting Algorithms}
\subsection{Shell Sort}
The time complexity of Shell Sort is still an open problem.
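Although its exact complexity is open, the algorithm itself is short: it is insertion sort applied over a shrinking sequence of gaps. This sketch uses Shell's original gap sequence $ n/2, n/4, \dots, 1 $ (many other sequences with better known bounds exist):

```python
def shell_sort(items):
    """Gapped insertion sort with the classic halving gap sequence."""
    n = len(items)
    gap = n // 2
    while gap > 0:
        # Insertion-sort the interleaved subsequences of elements gap apart.
        for i in range(gap, n):
            current = items[i]
            j = i
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2  # the final pass with gap == 1 is ordinary insertion sort
    return items
```
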
\subsection{Radix Sort}
All of our prior algorithms relied on sorting items by comparing them with each other; Radix Sort is unique in that no comparisons between items occur.
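To make the no-comparisons claim concrete, here is a sketch of least-significant-digit (LSD) Radix Sort for non-negative integers in base 10; items are only ever bucketed by digit, never compared to one another:

```python
def radix_sort(items):
    """LSD radix sort for non-negative integers; no item-to-item comparisons."""
    if not items:
        return items
    digits = len(str(max(items)))  # number of base-10 digit passes needed
    for d in range(digits):
        # Stable bucketing pass on digit d (0 = least significant).
        buckets = [[] for _ in range(10)]
        for value in items:
            buckets[(value // 10 ** d) % 10].append(value)
        # Reassemble: bucket order plus stability preserves earlier passes.
        items = [value for bucket in buckets for value in bucket]
    return items
```

Because each of the $ d $ passes is linear, the runtime is $ O(d \cdot n) $, which beats $ O(n \lg(n)) $ when the number of digits is small relative to $ n $; the comparison-sort lower bound does not apply since nothing is compared.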
\section{State of the Art Sorting Algorithms}
\subsection{Tim Sort}
\subsection{Quick Sort}
\section{But What if We Add More Computers: Parallelization and Distributed Algorithms}
%GENERATED BY ChatGPT
Parallel sorting algorithms are designed to run on a single computer with multiple processors or cores, while distributed sorting algorithms are designed to run on a network of computers working together. Both can significantly improve sorting performance for large data sets, especially when the data does not fit in the memory of a single machine.
There are many parallel and distributed sorting algorithms, each with its own characteristics and trade-offs. Common techniques include:
\begin{itemize}
\item \textbf{Data partitioning:} splitting the data into smaller chunks that can be sorted independently and then merged back together.
\item \textbf{Load balancing:} distributing the work evenly among the available processors or machines.
\item \textbf{Communication:} letting the processors or machines exchange data during the sorting process.
\end{itemize}
Some representative examples:
\begin{itemize}
\item \textbf{Parallel Merge Sort:} a parallel version of Merge Sort that divides the data into chunks, sorts them in parallel, and then merges the sorted chunks back together.
\item \textbf{MapReduce:} a programming model for distributed computing, often used to sort large data sets on a cluster of computers.
\item \textbf{Bitonic Sort:} a parallel divide-and-conquer sorting algorithm designed to run on a network of processors (a sorting network).
\end{itemize}
Each of these comes with its own trade-offs around partitioning, load balancing, and communication cost.
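The partition-sort-merge structure of Parallel Merge Sort can be sketched with Python's standard library. This sketch uses a thread pool purely to show the structure; in CPython the GIL serializes pure-Python threads, so a real speedup would require \texttt{ProcessPoolExecutor} or similar (the function name and chunking scheme are our own):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_merge_sort(items, workers=4):
    """Data partitioning + parallel chunk sort + k-way merge."""
    # Partition: split the input into roughly equal chunks, one per worker.
    size = max(1, len(items) // workers)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    # Sort each chunk concurrently. Swap in ProcessPoolExecutor for
    # true CPU parallelism on a multi-core machine.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))
    # Merge: heapq.merge performs a k-way merge of the sorted runs.
    return list(heapq.merge(*sorted_chunks))
```

The same three phases (partition, sort locally, merge) reappear in distributed settings, except the chunks live on different machines and the merge requires network communication.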
\subsubsection{Parallel vs.\ Distributed}
\section{Further Reading}
\subsection{Pedagogical Sorting Algorithms}
\subsubsection{Bogo Sort}
\subsubsection{Sleep Sort}
\subsubsection{Stooge Sort}
Stooge Sort is primarily used as a means of testing students on applying the \textbf{Master Theorem} to calculate the time complexity of recursive algorithms.
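A sketch of the algorithm makes the recurrence visible: sort the first two-thirds, then the last two-thirds, then the first two-thirds again, giving $ T(n) = 3\,T(2n/3) + O(1) $ and, via the Master Theorem, a runtime of $ O(n^{\log_{3/2} 3}) \approx O(n^{2.71}) $:

```python
def stooge_sort(items, lo=0, hi=None):
    """Stooge sort: three overlapping recursive calls on 2/3-size subarrays."""
    if hi is None:
        hi = len(items) - 1
    if lo >= hi:
        return items
    # Ensure the endpoints are in order.
    if items[lo] > items[hi]:
        items[lo], items[hi] = items[hi], items[lo]
    if hi - lo + 1 > 2:
        third = (hi - lo + 1) // 3
        stooge_sort(items, lo, hi - third)      # sort the first 2/3
        stooge_sort(items, lo + third, hi)      # sort the last 2/3
        stooge_sort(items, lo, hi - third)      # sort the first 2/3 again
    return items
```
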