Time Complexity Big O
For instance, let's assume n is the length of the input to the program, and the program's time complexity is 3n^2 + 2. Then, for large enough n, this inequality will hold: 3n^2 + 2 <= cn^2
The c on the right side is some positive real constant; the definition only requires that at least one such number above zero exists. The purpose of this constant is to absorb all the constants added or multiplied on the left-hand side, so that the left side is 'upper-bounded' by the right.
The n_0 on the graph represents a 'breakpoint' where the red line (function) becomes the upper bound of the blue function. Any value of n past this point will always satisfy the relationship:
f(n) <= cg(n).
Similarly, there will be some breakpoint for 3n^2 + 2 <= cn^2 as well, past which cn^2 will always be greater than or equal to 3n^2 + 2.
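To make that concrete (my own numbers here, not from lecture): picking c = 4 works, since 3n^2 + 2 <= 4n^2 exactly when n^2 >= 2, so n_0 = 2 is a valid breakpoint. A quick sanity check in Python:

```python
# Illustrative check: with c = 4, the inequality 3n^2 + 2 <= 4n^2
# holds for every n >= 2, so n_0 = 2 is a valid breakpoint.
# (At n = 1 it fails: 5 > 4, which is why we need the breakpoint at all.)
c = 4
assert all(3 * n**2 + 2 <= c * n**2 for n in range(2, 10_000))
```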
This was a crucial piece of information for the class, because it bridged our intuition about taking the largest power as the Big O with our mathematical sensibilities.
I think this was one of the few times that CSC165 played any sort of role in CSC148; anyone taking that course would not only understand the definition of big O, but also be able to formally prove and disprove even the ugliest-looking time complexities using underestimation and overestimation. An ego boost for those students, I'm sure.
All of this time complexity analysis allowed us to parse the running times of quicksort and mergesort quite fluidly. I personally gained some intuition about mergesort beyond the fact that the 'breakdown' step runs in log n time (the input is halved at each step until the pieces are down to single elements) and the 'merge' step runs in n time, which together give the n log n total.
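Just to pin that down for myself, here's a minimal mergesort sketch (my own toy version, not the course's code); the recursive halving produces the log n levels, and the merging loop does the n work at each level:

```python
def merge_sort(lst):
    """Sort lst: ~log n levels of halving, ~n work of merging per level."""
    if len(lst) <= 1:                 # a single element is already sorted
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])      # 'breakdown' step: halve the input
    right = merge_sort(lst[mid:])
    # 'merge' step: walk both halves once, always taking the smaller front item
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # one half may have leftovers
    merged.extend(right[j:])
    return merged
```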
Hashing
Next we talked about hashing and its ability to make lookups, insertion and deletion even faster than quicksort and mergesort. The reason for such quick operations is that each element is 'hashed', i.e. given a numerical identifier via a function that tells you where to store or find it. I really wish this topic had been taught before assignment 2 was due, as it would have helped people speed up their programs' running time. I discovered this myself, and utilized an Ordered Dictionary for faster lookups and insertion.
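For illustration (a toy comparison I put together, not assignment code), the difference shows up directly in membership tests: a list has to scan its elements one by one, while a hashed container like a set or dict jumps straight to the right slot.

```python
import timeit

items = list(range(100_000))
as_list = items
as_set = set(items)    # hashed container

# Looking up a value near the end: the list scans, the set hashes.
print(timeit.timeit(lambda: 99_999 in as_list, number=1_000))  # O(n) per test
print(timeit.timeit(lambda: 99_999 in as_set, number=1_000))   # O(1) on average
```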
We also lightly covered hash collisions, where two different items can end up with the same identifier. One method of dealing with this is chaining, where a kind of linked-list situation occurs and multiple items are 'filed' under the same ID number.
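Here's a bare-bones sketch of chaining as I understand it (my own toy code, not anything from lecture): each slot of the table holds a list, and colliding items simply get appended to the same slot's list.

```python
class ChainedHashTable:
    """Toy hash table using chaining: each bucket is a list of (key, value)."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def insert(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # collision? just file it in the chain

    def lookup(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)
```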
The other method of dealing with collisions is 'probing', and from what I understand, it basically finds the next empty space in the hash table and places the item there.
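And a matching sketch of linear probing, again just my own illustration (it assumes the table never completely fills up, since a real version would resize):

```python
class ProbingHashTable:
    """Toy open-addressing table with linear probing (no resizing or deletion)."""

    def __init__(self, size=8):
        self.slots = [None] * size       # None marks an empty slot

    def insert(self, key, value):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)   # step to the next slot, wrapping around
        self.slots[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None:
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)   # keep probing past other keys
        raise KeyError(key)
```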
Final Thoughts
Overall I feel like the course was a good step up from CSC108, as it moved beyond mastering Python mechanics to working with interesting data structures and problems (recursion, and building the assignment 2 puzzle solver).
I think the high point for me was figuring out how to make both the depth-first and breadth-first solvers work in assignment 2. I must have some kind of masochistic element within me, because I hated every minute of figuring out what was wrong, right up until the program actually worked. Then my brain did some acrobatics and rewired itself to interpret the past hours of debugging as enjoyable.
The key lesson learned from that was understanding what makes a program slow down or speed up, which depends a lot on loop usage, data structures (lists versus dicts, deques versus regular lists), and fully understanding tree structures and linked lists.
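One concrete case of that (my own example, not from the assignment): popping from the front of a plain list shifts every remaining element over, while collections.deque pops from either end in constant time, which is exactly what a breadth-first queue wants.

```python
from collections import deque

queue_as_list = list(range(10_000))
queue_as_deque = deque(range(10_000))

# list.pop(0) is O(n): every remaining element shifts left by one,
# so draining the whole list this way is quadratic overall.
while queue_as_list:
    queue_as_list.pop(0)

# deque.popleft() is O(1): built exactly for queue-style access.
while queue_as_deque:
    queue_as_deque.popleft()
```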
I single out linked lists because they were entirely crucial to getting the solvers right, as you had to build a singly linked list using a 'parent' attribute.
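To show what I mean, here's a stripped-down sketch (not the actual assignment code): each state remembers the state it came from, and once the solver reaches the goal, you just walk the 'parent' links back to the start to recover the solution path.

```python
class Node:
    """A puzzle state plus a link to the state it came from."""

    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent   # the node the solver expanded to reach this one


def path_to(node):
    """Walk the parent links back to the start; reverse to get start -> goal."""
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    path.reverse()
    return path


# e.g. the solver reached 'goal' via start -> a -> goal
start = Node("start")
a = Node("a", parent=start)
goal = Node("goal", parent=a)
print(path_to(goal))   # ['start', 'a', 'goal']
```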
All in all, it's just really great assignment design to incorporate almost everything we learned so far in 108 and 148.
