Monday, March 31, 2014

Sorting and Efficiency

Sorting is a process that organizes a collection of data into ascending or descending
order. So far, I've learned selection sort, insertion sort, quick sort, merge sort, counting sort, Timsort, bubble sort and radix sort from the lectures and reading materials.

The efficiency of a sorting algorithm is related to the number of input items being processed. In practice, bubble sort is rarely used to sort large unsorted data because of its inefficiency, with complexity O(n²). When the number of inputs is small, we usually use selection sort or insertion sort, and we prefer insertion sort on almost-sorted data: both algorithms are O(n²) in general, but only insertion sort has a best case of O(n).
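To see why insertion sort is O(n) on already-sorted data, here is a minimal Python sketch (the function name and details are my own, not the lecture code): the inner loop that shifts elements never runs when every element is already in place, so the whole pass is linear.

```python
def insertion_sort(lst):
    """Sort lst in place: O(n^2) in the worst case, O(n) on sorted input."""
    for i in range(1, len(lst)):
        current = lst[i]
        j = i
        # Shift larger elements one slot right; on already-sorted input
        # this loop body never executes, which gives the O(n) best case.
        while j > 0 and lst[j - 1] > current:
            lst[j] = lst[j - 1]
            j -= 1
        lst[j] = current
```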

The practical general-purpose algorithms are quick sort and merge sort, with average complexity
O(n log n). The idea of quick sort is to choose a pivot, move everything less than the pivot to a list called Left and everything greater than the pivot to a list called Right, quicksort Left and Right recursively, and finally return Left + pivot + Right. For this algorithm, the crucial part is choosing a good pivot; the median would be best, and a poor pivot can produce the worst case, with complexity O(n²). The idea of merge sort is to divide the list of data in half (Left and Right), merge sort the two halves recursively, and finally use the merge function to merge the sorted Left and Right.
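Here is a minimal sketch of both ideas in Python (the helper names are my own; it takes the first element as pivot and uses >= so that duplicates of the pivot land in Right):

```python
def quick_sort(lst):
    """Return a sorted copy of lst, using the first element as pivot."""
    if len(lst) <= 1:
        return lst
    pivot, rest = lst[0], lst[1:]
    left = [x for x in rest if x < pivot]    # everything less than the pivot
    right = [x for x in rest if x >= pivot]  # everything else
    return quick_sort(left) + [pivot] + quick_sort(right)


def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    while left and right:
        result.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return result + left + right


def merge_sort(lst):
    """Return a sorted copy of lst by halving, sorting, and merging."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))
```

Choosing the first element as pivot is the simplest option, but it is exactly what makes an already-sorted list the worst case for quick sort.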

Timsort was derived from insertion sort and merge sort, and it is Python's built-in sorting algorithm. The idea of Timsort is to split the input array into sub-arrays (ordered runs), then use insertion sort on each run, and finally merge the sorted runs to produce the expected array. The best case of Timsort is O(n) and the worst case is O(n log n).
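Here is a toy version of that strategy, assuming fixed-size runs for simplicity (real Timsort detects naturally ordered runs, so this is only a sketch of the idea, not Python's actual implementation; it reuses insertion_sort and merge from the sketches above):

```python
def toy_timsort(lst, run_size=32):
    """Simplified Timsort: insertion-sort small runs, then merge them."""
    runs = []
    for i in range(0, len(lst), run_size):
        run = lst[i:i + run_size]
        insertion_sort(run)                  # sort each small run
        runs.append(run)
    while len(runs) > 1:                     # merge runs pairwise
        merged = [merge(runs[i], runs[i + 1])
                  for i in range(0, len(runs) - 1, 2)]
        if len(runs) % 2 == 1:
            merged.append(runs[-1])          # odd run out waits for next round
        runs = merged
    return runs[0] if runs else []
```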

Sorting is an important area of study in computer science. Given different sorting methods, we need to compare them with respect to their running times to pick the most efficient algorithm for the particular list of data.


Week #10

We continued with sorting and big-oh in week 10. I picked up the basic concepts of sorting (selection sort and insertion sort) and running-time analysis last week, although I did not fully understand them. In lecture, Prof. Heap introduced quick sort and merge sort.
The idea of quick sort is to choose a pivot element, decide where the pivot goes with respect to the rest of the list, and repeat on the partitions. The idea of merge sort is to divide the list in half, merge sort the halves, then merge the sorted results. I understand the sorting methods well, but not big-oh (running time): I have trouble analyzing the performance of the different sorts, but I think I can figure it out soon.
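One thing that helps me get a feel for running times is simply timing the sorts on random lists of doubling size: an O(n log n) sort should take a bit more than twice as long each time n doubles, while an O(n²) sort should take about four times as long. A rough experiment (assuming a merge_sort like the one sketched above):

```python
import random
import time

def time_sort(sort_func, n):
    """Return the seconds sort_func takes on a random list of n ints."""
    data = [random.randint(0, n) for _ in range(n)]
    start = time.perf_counter()
    sort_func(data)
    return time.perf_counter() - start

for n in [1000, 2000, 4000, 8000]:
    print(n, time_sort(merge_sort, n))  # time grows roughly like n log n
```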

Monday, March 17, 2014

Week #9



A node-based binary tree data structure was introduced last week, and we learned more about it this week. In a binary search tree, the data in the left subtree is less than the data in the root and the data in the right subtree is greater; both the left and right subtrees are themselves binary search trees if they are not None. The nodes of a binary tree are binary tree nodes, each with its own data, a left child and a right child. Below is an example of a binary search tree of depth 3, with root 8 and leaves 1, 4, 7 and 13.
[BST diagram: root 8, leaves 1, 4, 7 and 13]
A BST is efficient because it allows us to ignore half of the remaining data at each step. For instance, suppose we need to search for a value v in a BST whose root holds the data t; then there are three possibilities: t equals v, or t is less than v and we search only the right subtree, or t is greater than v and we search only the left subtree.
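Here is a minimal sketch of that search in Python (the class is my own guess at the lecture's node class, not the exact code):

```python
class BTNode:
    """A binary tree node with data and optional left/right children."""
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right


def bst_search(node, v):
    """Return True iff value v occurs in the BST rooted at node."""
    if node is None:
        return False                      # empty subtree: v is not here
    if node.data == v:
        return True                       # the root's data equals v
    if node.data < v:
        return bst_search(node.right, v)  # v can only be in the right subtree
    return bst_search(node.left, v)       # v can only be in the left subtree
```

Each comparison discards half of the remaining tree, so on a balanced BST the search is O(log n).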

This week I completed Exercise 3 and am starting Assignment 2, Part 2. Again, I'm doing them on my own, and I enjoy coding this way. When something confuses me, I use every resource (Google, TAs, Piazza, office hours) to figure it out. I also found that it helps to read others' slogs, such as http://haleyyewslog.blogspot.ca/, which gave me some sense of how to approach Assignment 2.

Sunday, March 9, 2014

Week #8

I got my term test paper back this week; it was not bad. When I checked the paper, I saw some careless mistakes that lowered my mark, which again taught me how important it is to double-check my code.

This week we learned another recursive data structure: linked lists. A linked list is either an empty list or a node with a value and a reference to a linked list. For example, if L is a linked list, then L is the empty list if L.value is None and L.rest is None. To add a node at the front of L, we use the method prepend. I was very confused about this method and wasn't sure how it worked until I finished lab 8. Now I know that when we call L.prepend(7), the only new node created is a copy of the old front node, which is assigned to L.rest, and then the value of the front node is changed to 7.
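Here is roughly how I now picture it, sketched in Python (the class details are from my memory of lab 8, so treat them as approximate):

```python
class LinkedList:
    """Either the empty list (value and rest are None) or a front node
    with a value and a reference to the rest of the list."""
    def __init__(self, value=None, rest=None):
        self.value, self.rest = value, rest

    def prepend(self, value):
        """Add a new node holding value at the front of this list."""
        # The one new node created is a copy of the current front node...
        self.rest = LinkedList(self.value, self.rest)
        # ...and the front node itself takes on the new value.
        self.value = value
```

So after L.prepend(7), L.value is 7 and L.rest is the copy of the old front of L.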

To get a really clear idea of how a linked list works, drawing pictures of it before and after an operation is helpful; at least I benefited a lot from doing that.

Sunday, March 2, 2014

Recursion

We've been talking about recursion since the beginning of the course, and by now I have some experience with this programming technique from doing the labs and assignments, so I'm gaining deeper insight into it. Generally, recursion is a technique that defines something in terms of itself, which means that functions can call themselves to solve smaller sub-problems.

In computer science, many things are naturally defined recursively or inductively, and for those, recursion can be very helpful. For example, suppose we need to find the maximum number in a possibly nested list of numbers (a list whose elements are either numbers or nested number lists) with a function named rec_max(nested_list). First we create an empty list new_list; second, we look at every element of the list: if the element is a number, we append it to new_list, and if it is a list, we call rec_max(element) and append the result; finally, we use the built-in function max(new_list) to get the maximum number. In this example, the smaller sub-problem is an element that is itself a list, and the base case, which does not lead to a recursive call, is an element that is a number. I know how important the base case is, because without it we would have infinite recursion and the code would not work.
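A minimal sketch of that function (assuming the name rec_max from the example above):

```python
def rec_max(nested_list):
    """Return the maximum number in a possibly nested list of numbers."""
    new_list = []
    for element in nested_list:
        if isinstance(element, list):
            new_list.append(rec_max(element))  # recursive case: a sub-list
        else:
            new_list.append(element)           # base case: a plain number
    return max(new_list)

print(rec_max([1, [9, 2], [3, [17, 0]], 4]))   # prints 17
```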

Recursion provides a powerful way to describe recursive data structures that are partially composed of smaller and simpler instances of themselves, such as nested number lists and trees, and it helps solve a great many programming problems; that is why it is one of the central ideas of computer science. And I'm sure that to learn computer science well, I have to learn recursion well first.

Monday, February 24, 2014

Week #6

I'm ashamed to say that I forgot to update this weekly slog, but better late than never, so here I am.

I was very frustrated during week 6 because I didn't fully complete Assignment 1. Part of the reason is that I had three tests in other courses that week, but I must admit that the main reason was me: I didn't start early, and I didn't seek enough help to solve my problems. I don't expect a good mark on the assignment, but I did learn the lesson of how important it is to manage my time properly.

One more thing worth mentioning is trees, a recursive data structure I just learned this week. Trees are made of nodes, possibly with values, with directed edges between some pairs of nodes. So far, I'm comfortable with trees, and I will see whether I can handle them in the labs and on the term test.
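As a first picture of the structure, here is a minimal node class (my own sketch, not the lecture code):

```python
class Tree:
    """A tree node: a value plus a list of child subtrees (possibly empty)."""
    def __init__(self, value=None, children=None):
        self.value = value
        self.children = children if children is not None else []
```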


Saturday, February 8, 2014

Week #5

Last week we focused on recursion, and there wasn't much new this week: mainly recursion again, but in far greater depth. I thought I was doing well in this course, until I started the assignment.

It took me a while to understand the assignment instructions, and thanks to Danny's explanation in lecture, I finally knew what I needed to do to get started. I still haven't finished the assignment, but I believe I can get it done eventually with more attempts and discussion with peers, and of course by checking Piazza regularly.