CS 117
Ondich
Midterm 2
Due ON PAPER 11:10 AM Monday, March 6, 2000

This is an open book, open notes, and open computer test. You may not use any books other than your textbook, and you may not use Internet resources from off campus. If you get stuck, talk to Jeff Ondich, but please don't talk to anyone else about the exam.

  1. (12 points) Consider the following function.

    
    	// precondition: N > 0
    
    	int MysteryFunction( int a[], int N )
    	{
    		int k = a[0];
    
    		if( N > 1 )
    		{
    			int tmp = MysteryFunction( a, N-1 );
    			if( a[N-1] > tmp )
    				k = a[N-1];
    			else
    				k = tmp;
    		}
    
    		//cout << k << endl;
    		return( k );
    	}
    


  2. (10 points) Consider the following function.

    
    	struct Node
    	{
    		int			data;
    		Node		*next;
    	};
    
    	// precondition: head points to the first node of a linked list,
    	// or is 0 if the list is empty.
    
    	int EnigmaFunction( Node *head )
    	{
    		int		n = 0;
    
    		Node	*current = head;
    		while( current != 0 )
    		{
    			n++;
    			current = current->next;
    		}
    
    		return( n );
    	}	
    


  3. (6 points)



  4. (12 points) Consider the weird little program sesame.cpp.



  5. (2 points) Tell me a joke.

  6. (14 points) Take a look at the sorting program sorts.cpp that we examined in class. This problem will give you a chance to compare the performance of Selection Sort, Insertion Sort, and Merge Sort.

    Selection, Insertion, and Merge Sorts are all members of a class of sorting algorithms whose performance is usually measured by counting the number of array-element comparisons that must be done to complete the sorting of the array. That is, we count the number of times something like "if( a[i] < a[j] )" gets executed during the run of the algorithm. (Note that in some algorithms, we've stashed a[i] into a temporary variable, so the comparison looks like "if( tmp < a[j] )".) It would take several pages to provide a reasonably rigorous argument that the comparison count is a good measure of performance for these algorithms. With a lot less rigor, we can say something like this: by counting comparisons, we are counting the number of iterations of the inner loop, and by counting inner loop iterations, we are measuring the most time-consuming portion of the algorithm.

    So, in what follows, you're going to count comparisons and measure running times for the three algorithms and several values of N. Let's get started.

    1. Fill in the following chart. You'll want to modify the sorting code to include variables that count the array-element comparisons and report those counts to you at the end of sorting. You may find it convenient to use global variables to do this counting.

      To time a program, type "time programname" at the UNIX command line.

      N       Comp. count   Running time   Comp. count   Running time   Comp. count   Running time
              (Sel. Sort)   (Sel. Sort)    (Ins. Sort)   (Ins. Sort)    (Merge Sort)  (Merge Sort)
      -----------------------------------------------------------------------------------------
      125          .             .              .             .              .             .
      250          .             .              .             .              .             .
      500          .             .              .             .              .             .
      1000         .             .              .             .              .             .
      10000        .             .              .             .              .             .
      20000        .             .              .             .              .             .
      30000        .             .              .             .              .             .
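
      As a sketch only (the variable and function names here are illustrative, not taken from sorts.cpp), one way to use a global counter to tally array-element comparisons in a sort might look like this:

      ```cpp
      #include <iostream>
      using namespace std;

      // Hypothetical global counter, incremented once per array-element
      // comparison, as the problem suggests.
      long comparisonCount = 0;

      // Selection Sort instrumented to count comparisons.
      void SelectionSort( int a[], int N )
      {
          for( int i = 0; i < N - 1; i++ )
          {
              int minIndex = i;
              for( int j = i + 1; j < N; j++ )
              {
                  comparisonCount++;           // count this array-element comparison
                  if( a[j] < a[minIndex] )
                      minIndex = j;
              }
              int tmp = a[i];                  // swap the smallest remaining element
              a[i] = a[minIndex];              // into position i
              a[minIndex] = tmp;
          }
      }

      int main()
      {
          int a[] = { 5, 2, 9, 1, 7 };
          SelectionSort( a, 5 );
          cout << "Comparisons: " << comparisonCount << endl;   // prints "Comparisons: 10"
          return 0;
      }
      ```

      You would report the counter's value after each sort finishes, resetting it to 0 between runs. To measure the wall-clock time of a run, use the UNIX command described above, e.g. "time sorts".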

    2. Describe any patterns you see in the data you collected to fill in the chart above. (You are, of course, welcome but not required to add extra lines to the chart if you want to test your pattern hypotheses further.)

    3. Is the comparison count for Selection Sort dependent on the initial data? (For example, if you run Selection Sort several different times on different randomly generated arrays of 10000 items, is the comparison count the same every time?) How about Insertion Sort? Merge Sort?

    4. Under what conditions would you choose to use Merge Sort instead of Insertion Sort and Selection Sort? When would you choose Insertion Sort? When would you choose Selection Sort?