# ROB 502: Programming for Robotics: Class 11

``````c
int foo(int a, int b) {
    for (int i = 0; i < b; i++) {
        a++;
    }
    return a;
}
``````

Now this function does the same thing, but in a different way. How many operations does it take if `a = 10` and `b = 20`?

Each time through the loop, we have to check `i < b`, perform `a++`, and then `i++`. The very last time through the loop, we check `i < b`, find it is false, and exit the loop. This results in 20 * 3 + 1 = 61 operations.
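To sanity-check this count, here is a small instrumented sketch. The function `foo_counted` and its operation counter are illustrative additions of mine, not part of the class code; they tally one operation per comparison and per increment, following the counting convention above.

``````c
// Instrumented version of foo: tallies one "operation" for each
// comparison and each increment. The ops counter is an illustrative
// addition, not part of the original class code.
int foo_counted(int a, int b, int *ops) {
    int i = 0;
    for (;;) {
        (*ops)++;              // check i < b (counted even on the final, failing check)
        if (!(i < b)) break;
        a++; (*ops)++;         // a++
        i++; (*ops)++;         // i++
    }
    return a;
}
``````

For `a = 10` and `b = 20`, this returns 30 and reports 61 operations, matching the count above.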

How about this one? And this time, assume that `a` and `b` can vary, so give your answer in terms of those numbers.

``````c
int foo(int a, int b) {
    int c = 0;
    for (int i = 0; i < a; i++) {
        c++;
    }
    for (int i = 0; i < b; i++) {
        c++;
    }
    return c;
}
``````

Very similar to before, except now we get a * 3 + 1 operations for the first loop and b * 3 + 1 operations for the second loop, for a total of 3a + 3b + 2 operations.

Let’s consider searching for an element in a list. Suppose we have an array of `N` arbitrary integers, and we are looking for the number 42. How many operations will it take to find the number or conclude it isn’t in the array? In the best case, the first number you check is 42 and you are done, but in the worst case you have to check every number to discover 42 isn’t there. What would be a reasonable estimate of the number of operations in the average case?

In the best-case scenario, we enter the loop, check the index (`i < N`), and then check the number (`array[i] == 42`), for a total of 2 operations. In the worst-case scenario, we complete the full 3N + 1 operations like before. On average we have to check half the elements, which here amounts to averaging the best and worst cases: (2 + 3N + 1) / 2 = 1.5N + 1.5 operations.
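For reference, a linear search might look like the following sketch. The name `linear_search` is mine; the class code may differ, but the per-iteration operations are the ones counted above.

``````c
// Linear search: returns the index of target in vals, or -1 if absent.
// (linear_search is an illustrative name, not from the class code.)
int linear_search(const int *vals, int n, int target) {
    for (int i = 0; i < n; i++) {   // check i < n: 1 op per iteration
        if (vals[i] == target) {    // check vals[i] == target: 1 op
            return i;
        }
        // i++ at the end of the loop body: 1 op -> 3 ops per full iteration
    }
    return -1;  // checked all n elements: roughly 3N + 1 operations
}
``````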

Now let’s suppose that our array of `N` integers has been sorted in ascending order. Now we can use a binary search to look for our number! Now what is the worst case number of operations? You may want to look at your binary search code to get the numbers right, but the rough number is fine.

Each iteration of the binary search divides the remaining search space in half. This means we will need at worst log2(N) iterations to find our number. In each iteration, my code for binary search checks a while loop condition (`low_i < high_i`) for 1 operation, calculates the next index to check (`(low_i + high_i) / 2`) for 2 more operations, checks the value (`vals[mid_i] >= search_num`) for 1 more operation, in the worst case also checks `vals[mid_i] < search_num` for another operation, and finally sets `low_i = mid_i + 1` for 1 last operation. This makes a total of 6 operations per iteration. After exiting the loop, I have 2 more operations to check whether we actually found anything. This makes for a total of 6 * log2(N) + 2 operations, although your own code could have a different result. As a binary search, though, the log2(N) part is what matters.

Finally, I want to note that in the linear array search we have a leading constant of 3 operations per iteration, but for this more complex binary search we have 6 operations per iteration. This is a common general trend: more complex algorithms can run faster when N is large, but may actually run slower for smaller N.
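A binary search matching that operation count might look like the following sketch. The variable names (`low_i`, `high_i`, `mid_i`, `vals`, `search_num`) follow the description above, but the exact code used in class may differ.

``````c
// Binary search over an array sorted in ascending order.
// Returns the index of search_num, or -1 if it is not present.
// (A sketch matching the per-iteration operations counted above;
// the actual class code may be structured differently.)
int binary_search(const int *vals, int n, int search_num) {
    int low_i = 0;
    int high_i = n;                            // exclusive upper bound
    while (low_i < high_i) {                   // 1 op
        int mid_i = (low_i + high_i) / 2;      // 2 ops
        if (vals[mid_i] >= search_num) {       // 1 op
            high_i = mid_i;
        } else {                               // vals[mid_i] < search_num: 1 op
            low_i = mid_i + 1;                 // 1 op
        }
    }
    // 2 more ops after the loop to check if we actually found anything
    if (low_i < n && vals[low_i] == search_num) {
        return low_i;
    }
    return -1;
}
``````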

## Big-O Notation

You likely found that computing the exact number of operations in each of these cases was a little tedious. In many cases, we are interested in how a program or algorithm behaves as its inputs become large; properly bookkeeping an extra 2 operations is not worthwhile. It turns out that the vast majority of useful information about an algorithm can be expressed by only looking at the fastest-growing terms in the number of operations, and ignoring coefficients. We only keep multiple terms when they both grow at the same speed. If the largest term is a constant, then we write it as "1". We do not indicate the base of logarithmic functions. To indicate that we have thrown away all these details, we "hide" them in a function called O, the capital letter O.

For example:

``````
100         -> O(1)
2A+1        -> O(A)
2A+3B       -> O(A+B)
3N^2+log(N) -> O(N^2)
``````

Now, using Big-O notation, what is the time complexity of our binary search through N elements for 42?

The binary search in my code takes a worst case of 6 * log2(N) + 2 operations. Dropping all the constants, including the implicit constant in the logarithm being base-2, we get O(log(N)) as the complexity.

### "Quiz": Choosing a data structure

Choices:

1. Static array
2. Dynamic array
3. Queue
4. Hash Map
5. B-tree
6. Priority queue