ROB 599: Programming for Robotics: Class 11

Complexity

Today we are going to explore complexity, the study of how long a program takes to run (time complexity) and how much memory it needs (space complexity).

How many operations?

We are going to try to analyze several mathematical calculations. In the following, please count each addition, subtraction, multiplication, division, or comparison as one operation, and ignore everything else.

int foo(int a, int b) {
    return a + b;
}

This uses only a single operation.

int foo(int a, int b) {
    for (int i = 0; i < b; i++) {
        a++;
    }
    return a;
}

Now this function does the same thing, but in a different way. How many operations does it take if a = 10 and b = 20?
11.1 Use p4r-clicker to submit your answer

How about this one? And this time, assume that a and b can vary, so give your answer in terms of those numbers.

int foo(int a, int b) {
    int c = 0;
    for (int i = 0; i < a; i++) {
        c++;
    }
    for (int i = 0; i < b; i++) {
        c++;
    }
    return c;
}

11.2 Use p4r-clicker to submit your answer

Now these are all pretty trivial examples because they are just different ways of adding two numbers, but the principle holds: there are always multiple algorithms for any operation we might want to perform.

Let’s consider searching for an element in a list. Suppose we have an array of N arbitrary integers, and we are looking for the number 42. How many operations will it take to find the number or conclude it isn’t in the array? In the best case, the first number you check is 42 and you are done, but in the worst case you have to check every number to discover 42 isn’t there. What would be a reasonable estimate of the number of operations in the average case?
11.3 Use p4r-clicker to submit your answer
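
For reference, a linear search over an unsorted array might look like the following minimal sketch (the function name and signature are just for illustration):

int linear_search(int *arr, int n, int target) {
    // Check each element in turn; one comparison per iteration.
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            return i; // found it
        }
    }
    return -1; // not in the array
}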

Now let’s suppose that our array of N integers has been sorted in ascending order. This means we can use a binary search to look for our number! What is the worst-case number of operations? You may want to look at your binary search code to get the numbers right, but a rough count is fine.
11.4 Use p4r-clicker to submit your answer
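
If you don’t have your binary search code handy, here is a minimal iterative sketch (names are illustrative). Each iteration of the loop halves the remaining search range.

int binary_search(int *arr, int n, int target) {
    int lo = 0;
    int hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // midpoint, written to avoid overflow
        if (arr[mid] == target) {
            return mid; // found it
        } else if (arr[mid] < target) {
            lo = mid + 1; // target can only be in the upper half
        } else {
            hi = mid - 1; // target can only be in the lower half
        }
    }
    return -1; // not in the array
}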

Big-O Notation

You likely found that computing the exact number of operations in each of these cases was a little tedious. In many cases, we are interested in how a program or algorithm behaves as its inputs become large; carefully bookkeeping an extra 2 operations is not worthwhile. It turns out that the vast majority of useful information about an algorithm can be expressed by looking only at the fastest-growing terms in the number of operations and ignoring coefficients. We keep multiple terms only when neither one grows faster than the other. If the largest term is a constant, then we write it as “1”. We do not indicate the base of logarithmic functions. To indicate that we have thrown away all these details, we “hide” them in a function called O, the capital letter O.

For example:

100         -> O(1)
2A+1        -> O(A)
2A+3B       -> O(A+B)
3N^2+log(N) -> O(N^2)

Now, using Big-O notation, what is the time complexity of our binary search through N elements for 42?
11.5 Use p4r-clicker to submit your answer

Choosing a data structure

When a program needs to run efficiently, it is important to consider how the data are going to be stored. The data structures you use will determine both how complicated your program is to write and what algorithms are available to you.

In choosing the data structures, you need to consider what operations on your data will be most common:

  • How many elements will your data structure have to contain?
  • Will you continually add new values?
  • Does your data structure stay constant after it is created?
  • Does the order of your data matter? Does order even have meaning for your data?
  • How often do data get removed/deleted?
  • Do you continuously perform any special operations on your data, such as taking the minimum/maximum, performing k-nearest neighbors, etc.?
  • When you access the data, do you access all the values, or just certain ones?

Depending on these questions, you may come to very different conclusions:

  • If I have a constant number of elements that I will process in order, I use a static array.
  • If I have few elements (say, fewer than 100 total over time), a dynamic array/vector is often the most efficient choice, because the constant factors hidden by Big-O notation will likely dominate the cost. Vectors are ideal when the most common operation is performing some function on every element.
    • Appending and removing elements from the end of the array are both O(1) time.
    • If I need to maintain specific element order, inserting and deleting elements from arbitrary locations each take O(n) time.
    • If I do not care about element order, adding and deleting elements each take O(1) time. We can perform an arbitrary delete by swapping an element with the last element and then deleting the last element (see the sketch after this list).
    • Regardless, finding an element takes O(n) time.
  • If I have continuous data to process in the order it arrives, e.g. network traffic, a queue such as a circular buffer can both append and delete from the beginning and the end in O(1) time (see the sketch at the end of this section).
  • If I want to determine which words are most frequent in a corpus of text, or look up data through some kind of “key”, a hash table can look up entries/words by name or any other key in O(1) time. Even when performance does not matter, hash tables can be very convenient when you want to associate one form of data with another. They are also called hash maps, associative arrays, or dictionaries.
  • If I want to keep track of which unique features I have found in an image or document, or keep track of a set of arbitrary elements, a hash set (a hash table that doesn’t store values) can insert elements into the set, check whether an element is contained in the set, and remove elements from the set, all in O(1) time.
  • If I need to keep elements in sorted order so that I can iterate through sorted ranges of values or find a desired element by value/key quickly, a B-tree can insert, delete, and find elements in O(log n) time. B-trees are famously used in file systems and databases.
  • B-trees and hash tables are alternatives because both can quickly insert, delete, and find elements by key. In general, hash tables are faster, but they cannot maintain any order, so iterating over one visits elements in an arbitrary order. A B-tree is necessary to quickly iterate through a range of values.
  • If I want to implement the A* search algorithm, I need to process and remove the smallest element at each step. A priority queue implemented with a binary heap maintains elements in a quasi-sorted order, giving us the minimum or the maximum element (but no others) in O(1) time, and allowing us both to remove that minimum/maximum element and to add an entirely new element in O(log n) time.
  • Although they are a fun exercise for learning about pointers, linked lists are only very rarely a good choice of data structure. They suffer from memory fragmentation, high memory overhead, frequent memory allocations, and a lack of random access. Some of their true uses include concurrent lock-free data structures and kernel programming. Those are very niche and advanced use-cases!
  • There are many other data structures that have interesting properties for very specific applications, such as the ternary search tree we implemented for encoding a spell-check dictionary. One of the data structures above (excepting linked lists) will be an excellent fit for something like 99% of use cases.
  • As an example, computing k-nearest neighbors in high dimensions is a common task for path planning algorithms such as the rapidly exploring random tree (RRT). The k-d tree is a specialized data structure for computing k-nearest neighbors efficiently with up to about 20 dimensions.
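
As referenced above, here is a minimal sketch of the order-destroying O(1) delete from a dynamic array (the names are just for illustration; n is the current logical size of the array):

// Remove the element at index i in O(1) time, without preserving order,
// by moving the last element into the hole and shrinking the array.
void swap_remove(int *arr, int *n, int i) {
    arr[i] = arr[*n - 1];
    (*n)--;
}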

Whenever you have a data structure that must handle relatively large numbers of elements, choose the simplest data structure that makes your most common operations take constant O(1) time. In general, the more complex the data structure or algorithm, the more overhead involved.
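
To make the queue option above concrete, here is a minimal sketch of a fixed-capacity circular buffer (all names are illustrative, and overflow handling is omitted for brevity):

#define CAPACITY 64

typedef struct {
    int data[CAPACITY];
    int head; // index of the oldest element
    int size; // number of elements currently stored
} queue_t;

// Append a value at the end in O(1) time. Assumes the queue is not full.
void queue_push(queue_t *q, int value) {
    q->data[(q->head + q->size) % CAPACITY] = value;
    q->size++;
}

// Remove and return the oldest element in O(1) time. Assumes non-empty.
int queue_pop(queue_t *q) {
    int value = q->data[q->head];
    q->head = (q->head + 1) % CAPACITY;
    q->size--;
    return value;
}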

“Quiz”: Choosing a data structure

Choices:

  1. Static array
  2. Dynamic array
  3. Queue
  4. Hash Map
  5. B-tree
  6. Priority queue
  7. Linked list

Suppose that your robot has a lidar sensor it is using to localize, or find its location. The lidar provides a full scan at 20 Hz. Each scan contains 16 scan lines coming from the sensor at different vertical angles. Each scan line itself is composed of 900 ranges. Processing lidar data is relatively slow and may occur either faster or slower than 20 Hz depending on the scan and the accuracy of the robot’s prior on its location. Which data structure(s) do you use to hold this information?
11.6 Use p4r-clicker to submit your answer

Now suppose your robot is trying to determine if its lidar is broken. It does this by looking for consistency between the location estimates from its lidar and those from its other sensors (odometry and visual landmarks). It constructs a factor graph with nodes at locations the robot has been and at the locations of the visual landmarks. It then adds factors, or edges, to the graph that mark the xy-theta transformations between the various robot and landmark nodes. As the robot moves around, adding robot location nodes and sensor factors, it looks for closed cycles in this graph. When it finds a cycle, it combines the transformations in the factors of that cycle. If the overall transformation is close to the identity, it considers those measurements consistent, since the loop ended up back where it started. If not, at least one of the factors/measurements used in the loop was likely inaccurate. Older nodes and factors are discarded once the number of nodes becomes large enough.

In summary, there are nodes and there are edges. What data structure(s) can we use to efficiently look for cycles?
11.7 Use p4r-clicker to submit your answer