Tutorial on time and memory complexity


Motivation

I have seen that many beginners do not really understand how to compute complexity and make lots of mistakes in their calculations. Even more skilled programmers can make mistakes in this topic. The main focus of this blog is a more intuitive way to understand time and memory complexity. There will also be a more formal treatment, but in a spoiler.

What is complexity

In simple words, complexity is just the number of operations that a program performs. For example, if you have a loop from 1 to 1000 with step 1, that is 1000 operations, and if in each iteration you make one addition and one multiplication, it is already 3000 operations (since you perform these two operations 1000 times). Memory works the same way: if you make an array of 100 elements, it has memory complexity 100.

In general, however, it is hard to calculate complexities exactly, and it is also not really needed, since a computer performs 1 operation really fast, and likewise 10 or 1000 operations. So when we calculate complexities, we do not look at constants. It means that $$$n$$$ operations (where $$$n$$$ is some variable with a numeric value) are the same as $$$2\cdot n$$$ operations, and $$$n+100$$$ operations are the same as $$$n$$$ operations. In the same spirit, when we made 1647859 operations, we can just say that we made about $$$10^6$$$ operations, which is much easier to work with. The reason is that this makes complexities easier to calculate, since you do not need to account for every single operation. Also, since programs use variables which have some range, we will express complexities in terms of variables as well.
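As a tiny illustration (a sketch; the bound 1000 mirrors the example above, and the operation counts in the comments follow the same informal counting):

```cpp
#include <cstdio>

int main() {
    long long result = 0;
    // The loop itself is 1000 operations; each iteration also does
    // one multiplication and one addition, so about 3000 operations
    // in total. Dropping the constant factor, for a bound of n this
    // is simply O(n).
    for (int i = 1; i <= 1000; i++) {
        result += 1LL * i * i;  // one multiplication, one addition
    }
    printf("%lld\n", result);
}
```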

Types of complexities

  • $$$O$$$. This notation is used when we want to say that our program performs at most this many operations. It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$O(n)$$$, but it also has complexity $$$O(n^2)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity describes is $$$b$$$, then $$$a\le b$$$.

  • $$$o$$$. This notation is used when we want to say that our program performs strictly fewer operations than this complexity. It means that if your program has one loop which takes $$$n$$$ operations, it can have complexity $$$o(n^2)$$$, and also $$$o(n^9)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity describes is $$$b$$$, then $$$a<b$$$.

  • $$$\Theta$$$. This notation is used when we want to say that our program performs the same number of operations as this complexity. It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$\Theta(n)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity describes is $$$b$$$, then $$$a=b$$$.

  • $$$\Omega$$$. This notation is used when we want to say that our program performs at least as many operations as this complexity. It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$\Omega(n)$$$, and it is also $$$\Omega(1)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity describes is $$$b$$$, then $$$a\ge b$$$.

  • $$$\omega$$$. This notation is used when we want to say that our program performs strictly more operations than this complexity. It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$\omega(1)$$$ and also $$$\omega(log(n))$$$, but not $$$\omega(n)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity describes is $$$b$$$, then $$$a>b$$$.
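To tie the notations together, here is a sketch of one linear scan described by all five of them (the function name maxElement and the use of a vector are just for illustration):

```cpp
#include <vector>

// A linear scan over n elements: the loop body runs exactly n times
// (assumes a is non-empty). In the notations above, this function is:
//   Theta(n)                — proportional to n,
//   O(n), and also O(n^2)   — at most that many operations,
//   Omega(n), also Omega(1) — at least that many operations,
//   o(n^2)                  — strictly fewer than n^2 operations,
//   omega(1), omega(log(n)) — strictly more than a constant or log(n).
int maxElement(const std::vector<int>& a) {
    int best = a[0];
    for (int x : a) {
        if (x > best) best = x;
    }
    return best;
}
```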

Formal definition (not needed for understanding)

In the rest of the blog I will use only big $$$O$$$ notation, since it is, in my opinion, the most important and popular one.

Evaluating complexities

There are some rules for evaluating complexities and here are some of them:

  • $$$O(C)=O(1)$$$ for a constant C.

  • $$$O(x)+O(C)=O(x)$$$ for a constant C and variable x.

  • $$$O(x)+O(y)=O(x+y)=O(max(x,y))$$$ for variables x and y.

  • $$$O(x)\cdot O(y)=O(x\cdot y)$$$ for variables x and y.

  • $$$O(x)+O(x)=O(x)$$$ for variable x.
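These rules correspond directly to loop structure. Here is a sketch (the bounds n and m are arbitrary variables) of the sum rule and the product rule:

```cpp
// Two sequential loops: O(n) + O(m) = O(n + m) = O(max(n, m)).
long long sequential(int n, int m) {
    long long ops = 0;
    for (int i = 0; i < n; i++) ops++;  // O(n)
    for (int j = 0; j < m; j++) ops++;  // O(m)
    return ops;  // n + m operations in total
}

// Nested loops: the inner O(m) loop runs n times,
// so O(n) * O(m) = O(n * m).
long long nested(int n, int m) {
    long long ops = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            ops++;
    return ops;  // n * m operations in total
}
```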

Example 1
Example 2
Example 3

As we can see, computing a complexity is essentially solving a math inequality. This is why we need to memorize some standard cases, like the harmonic series, so as not to reach wrong conclusions about the complexity. A classic instance is sketched below.
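Here is a sketch of the harmonic-series pattern (the bound n is arbitrary): the double loop below looks quadratic at first glance, but the inner loop runs $$$\frac{n}{i}$$$ times, and $$$\sum_{i=1}^{n}\frac{n}{i}=O(n\cdot log(n))$$$:

```cpp
// Looks like O(n^2), but the inner loop makes n/i iterations for
// each i, so the total work is
// n/1 + n/2 + ... + n/n = n * (1 + 1/2 + ... + 1/n) = O(n * log(n)).
long long harmonic(int n) {
    long long ops = 0;
    for (int i = 1; i <= n; i++)
        for (int j = i; j <= n; j += i)
            ops++;
    return ops;
}
```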

More advanced examples

There is a popular technique named divide and conquer. For code using this technique, the complexity is harder to analyze. For example,

Code 1
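The code in the spoiler is not reproduced here; judging by the analysis below, the recursive find might look roughly like this (the signature and the half-open range convention are assumptions):

```cpp
#include <vector>

// Binary search on a sorted array: each call halves the range
// [lo, hi) and recurses into only one side, so T(n) = T(n/2) + c.
// Returns the index of target, or -1 if it is absent.
// (The name `find` follows the blog's text; the rest is assumed.)
int find(const std::vector<int>& a, int lo, int hi, int target) {
    if (hi - lo == 1) return a[lo] == target ? lo : -1;
    int mid = (lo + hi) / 2;
    if (target < a[mid])
        return find(a, lo, mid, target);   // go left
    else
        return find(a, mid, hi, target);   // go right
}
```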

One can note that this is just the binary search algorithm. But if, instead of choosing a side with the if condition, we went recursively into both find calls, it might seem that the algorithm would have a really big complexity, while in fact it would work in $$$O(n)$$$. To analyze this code, we need additional notation: $$$T(n)$$$. It is really similar to $$$O(n)$$$, and in fact it is the same; it is only needed to avoid confusion when solving recursive time complexities.

For example, we can denote the first call of the find function (from main) as $$$T(n)$$$, since we are looking for a value among $$$n$$$ other values. Then each call halves the number of values and makes just one additional operation to decide which side to go to. So, $$$T(n)=T(\frac{n}{2})+c$$$. Here, c is some constant. We will use it instead of $$$O(1)$$$, and we will see why in a second. We can try to solve this recurrence by iterating it. What I mean is that if $$$T(n)=T(\frac{n}{2})+c$$$, then $$$T(\frac{n}{2})=T(\frac{n}{4})+c$$$, so $$$T(n)=T(\frac{n}{4})+2\cdot c$$$. Continuing like this, we can perform such a step $$$log(n)$$$ times, so $$$T(n)=T(\frac{n}{2^{log(n)}})+log(n)\cdot c$$$, and since $$$T(1)=1$$$, we get $$$T(n)=1+log(n)\cdot c=O(log(n))$$$. Here is why we used a constant and not $$$O(1)$$$: when we iterate the recursion, we get $$$2\cdot c$$$, which is also $$$O(1)$$$, and writing it as $$$O(1)$$$ would hide that growth. It would be the same mistake as in the third example of the previous section. Now, let's look at this code:

Code 2
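Again, judging by the analysis below, a plausible shape of this code (the name sum and the half-open range are assumptions) is:

```cpp
#include <vector>

// Sums a[lo..hi) by splitting the range in half and recursing into
// both halves: T(n) = 2 * T(n/2) + c, with T(1) = 1 (a single
// element is returned directly).
long long sum(const std::vector<int>& a, int lo, int hi) {
    if (hi - lo == 1) return a[lo];
    int mid = (lo + hi) / 2;
    return sum(a, lo, mid) + sum(a, mid, hi);
}
```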

One can see that this just finds the sum of the whole array, so it works linearly, i.e. in $$$O(n)$$$, but let's get to that. Here $$$T(n)=2\cdot T(\frac{n}{2})+c$$$, since we again make a constant number of operations, but recurse two times instead of one as in the previous example. Also note that $$$T(1)=1$$$, since if there is only 1 element we return it. So, if we iterate the recurrence, $$$T(n)=2\cdot T(\frac{n}{2})+c=4\cdot T(\frac{n}{4})+3\cdot c=\dots=2^x\cdot T(\frac{n}{2^x})+(2^x-1)\cdot c$$$. We end the recursion when $$$n=1$$$, i.e. $$$x=log(n)$$$, so $$$T(n)=n+(n-1)\cdot c=O(n)$$$. So, it works in linear time. We can also say that each element was checked exactly once, and since the number of elements is linear, the code also works in linear time. This intuition is good enough, but sometimes it can mislead.

Here are some tasks for you if you want to practice:

Spoiler

Also, there exists the Master theorem, which basically solves some instances of this problem. It does not cover all of them, but in practice it is enough. There is also a generalization of it, which is almost useless in practice and hard to understand, but cool in some sense. So, here is the Master theorem. It solves all recurrences which look like this: $$$T(n)=a\cdot T(\frac{n}{b})+c\cdot n^d$$$ for constants $$$a\ge 1$$$, $$$b>1$$$ and $$$d\ge 0$$$.
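For reference, the standard statement compares $$$a$$$ with $$$b^d$$$ and gives three cases:

  • If $$$a<b^d$$$, then $$$T(n)=O(n^d)$$$.

  • If $$$a=b^d$$$, then $$$T(n)=O(n^d\cdot log(n))$$$.

  • If $$$a>b^d$$$, then $$$T(n)=O(n^{log_b(a)})$$$.

For example, the binary search recurrence $$$T(n)=T(\frac{n}{2})+c$$$ has $$$a=1$$$, $$$b=2$$$, $$$d=0$$$, so $$$a=b^d$$$ and $$$T(n)=O(log(n))$$$; the recursive sum has $$$a=2$$$, $$$b=2$$$, $$$d=0$$$, so $$$a>b^d$$$ and $$$T(n)=O(n^{log_2(2)})=O(n)$$$, matching what we derived by hand.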
