Hi, I want to know the time complexity of this code:
#include <bits/stdc++.h>
#define N 100010
using namespace std;
int T[N * 4];
int main()
{
    // memset fills byte-by-byte and uses only the lowest byte of the value
    // passed (0x3f here), so every int in T ends up as 0x3f3f3f3f.
    memset(T, 0x3f3f3f3f, sizeof(T));
}
Many people say that memset's time complexity is O(logN), but in my opinion it is O(N).
Is O(logN) correct? If not, is there any way to do that operation in O(logN)?
Please share your opinions.
Haven't you asked those people who say O(logN) to explain how? It's O(N) in my opinion.
Honestly, no, I haven't asked them.
It's $$$O(n)$$$, but faster because it has a smaller constant factor.
What do you mean by "O(n), but faster"? Do you mean 'with a smaller constant factor'?
Yes. memset has a smaller constant factor because it is a heavily optimised library routine (wide/SIMD writes), and compilers often treat it as an intrinsic.
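To get a feel for the constant factor, here is a minimal benchmark sketch (the array size is borrowed from the question; exact numbers depend on your machine and flags, and with -O2 the compiler may vectorize the loop to a comparable speed):
#include <chrono>
#include <cstdio>
#include <cstring>

const int N = 100010;
int A[N * 4], B[N * 4];

int main() {
    using clk = std::chrono::steady_clock;

    auto t0 = clk::now();
    memset(A, 0x3f, sizeof(A));          // byte-wise fill: every int becomes 0x3f3f3f3f
    auto t1 = clk::now();

    for (int i = 0; i < N * 4; i++)      // plain loop: same O(n), typically a bigger constant
        B[i] = 0x3f3f3f3f;
    auto t2 = clk::now();

    auto us = [](clk::time_point a, clk::time_point b) {
        return (long long)std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
    };
    // Print the values so the compiler can't optimise the fills away.
    printf("memset: %lld us, loop: %lld us (A[0]=%d, B[0]=%d)\n",
           us(t0, t1), us(t1, t2), A[0], B[0]);
}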
I was solving https://codeforces.net/contest/1549/problem/D and was continuously getting TLE. I kept changing the code, then I removed memset and it got AC.
That's because you're using memset on the whole array each time you solve a test case, which leads to a huge number of operations. You should prefer using std::vector with exactly the number of elements you need, unless the time limit is very strict. memset has quite a few demerits (filling happens byte-by-byte, so you can only fill values that are periodic in terms of bytes, and you might forget to multiply by the correct size of the datatype in bytes, and so on), so if you really want to fill a range with something, consider using std::fill instead.
This is often the case when the number of test cases is large and the maximum n is large as well, e.g. $$$T, \max{N} \leq 2 \cdot 10^5$$$. I used to make this mistake of using memset on the complete array for every test, which leads to $$$O(T \cdot \max{N})$$$ complexity just to reset the array; but if you reset only up to the inputted $$$N$$$, then since $$$\sum{N} \leq 2 \cdot 10^5$$$ the problem disappears and the total reset cost is $$$O(\sum{N})$$$. So there's a reason you often see the line "It is guaranteed that the sum of $$$N$$$ over all test cases is less than $$$2 \cdot 10^5$$$".
EDIT: Seems like nor was a bit quicker than me on this one.
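For reference, a minimal sketch of the pattern described above (the names INF, MAXN and dp are made up for illustration):
#include <cstdio>
#include <algorithm>

const long long INF = 0x3f3f3f3f3f3f3f3fLL;
const int MAXN = 200000 + 10;
long long dp[MAXN];

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        // Reset only the prefix this test case uses: O(n) per test and
        // O(sum n) overall, instead of O(t * MAXN) with a full memset.
        // std::fill also works for any value, not just byte-periodic ones.
        std::fill(dp, dp + n + 1, INF);
        // ... read input and solve the test case ...
    }
}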
I have these two macros in my template. It is perhaps a scuffed way of going about it, but if I call the first one, I make the entire array $$$0$$$, and if I call the second one, I initialize only the first $$$n+5$$$ values to $$$0$$$. I also set my array max constants to be max $$$+10$$$ to avoid collisions.
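The macro definitions themselves didn't survive in the comment above; here is a hedged guess at what they could look like (the names RESET_ALL and RESET_N are mine, not the commenter's):
#include <cstring>

// Hypothetical reconstruction: zero the entire array.
#define RESET_ALL(a) memset((a), 0, sizeof(a))
// Hypothetical reconstruction: zero only the first (n) + 5 entries,
// matching the "first n+5 values" described above.
#define RESET_N(a, n) memset((a), 0, sizeof((a)[0]) * ((n) + 5))

const int MAXN = 200000 + 10;  // "max + 10" padding, as the comment suggests
int arr[MAXN];

int main() {
    int n = 1000;
    RESET_ALL(arr);    // the whole array becomes 0
    RESET_N(arr, n);   // only the first n + 5 entries become 0
}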
...
No. You are setting O(N) bytes of memory to a certain value; you can't do that faster than O(N).
It seems that you don't understand what big $$$O$$$ notation means. If some operation has $$$O(n)$$$ time complexity, it means that the time it takes scales linearly as the size of the input ($$$n$$$) grows. For example, an $$$O(n)$$$ operation takes approximately twice as much time on a list twice as long.
If you double the length of the array, memset takes approximately twice as long to execute (regardless of the original array length). Thus, it is $$$O(n)$$$.
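You can check this scaling directly; a minimal sketch (sizes are arbitrary, and caches make the ratio only approximate):
#include <chrono>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    using clk = std::chrono::steady_clock;
    // Try a few buffer sizes, each double the previous one.
    for (size_t n = (size_t)1 << 24; n <= (size_t)1 << 26; n <<= 1) {
        std::vector<char> buf(n);
        auto t0 = clk::now();
        memset(buf.data(), 0x3f, n);     // fill n bytes
        auto t1 = clk::now();
        long long us = (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        // Doubling n should roughly double the time.
        printf("n = %zu bytes -> %lld us (buf[0] = %d)\n", n, us, (int)buf[0]);
    }
}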
Theoretically, you can achieve $$$O(\log n)$$$ if you have a stupid number of threads ($$$O(n)$$$ of them): [link](https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)#/media/File:Binomial_Tree_Broadcast.gif)
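For intuition, here is a sequential toy version of that doubling/broadcast idea: it needs only $$$O(\log n)$$$ copy steps, and with $$$O(n)$$$ processors each step would take $$$O(1)$$$, giving $$$O(\log n)$$$ time; on a single core the total work is of course still $$$O(n)$$$:
#include <cstring>
#include <cstdio>

const size_t N = 1 << 20;
int a[N];

int main() {
    a[0] = 0x3f3f3f3f;                 // step 0: seed one element
    size_t filled = 1;
    while (filled < N) {               // each step doubles the filled prefix
        size_t chunk = (filled < N - filled) ? filled : N - filled;
        memcpy(a + filled, a, chunk * sizeof(int));
        filled += chunk;
    }
    printf("%d %d\n", a[0], a[N - 1]); // both print 1061109567
}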