Suleyman.A's blog

By Suleyman.A, history, 4 years ago, In English

Hi, I want to know the time complexity of this code:

#include <bits/stdc++.h>

#define N 100010

using namespace std;

int T[N*4];

int main()
{
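	// memset's fill value is truncated to a single byte (0x3f here),
	// which still sets every int in T to 0x3f3f3f3f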
	memset(T, 0x3f3f3f3f, sizeof(T));
}

Many people say that memset's time complexity is O(log N), but in my opinion it is O(N).

Is O(log N) right? If not, is there any way to do that operation in O(log N)?

Please share your opinions.



»
4 years ago, # |

Haven't you asked those people who say O(logN) to explain how? It's O(N) in my opinion.

»
4 years ago, # |

It's $$$O(n)$$$, but faster because it has a smaller constant factor.

  • »
    »
    4 years ago, # ^ |

    What do you mean by O(n), but faster? Do you mean 'with smaller constant factor'?

    • »
      »
      »
      4 years ago, # ^ |

      Yes. Memset has a smaller constant factor because the compiler does more optimisations.
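
      Worth adding that the reverse also happens: at higher optimisation levels (e.g. -O2/-O3), GCC and Clang commonly recognise a plain fill loop and lower it to a memset call or to wide vectorised stores. A minimal sketch (the function name zero_bytes is made up here):

      #include <cstddef>

      // An optimising compiler will typically replace this whole loop
      // with a single memset call or wide SIMD stores; that lowering is
      // where the small constant factor comes from.
      void zero_bytes(char *p, std::size_t n) {
          for (std::size_t i = 0; i < n; ++i)
              p[i] = 0;
      }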

»
3 years ago, # |

I was solving this problem: https://codeforces.net/contest/1549/problem/D. I was continuously getting TLE and kept changing the code; then I removed memset and it got AC.

  • »
    »
    3 years ago, # ^ |

    That's because you're calling memset on the whole array for every test case, which adds up to a huge number of operations. Prefer a std::vector with exactly the number of elements you need unless the time limit is very strict. memset also has quite a few demerits: filling happens byte by byte, so you can only fill values that are periodic in terms of bytes, and it's easy to forget to multiply by the size of the data type in bytes. If you really want to fill a range with something, consider using std::fill instead, as in the sketch below.
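
    A minimal sketch of the difference (array name and sizes are made up here):

    #include <algorithm>
    #include <cstring>

    int a[100010];

    int main() {
        // memset fills byte by byte, so it only works for byte-periodic
        // patterns like 0, -1 or 0x3f, and its size argument is in bytes.
        std::memset(a, 0x3f, sizeof(a));
        // std::fill assigns whole elements: any value works, and the range
        // can be limited to just the prefix you actually use.
        std::fill(a, a + 1000, 42);
    }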

  • »
    »
    3 years ago, # ^ |

    This often happens when the number of test cases is large and the maximum $$$n$$$ is large as well, e.g. $$$T, \max{N} \leq 2 \cdot 10^5$$$. I used to make the mistake of calling memset on the complete array for every test case, which costs $$$O(T \cdot \max{N})$$$ just to reset the array. If instead you reset only up to the inputted $$$N$$$, then since $$$\sum{N} \leq 2 \cdot 10^5$$$ the total resetting cost drops to $$$O(\sum{N})$$$ and the problem disappears. So there's a reason you often see the line "It is guaranteed that the sum of $$$N$$$ over all test cases is less than $$$2 \cdot 10^5$$$". A sketch of the pattern follows below.
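
    A minimal sketch of this reset pattern (names and limits are made up here):

    #include <algorithm>
    #include <cstdio>

    const int MAXN = 200010;
    int a[MAXN];

    int main() {
        int t;
        std::scanf("%d", &t);
        while (t--) {
            int n;
            std::scanf("%d", &n);
            // Reset only the prefix this test case uses:
            // O(sum of N) in total instead of O(T * MAXN).
            std::fill(a, a + n, 0);
            // ... solve the test case using a[0..n-1] ...
        }
    }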

    EDIT: Seems like nor was a bit quicker than me on this one.

  • »
    »
    3 years ago, # ^ |

    I have these two macros in my template:

    #define init(arr, val) memset(arr, val, sizeof(arr))
    #define bckt(arr, val, sz) memset(arr, val, sizeof(arr[0]) * (sz+5))
    

    This is perhaps a scuffed way of going about it. But if I called

    init(arr, 0);
    

    I would make the entire array $$$0$$$. But if I called

    bckt(arr, 0, n);
    

    I would initialize the first $$$n+5$$$ values to be $$$0$$$. I also set my array size constants to the maximum $$$+10$$$ to avoid going out of bounds.
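
    For what it's worth, here is a sketch of a type-safe alternative (the name bckt_fill is made up); unlike the memset macros it works with any fill value, not just byte-periodic ones like 0 or -1:

    #include <algorithm>
    #include <cstddef>

    // Fill the first sz + 5 elements (capped at the array size) with val.
    template <typename T, std::size_t N>
    void bckt_fill(T (&arr)[N], T val, std::size_t sz) {
        std::fill_n(arr, std::min(N, sz + 5), val);
    }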

»
20 months ago, # |

...

  • »
    »
    20 months ago, # ^ |

    No. You set O(N) bytes of memory to a certain value; you can't do that faster than O(N).

  • »
    »
    20 months ago, # ^ |

    It seems that you don't understand what big $$$O$$$ notation means. If some operation has $$$O(n)$$$ time complexity, it means that the time it takes scales linearly as the size of the object ($$$n$$$) grows. For example, an $$$O(n)$$$ operation takes approximately twice as much time on a list twice as long.

    If you double the length of the array, memset takes approximately twice as long to execute, whatever the original length was. Thus, it is $$$O(n)$$$. A rough timing sketch follows below.
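
    A rough timing sketch of that doubling behaviour (buffer sizes are arbitrary; the final printf reads the buffer so the compiler can't drop the writes):

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main() {
        for (std::size_t n : {std::size_t(1) << 24, std::size_t(1) << 25}) {
            std::vector<char> buf(n);
            auto t0 = std::chrono::steady_clock::now();
            std::memset(buf.data(), 0x3f, n);
            auto t1 = std::chrono::steady_clock::now();
            long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
            // Expect the 2x larger buffer to take roughly 2x as long.
            std::printf("n = %zu: %lld us (buf[0] = %d)\n", n, us, buf[0]);
        }
    }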

»
20 months ago, # |

Theoretically, you can achieve $$$O(\log n)$$$ if you have a stupid number of threads ($$$O(n)$$$ of them): [link](https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)#/media/File:Binomial_Tree_Broadcast.gif)
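
A serial sketch of that doubling idea, assuming C++17 for std::size: fill one element, then keep copying the already-initialised prefix to double it. Run on one thread this is still $$$O(n)$$$ total work, but there are only $$$O(\log n)$$$ rounds, so with $$$O(n)$$$ threads doing each round's copy in parallel the span becomes $$$O(\log n)$$$:

#include <algorithm>
#include <cstddef>
#include <cstring>
#include <iterator>

int T[1 << 20];

void doubling_fill() {
    T[0] = 0x3f3f3f3f;
    // Each round doubles the initialised prefix: O(log n) rounds total.
    for (std::size_t filled = 1; filled < std::size(T); filled *= 2)
        std::memcpy(T + filled, T,
                    std::min(filled, std::size(T) - filled) * sizeof(int));
}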