I was wondering what a good estimate is for the upper bound on the number of operations that can be done in one second. So I decided to test it and come up with a number myself. But the more I test on different systems, the more confusing the results become. Below is the code I used for testing, which produced widely varying results:
#include <iostream>
#include <ctime>
using namespace std;
#define int long long

const int N = 9e6;
int a[N], b[N];

signed main()
{
    int tt = 1000 * clock() / CLOCKS_PER_SEC;
    for (int t = 1; t <= 100; t++)
    {
        for (int i = 0; i < N; i++)
            a[i] = i % 897896;
    }
    cout << 1000 * clock() / CLOCKS_PER_SEC - tt << "ms\n";
}
The results are as follows. Note that I am on a 64-bit Linux OS with g++ 9.3.0:
- On my system with g++ -std=c++17 :
2436 ms
- On my system with g++ -std=c++17 -static -O2 :
1551 ms
- On Codeforces Custom Test with C++17 :
4641 ms
- On Codeforces Custom Test with C++17 (x64) :
892 ms
- On Windows 8.1 (x64 VBox) with C++14 :
2015 ms
I wanted to ask: what is the reason behind such drastic variation across different systems? Also, what is a good estimate for the number of simple operations that can be done in one second?