estoy-re-sebado's blog

By estoy-re-sebado, history, 5 weeks ago, In English

I was trying a problem related to a recent blog (1009F - Dominant Indices). When I submitted I got a compilation error with this message:

Can't compile file:
Compiled file is too large [34657593 bytes], but maximal allowed size is 33554432 bytes [CompileRequest {id='program.cpp', description='', file='program.cpp', resources='', type='cpp.gcc13-64-winlibs-g++20'}].

In this case I just resubmitted with a few different compilers until it worked. Any idea what could be triggering this, so I can avoid it during contests? (I know a compilation error doesn't cost a penalty, but it's a time loss that could be prevented.) I had never gotten this message before, and I found it really strange because the source code is barely over 1.2kB and I'm not doing any heavy compile-time computations.

Submission: 285079574


By estoy-re-sebado, history, 5 months ago, In English

I placed 54th among rated participants in the recent EDU 166 round. With a delta of +121, I reached a rating of 2127. However, if you open my profile, it still displays the rank of Candidate Master, and my username is still shown in purple. This makes me believe that, along with the other people affected by this bug, we might be some of the strongest CMs of all time.

TL;DR: I have not been granted the rank of Master. This is outrageous. It's unfair.


By estoy-re-sebado, history, 7 months ago, In English

Hello, my team is going to ICPC World Finals in Luxor and we were wondering what the money situation is like.

We've heard that in Egypt (much like in my country, Argentina) there is a parallel currency market that offers much more favorable rates for those with foreign currencies. So, we have some specific questions (though any other information is very much welcome).

  • Is it easier to get good rates for US Dollars or Euros?
  • Where can we find the up-to-date official rate?
  • Where can we find up-to-date rates for the parallel market?
  • Where in Luxor can we exchange USD or EUR for EGP at the official rate?
  • Where in Luxor can we exchange USD or EUR for EGP at parallel market rates?

Thank you, everyone!


By estoy-re-sebado, history, 9 months ago, In English

I recently broke both of my hands, which forced me to type rather slowly, so I've had to come up with some strategies to optimize my contest performance. To be honest, it wasn't that bad (and it's less noticeable the longer a contest is, since implementation speed becomes less relevant; for example, I got 11th place in a 5-hour individual contest at the Brazilian ICPC Summer School a week ago), but I'm still very happy to announce that they removed both my casts today and I'm finally back to "properly" coding using both hands.

I say "properly" in quotes because I'm not really back to 100% yet, as months of immobilization caused my muscles and ligaments to slightly atrophy, which causes some stiffness and pain when I move the affected joints. This is apparently easily solved by a month of rehab :^).

Also, because of the way the bone came back together, the index finger on my right hand is now slightly rotated towards my middle finger, which makes my index finger kinda bad for typing. This can be solved by further surgery or, apparently, by just twisting it into the proper alignment over the course of a year, which will slowly reform the bone into its proper shape. (I should be able to consciously keep it in proper alignment using my hand muscles after rehab, so it should be fine for practical purposes much sooner)

My rating will probably tank but I'm doing the upcoming div2 nonetheless (see you there!). After all, we have the ICPC LATAM super regional coming up in March, so I can't afford to stop training right now.

Finally, I want to show some pictures of what the casts look like, the scars from the surgery (which I only had on my right hand), and a right-hand X-ray I got today.

pictures

See you on the next round!

Cheers

UPDATE: rehab is progressing smoothly — I reached master for the first time in the last contest :^)


By estoy-re-sebado, history, 10 months ago, In English

A few minutes ago, ICPC World Finals 2023 contestants (and presumably coaches, and also 2022 people) received an email with the following content:

19 January 2024

2023 ICPC World Finals Participant,

ICPC and AASTMT are thrilled to announce the new dates for the ICPC 2023 World Finals in Egypt: April 14-19, 2024.

Your updated invitation letter will be posted to your dashboard by end of day Tuesday, January 23, 2024. You will receive additional information on applying for your Egyptian visa at that time.

We are moving the location to a famous city on the Nile River. We are requesting additional flights which should be available when we publicly announce the official location no later than January 31, 2024. Please wait to book your flight until then.

The 2023 ICPC World Finals Organizers

So it looks like WF will be on-site and in Egypt :^)

UPDATE: No email was sent yet, but the ICPC website was updated with information about the location. ICPC WF 2022/2023 will be in Luxor!

UPDATE: An email was sent but it hilariously has the placeholder 'XXXX' instead of the actual city.

2023 ICPC World Finals Participants,

Your invitation to ICPC World Finals Egypt should appear on your dashboard over the next 48 hours! You will receive confirming email.

See you in XXXX, Egypt!

The ICPC World Finals Team


By estoy-re-sebado, history, 10 months ago, In English

You might remember that I broke my right hand a while back. Well... I tripped on the stairs and broke my left hand now.

Since having two broken hands somewhat gets in the way of typing, I tried to use a hands-free, voice-activated tool called Cursorless. Verdict: I couldn't get used to it. Luckily I only broke my pinky finger, and I can still move my thumb, index and middle finger, so I am still able to type. Just even slower than before.

I did some rounds on a burner account, and I performed 200 rating points below my normal (I think my normal is around 1900; my current rating is uncommonly high).

At this point, my main bottleneck appears to be inputting code into the computer (it can take something like 40 minutes on some problems), so I've started experimenting with some less common strategies, like starting with problem D and working backwards (in div 2). My reasoning is that, due to the score distribution, it's better to run out of time to implement A and B and only solve CD, than to run out of time on D and solve only ABC.

It works quite well: after this change, it looks like I might be able to stay only 100 points below my normal. I will try starting with E next time, but I'm not always able to solve E, so it might not work.

Don't take anything for granted, especially your health! But remember that where there is a will, there is a way! I failed to get used to it but maybe you can tie both your hands behind your back and use Cursorless on the next rated round?

Cheers


By estoy-re-sebado, history, 11 months ago, In English

Two weeks ago I broke my right hand. This is forcing me to code one-handed for the time being. One might think this is a total disaster, but I only perform 100-150 rating points below my normal (tested on a burner account), and it's getting closer the more I practice.

Some things I've noticed:

  • I type and edit code at roughly 35% of my usual speed
  • during contests I usually fully solve a problem in my head but lack the time to implement it
  • I rarely find the simplest solution to problems that are hard for me but, if I did, I probably would have enough time to implement it

Some conclusions to draw:

  • if you are atrocious at typing, you can maybe gain +100 rating points by improving it. If you're OK at it, don't bother.
  • finding simpler solutions leads to more solved problems which will increase your rating a lot more than speed

So, if you're looking for a challenge, join the club. ~~Break your own hand with a hammer~~ Tie one of your hands behind your back for the next rated round!

Cheers


By estoy-re-sebado, history, 13 months ago, In English

TLDR: just take this blog of mine, sprinkle in some canonical skew-binary numbers, and boom: you have a weird version of the Fenwick tree that is asymptotically faster than the Fenwick tree on an obscure operation.

It also turns out that we can perform point updates without commutativity or inverse operations, which the plain Fenwick tree can't do by itself.

The idea

I explained on my previous blog that we can think of the Fenwick tree as a set of intervals, and that we can use those intervals to answer any range query in $$$O(\log(N)^{2})$$$. But there is no real reason to use exactly the same intervals that the Fenwick tree uses. Instead we can try to use some other set of intervals to make things more efficient (at least in an asymptotic sense).

Instead, we use intervals based on the jump pointers described on that one blog from the catalog. That blog claims that we can jump any distance from any node in $$$O(\log(N))$$$ by following a greedy algorithm (the same greedy I used to perform range queries on a Fenwick tree). It follows that we can use the same kind of thing to perform range queries in $$$O(\log(N))$$$ time.

To be clear, I'll put things in bullet points:

  • We want to compute range sum queries on some array A
  • The data structure stores tree[i] = sum of A(i-jmp[i], i] (yes, an open-closed interval)
  • The jumps are defined recursively: if jmp[i] == jmp[i-jmp[i]] then jmp[i+1] = 2*jmp[i]+1 and otherwise jmp[i+1] = 1
  • The greedy algorithm to perform queries goes as follows:
int A[MAXN+1], jmp[MAXN+1], tree[MAXN+1];

int range(int i, int j) { // sum of A[i..j], inclusive, 1-indexed
  int ans = 0;
  while (j-i+1 > 0) {
    if (j-i+1 >= jmp[j]) { // interval (j-jmp[j], j] fits inside [i, j]: take it whole
      ans += tree[j];
      j -= jmp[j];
    } else {               // otherwise take the single element A[j]
      ans += A[j];
      j -= 1;
    }
  }
  return ans;
}

The above algorithm performs any query in $$$O(\log(N))$$$ time. This is due to some properties of skew-binary numbers (trust me bro...). Sadly, I haven't figured out how to find the jumps on-line, but we can just precompute them:

void init() {
  jmp[1] = jmp[2] = 1;
  for (int i = 2; i < MAXN; ++i) { // writes jmp[i+1], so stop at MAXN-1
    if (jmp[i] == jmp[i-jmp[i]]) {
      jmp[i+1] = 2*jmp[i]+1;
    } else {
      jmp[i+1] = 1;
    }
  }
}
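
For example, with jmp[0] taken to be 0, the recurrence gives jmp[1..10] = 1, 1, 3, 1, 1, 3, 7, 1, 1, 3. Note that every jump length is of the form $$$2^t-1$$$, which is exactly the set of digit weights of canonical skew-binary numbers.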

Some observations:

  • An interval is either a single element or the concatenation of two adjacent intervals plus one more element
  • The intervals are well-nested, and their lengths are of the form $$$2^t-1$$$
  • It follows that each position is covered by up to $$$\log(N)+O(1)$$$ intervals

Taking advantage of these facts, we will do updates by recomputing all the intervals that cover the updated position. In particular, we will first recompute the interval that ends at the updated position, then the smallest interval that strictly covers it and so on.

The implementation is straightforward: just update the array then walk the sequence of increasingly larger intervals and recompute them.

void update(int i, int x) { // assign A[i] = x
  A[i] = x;
  if (jmp[i] == 1) tree[i] = x, i = cover(i); // single-element interval
  for (; i <= MAXN; i = cover(i)) {
    // composite interval: left half + right half + its last element
    tree[i] = tree[i-1 - jmp[i-1]] + tree[i-1] + A[i];
  }
}

It is possible to find the next covering interval on-line (thanks bicsi!). Remember that we construct an interval of length $$$2k+1$$$ exactly when there are two adjacent intervals of length $$$k$$$.

This tells us that if the interval that ends at position i-jmp[i] has the same length as the one that ends at position i (namely jmp[i] == jmp[i-jmp[i]]), then these two will be covered by an interval of length 2*jmp[i]+1 that ends at position i+1.

In the opposite case we must have jmp[i] == jmp[i+jmp[i]] (imagine the jmp array is infinite) so it will be covered by an interval that ends at position i+jmp[i]+1.

int cover(int i) {
  return jmp[i] == jmp[i-jmp[i]] ? i + 1 : i + jmp[i] + 1;
}
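
To put the pieces together, here is a minimal usage sketch (my addition, not part of the original post; it assumes MAXN >= 8 and the usual headers):

int main() {
  init();                                    // precompute all jumps
  for (int i = 1; i <= 8; ++i) update(i, i); // build A = [1, 2, ..., 8]
  printf("%d\n", range(3, 6));               // prints 18 = 3+4+5+6
}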

Conclusion

We have invented a Fenwick tree-flavored data structure that does $$$O(\log(N))$$$ range queries and updates, and works with non-commutative operations that don't have an inverse. I had never heard of this particular data structure so I will claim this is a completely new, never seen before data structure.

But surprise: it's dog slow in practice. Don't use it to perform range queries. But maybe the idea could be useful in some ad-hoc problem or whatever.

UPD: After some changes I measured this to be about as fast as a recursive segment tree. (I will admit I was very sloppy, my measurement was just submitting a problem on CSES: https://cses.fi/problemset/task/1648/)

Thanks to ponysalvaje for explaining to me how the Fenwick tree works.

Thanks to bicsi for the idea of how to implement non-additive updates and finding the covering intervals on-line.

Full code


By estoy-re-sebado, history, 13 months ago, In English

This is not a useful blog, just something I noticed and thought to write down. It's probably been talked about before as well.

Background: Fenwick Tree

The Fenwick tree is a data structure commonly used to perform prefix sum queries and point updates on some array $$$A$$$. Leaving the issue of updates aside, the general idea is that, for each index $$$i$$$, it stores the sum of $$$A$$$ over the range $$$(i-f(i), i]$$$ (yes, that's an open-closed interval), where $$$f(i)$$$ returns the least significant bit of $$$i$$$.

To compute a prefix sum over $$$[1, i_1]$$$ we start with the interval $$$(i_2, i_1]$$$ (where $$$i_2 = i_1-f(i_1)$$$), then we add $$$(i_3, i_2]$$$ (where $$$i_3 = i_2 - f(i_2)$$$), and so on until we cover the entire range. Since each $$$i_{k+1}$$$ has one fewer bit than $$$i_k$$$, this can only happen $$$O(\log i_1)$$$ times in the worst case. Therefore any prefix sum query is computed in $$$O(\log n)$$$ worst case running time.

int fenwick_tree[MAXN+1], A[MAXN+1]; // 1-indexed
int f(int i) { return i & -i; } // least significant bit of i

int prefix(int i) { // sum of A[1..i]
  int x = 0;
  for (; i > 0; i -= f(i)) x += fenwick_tree[i];
  return x;
}

It is also easy to compute any range sum query over a range $$$[l, r]$$$ by taking the sum over the prefix $$$[1, r]$$$ and subtracting $$$[1, l-1]$$$.

int range(int l, int r) {
  return prefix(r) - prefix(l-1);
}

Generalizing

As presented above, instead of sums, we could compute prefix queries for any associative operation, and we could compute range queries for any associative and commutative operation that has an inverse. The commutativity requirement can be easily removed at the expense of more memory usage and a longer implementation. The requirement of an inverse, however, is inherent to the idea of computing a range as the difference of two overlapping prefixes, but it is not needed if we achieve range queries in some other way.

Range queries without inverse

Given this data structure, there is a simple greedy algorithm for computing range queries over $$$[l, r]$$$:

  • if $$$(r - f(r), r]$$$ is contained in $$$[l, r]$$$, accumulate that range and then compute $$$[l, r-f(r)]$$$
  • otherwise, accumulate $$$A[r]$$$ and then compute $$$[l, r-1]$$$

It can be shown that this algorithm runs in $$$O(\log(n)^{2})$$$ in the worst case.

I will not write a proof, but the intuition is that $$$f(i - f(i))$$$ is at least twice $$$f(i)$$$ (or zero, when $$$i - f(i) = 0$$$). That means the jump length keeps doubling, so in $$$O(\log n)$$$ steps we either cover the interval or overshoot. When we overshoot, we have already covered at least half of the remaining interval, so this can only happen $$$O(\log n)$$$ times.

The code looks something like this:

int range(int l, int r) {
  int x = 0;
  while (r-l+1 > 0) {
    if (f(r) <= r-l+1) { // (r-f(r), r] fits inside [l, r]: take it whole
      x += fenwick_tree[r]; r -= f(r);
    } else {             // otherwise take the single element A[r]
      x += A[r]; r -= 1;
    }
  }
  return x;
}

I used the sum operation for clarity, but it also works for any associative operation with an identity element.

int range(int l, int r) {
  int x = IDENT;
  while (r-l+1 > 0) {
    if (f(r) <= r-l+1) {
      x = OP(fenwick_tree[r], x); r -= f(r);
    } else {
      x = OP(A[r], x); r -= 1;
    }
  }
  return x;
}

For instance, we could use it with the min operation:

#define IDENT INT_MAX
#define OP min

Conclusion

The Fenwick tree can perform arbitrary range queries on associative operations, but if we want to do any kind of updates at all, we will need a commutative operation. Also, if we want to perform point-assignment updates (the most common kind for RMQ, among others), we would need an inverse operation.

It has worse complexity than e.g. segment trees, and in practice I measured it to be even slower than the obvious sqrt-decomposition approach.

The only situation where I would use this is if we were forced to use the input array in-place, as the Fenwick tree uses exactly N words of memory and is not that hard to build in-place over an existing array.
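
As a sketch of that last claim (my own addition, using the 1-indexed conventions and f from above): the standard $$$O(N)$$$ in-place construction simply adds each partial sum into its parent interval.

// Turn A[1..n] into its own Fenwick tree: afterwards A[i] holds the sum
// of the original values over (i - f(i), i].
void build(int A[], int n) {
  for (int i = 1; i <= n; ++i) {
    int j = i + f(i); // smallest interval strictly containing (i - f(i), i]
    if (j <= n) A[j] += A[i];
  }
}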

Thus I conclude that this is not a useful data structure at all.


By estoy-re-sebado, history, 17 months ago, In English

There once was a man named Hugo Ryckeboer, who came up with a neat way to do static fixed-length RMQ queries in $$$O(1)$$$.

After playing around with it, I was able to extend it to do range queries of arbitrary associative operations in $$$O(1)$$$, with $$$O(N \log N)$$$ precomputation. This is the same time complexity achieved by sparse table with idempotent operations, but my data structure reaches this bound for any associative operation.

EDIT: this data structure is called "disjoint sparse table"

Let's go through some background first.

Prefix sums

Take some array A and compute prefix sums in array P.

A = [ 1,  3,  6,  5,  2,  7,  1,  4]
P = [ 1,  4, 10, 15, 17, 24, 25, 29]

Now we can compute range sum queries in $$$O(1)$$$ by subtracting one prefix from another.

For example, query [2, 7) is $$$25 - 4 = 21$$$.

implementation details
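
A minimal sketch of one possible implementation (my own, not the original spoiler's code; 0-indexed, queries are the half-open [l, r), headers omitted as in the other snippets):

vector<int> A = {1, 3, 6, 5, 2, 7, 1, 4}, P(A.size());

void build() {
	partial_sum(A.begin(), A.end(), P.begin()); // P[i] = A[0] + ... + A[i]
}

int query(int l, int r) { // sum over [l, r)
	return P[r-1] - (l > 0 ? P[l-1] : 0);
}

Here query(2, 7) returns 25 - 4 = 21, matching the example.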

Very neat, but we need an inverse operation (subtraction, in this case).

Postfix-Prefix Sums

Instead, take the array A and compute a suffix sums array S for the first half and a prefix sums array P for the second half.

A =   [ 1,  3,  6,  5,  2,  7,  1,  4]
S,P = [15, 14, 11,  5] [2,  9, 10, 14]

Now we can compute queries by adding a suffix of the first half and a prefix of the second half.

For example, query [2, 7) is $$$11 + 10 = 21$$$.

implementation details
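
Again a minimal sketch of my own (assuming global A, S, P of size N and the halfway point h = N/2):

// S[i] = A[i] + ... + A[h-1] for i < h;  P[i] = A[h] + ... + A[i] for i >= h.
void build() {
	for (int i = h-1; i >= 0; --i) S[i] = A[i] + (i+1 < h ? S[i+1] : 0);
	for (int i = h; i < N; ++i)    P[i] = A[i] + (i > h ? P[i-1] : 0);
}

int query(int l, int r) { // requires l < h and r > h (must cross the middle)
	return S[l] + P[r-1];
}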

Very cool, but queries must cross the halfway point of the array.

Hugo Trick

Instead, take the array A, cut it up into intervals of length k, and compute the prefix and suffix sums of each one.

k = 3
A =  [ 1,  3,  6,  5,  2,  7,  1,  4]
P = [[ 1,  4, 10][ 5,  7, 14][ 1,  5]]
S = [[10,  9,  6][14,  9,  7][ 5,  4]]

Now we can answer any query that crosses from one interval to the next by adding a suffix of the first interval and a prefix of the second.

In particular, notice that we can compute all queries of length k using that idea.

For example, we can now answer [5, 8), whereas before we could not, as it doesn't cross the halfway point of the array.

implementation details
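
A sketch of my own for this version (same global conventions; k is the block length):

// P[i] = sum from the start of i's block up to i;
// S[i] = sum from i to the end of i's block.
void build() {
	for (int i = 0; i < N; ++i)
		P[i] = A[i] + (i % k != 0 ? P[i-1] : 0);
	for (int i = N-1; i >= 0; --i)
		S[i] = A[i] + (i % k != k-1 && i != N-1 ? S[i+1] : 0);
}

int query(int l, int r) { // requires [l, r) to cross a block boundary
	return S[l] + P[r-1];
}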

And that is the entire trick that I was taught years ago. In my country we call it "DP Hugo" or "Hugo Trick".

Very rad, but it is limited to fixed-length queries.

Enhanced Hugo Trick

Let's add the capability for some different-length queries. Take the array A, and cut it up into overlapping intervals of length 2k, spaced by k positions. Then, compute prefix and suffix sums for each one.

k = 3
A =  [ 1,  3,  6,  5,  2,  7,  1,  4]
P = [[ 1,  4, 10, 15, 17, 24][ 1,  5]]
     [ 1,  4, 10][ 5,  7, 14, 15, 19]
S = [[24, 23, 20, 14,  9,  7][ 5,  4]]
     [10,  9,  6][19, 14, 12,  5,  4]

Now, we can still answer any query that crosses from one interval to another, and that now includes all queries of length between k and 2k (inclusive).

implementation details
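
A sketch of the query logic (my own; left_sum[c][d] and right_sum[c][d] are hypothetical tables storing the sum of d elements going left/right from the cut at position c = b*k, mirroring hugo_block in the full code below):

int query(int l, int r) { // requires k <= r-l <= 2k
	int c = r / k * k;      // this cut is guaranteed to lie inside [l, r]
	return left_sum[c][c - l] + right_sum[c][r - c];
}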

Very cute, but still limited to some specific lengths of queries.

Multilevel Hugo Trick (Disjoint Sparse Table)

Instead, create $$$\log N$$$ instances of the data structure described above, with k = 1, 2, 4, 8, 16, ...; then each one can handle query lengths from one power of two to the next.

Now, given some query, you can take the logarithm of its length to know which instance will be able to handle it. The logarithm can be computed in $$$O(1)$$$ using some bit manipulation (std::bit_width in the code below). Then we can answer the query with a single call to the operation.

Conclusion

The examples use range-sum queries, but this will actually work for any associative operation (I also use an identity element to make the implementation easier, but it is not strictly required).

Therefore, we can answer arbitrary range queries in $$$O(1)$$$. More precisely, we can answer range queries with exactly one call to the operation per query.

code:

#include <bits/stdc++.h> // bit_width needs C++20 (<bit>)
using namespace std;
#define sz(x) int((x).size())

#define OPER add  // associative operation
#define IDEN 0    // identity element

int add(int a, int b) { return a + b; }

struct hugo_block {
	vector<int> left, right;
	int cut;

	hugo_block(vector<int>& data, int k, int _cut) : cut{_cut} {
		right.push_back(IDEN);
		for (int i = 0; i < 2*k && cut+i < sz(data); ++i)
			right.push_back(OPER(right.back(), data[cut+i]));

		left.push_back(IDEN);
		for (int i = 0; i < 2*k && cut-i-1 >= 0; ++i)
			left.push_back(OPER(data[cut-i-1], left.back()));
	}

	int query(int l, int r) {
		return OPER(left[cut-l], right[r-cut]);
	}
};

struct hugo {
	vector<hugo_block> blocks;
	int k;

	hugo(vector<int>& data, int _k) : k{_k} {
		for (int b = 0; b*k <= sz(data); ++b) {
			blocks.emplace_back(data, k, b*k);
		}
	}

	int query(int l, int r) {
		return blocks[r/k].query(l, r);
	}
};

struct DisjointSparseTable {
	vector<hugo> hugos;
	int levels;

	DisjointSparseTable(vector<int>& data, int _levels) : levels{_levels} {
		for (int level = 0; level < levels; ++level) {
			hugos.emplace_back(data, 1<<level);
		}
	}

	int query(int l, int r) {
		if (r == l) return IDEN;
		int level = bit_width(unsigned(r-l))-1;
		return hugos[level].query(l, r);
	}
};
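
For reference, a small usage sketch (my addition; levels must be at least $$$\log_2 n + 1$$$ so that every query length is covered):

int main() {
	vector<int> data = {1, 3, 6, 5, 2, 7, 1, 4};
	DisjointSparseTable dst(data, 4); // k = 1, 2, 4, 8 covers lengths up to 8
	printf("%d\n", dst.query(2, 7));  // half-open [2, 7) -> prints 21
}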
A Nicer way?
