Briefly
I am preparing corporate training in the form of a bot challenge. Currently I need to test it with real people, so I am looking for about 20 volunteers. To add some motivation, I propose a small prize of $100 for the winner.
To participate you should have basic skills in JavaScript (if you normally use C++ or Java, you can learn enough of it in an hour).
Here is the project page — you can download the game files and a tool for challenging bots locally. You can also register an account there (see notes below).
More info
As I said, it somewhat resembles a contest or challenge of small programs written in JavaScript — we call them bots. The bots play the game against each other. The game is as follows:
Two players take turns throwing a single die (a cube with the numbers 1..6 on its sides). On his turn, a player may throw the die as many times as he wants, unless a 6 is cast. If the player stops at will and passes the turn to his opponent, all points cast during the turn are added to his score. If a 6 is cast, the turn passes to the opponent and the player scores nothing for that turn. The first player to score 100 or more points wins. More details on the rules can be found in the help at the project page.
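The rules above can be sketched with a quick simulation. This is purely illustrative — the actual bot interface expected by the judge is described on the project page and may differ; the "threshold bot" strategy here is my own example.

```javascript
// Illustrative simulation of the dice game described above. A "threshold
// bot" keeps throwing until it has accumulated `threshold` points in the
// current turn, then stops and banks them.
function playGame(thresholdA, thresholdB) {
  const scores = [0, 0];
  const thresholds = [thresholdA, thresholdB];
  let player = 0;
  while (scores[0] < 100 && scores[1] < 100) {
    let turnPoints = 0;
    while (true) {
      const die = 1 + Math.floor(Math.random() * 6);
      if (die === 6) { turnPoints = 0; break; }      // bust: this turn scores nothing
      turnPoints += die;
      if (turnPoints >= thresholds[player]) break;   // bot decides to stop
    }
    scores[player] += turnPoints;
    player = 1 - player;                             // pass the turn
  }
  return scores[0] >= 100 ? 0 : 1;                   // index of the winner
}

// Rough win-rate estimate of "hold at 20" vs "hold at 15":
let wins = 0;
for (let i = 0; i < 10000; i++) wins += playGame(20, 15) === 0 ? 1 : 0;
console.log(`hold-at-20 won ${wins} of 10000 games`);
```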
Currently I plan to select the winner according to the final rating, using a lottery-like process in which participants with a higher rating have a greater chance of being selected (e.g. if player A's rating is 400 points higher than player B's, his chances are 10 times greater). This process is described on the site too, though it can be discussed and changed (before the final stage starts).
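Under the stated assumption (a 400-point gap means 10x the chance, i.e. a weight proportional to 10^(rating/400)), such a lottery could look like the sketch below; the function and field names are mine, not the project's.

```javascript
// Lottery sketch: each participant's chance of winning is proportional
// to 10^(rating / 400), so a 400-point rating gap means 10x the chance.
function pickWinner(players) {
  const weights = players.map(p => Math.pow(10, p.rating / 400));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < players.length; i++) {
    r -= weights[i];
    if (r < 0) return players[i];
  }
  return players[players.length - 1];  // guard against rounding error
}

const winner = pickWinner([
  { name: "A", rating: 1600 },
  { name: "B", rating: 1200 },  // A is 10x as likely to be picked as B
]);
console.log(winner.name);
```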
I plan to hold the final stage on Nov 24, 2012, though the date can be shifted earlier or later for the participants' convenience — that can be discussed too.
Registration
You can register an account at the project web page. Alternatively, you can write me your e-mail and I'll create an upgraded account for you. If you have already registered an account, you can also "request an upgrade" on your profile page. Non-upgraded accounts are not considered when selecting the winner, so you can register additional accounts for testing purposes. It would be good if your main account had the same name as your account on codeforces.
Testing
Please write me about any bugs (malfunctions) or holes (potential security vulnerabilities) you find. If some problem is found that cannot be fixed at all and prevents our little test contest from being finished, the prize will be awarded to the one who found it.
Feel free to ask any questions here or privately.
Hello,
I did not find any contact information on the page, so I'm posting here.
I registered successfully with the system, but I did not receive the confirmation email containing the password. I tried to request a new one via the "Request a new password" feature, but I still did not receive it.
My username on the system is "0x1337".
Could you help me out please? Thanks
EDIT: Found it in my Gmail account's Spam folder — that's why it did not get forwarded to my Yahoo account.
Hi!
I am sorry — possibly it is because I currently use a mail account on a not very reputable server.
In case of any such problem, always feel free to write me a private message on codeforces and I will set a password for you manually.
Ok, thanks for your support!
Dear friends who are currently helping me in this testing!
Thanks for your participation. I have already discovered a number of bugs (and one incorrect submission even made the judge halt until I fixed and improved it). I am sorry that the proposed "dice game" is somewhat dull and boring, perhaps without much fun. However, it allows me to test problems with rating calculation when bots are close to each other in strength.
That is what I currently want to explain (and possibly get some ideas from you about).
You see, there is some instability in the ratings. That is what I want to discuss, in the hope that some of you can give advice. In short, the problem is as follows: ratings have fluctuations — small random changes which tend to cancel out over time. But when solutions are close to each other in relative strength, these fluctuations may become bigger than the real rating difference between the solutions.
The rating is calculated as the Wikipedia article on the Elo rating system describes, i.e. after each battle between two players we apply the standard Elo update. A few notes:
Every player starts with a rating of 1200, and when a player submits a new solution his rating is not changed (though if the solution is stronger than the previous one, the rating will grow over several games, etc.).
The rating is designed so that if two players differ in rating, the stronger one is expected to win 10^(difference / 400) times more often, roughly speaking. E.g. a player rated 1400 wins 10 games for every 1 he loses against a player rated 1000.
"kfactor" is a coefficient determining how much the rating may change in a single game... I believe that the final rating (over a large number of games) does not depend on this value (am I wrong?).
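For reference, the standard Elo update described in the Wikipedia article can be written down as follows (the function name is mine; the actual judge code may differ):

```javascript
// Standard Elo update: expected score of A is
//   E = 1 / (1 + 10^((Rb - Ra) / 400)),
// and after a game with actual score `scoreA` (1 = win, 0.5 = draw, 0 = loss):
//   Ra' = Ra + kfactor * (scoreA - E).
function eloUpdate(ra, rb, scoreA, kfactor) {
  const ea = 1 / (1 + Math.pow(10, (rb - ra) / 400));  // expected score of A
  const eb = 1 - ea;                                   // expected score of B
  return [ra + kfactor * (scoreA - ea), rb + kfactor * ((1 - scoreA) - eb)];
}

// Two 1500-rated players, A wins, kfactor = 16:
console.log(eloUpdate(1500, 1500, 1, 16));  // → [ 1508, 1492 ]
```

Note that the update is zero-sum: whatever A gains, B loses, so the total rating in the pool is conserved.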
More precisely, the first idea was as follows. Let us use a bigger kfactor for newly entered solutions, so that they reach their real rating level sooner — and a smaller kfactor for solutions which have already played many games, so that their rating changes are finer, without rough random jumps.
At first I used the following simple (and somewhat stupid) formula:
I've noticed rating fluctuations which sometimes make neighboring players swap places. I understand the reason as follows:
If A and B have close ratings, for example 1499 and 1501, then (if the game is not drawn) each of them has an almost equal chance to win. But whoever wins, the rating change is 2 points. And if one of them (due to stochastic fluctuations) wins 3-5 more games than the other over 20-30 games, his rating gains up to 10 points while his opponent loses those 10 points, and they end up at about 1510 and 1490. Surely this tends to cancel out over time, and on the other hand a difference of 20 points is almost nothing (it means winning about 8% more games, I think). However, it is annoying and confusing.
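The drift described above is easy to reproduce with a quick simulation (my own sketch, assuming the standard Elo update and a kfactor of 4, which gives the ~2-point swing per decided game mentioned above):

```javascript
// Two equally strong bots: each decided game is a coin flip, yet the
// ratings random-walk apart by a noticeable margin over a few dozen games.
function simulate(games) {
  let ra = 1500, rb = 1500;
  const K = 4;  // with equal ratings, each result moves both ratings by K/2 = 2
  for (let i = 0; i < games; i++) {
    const sa = Math.random() < 0.5 ? 1 : 0;              // coin-flip outcome
    const ea = 1 / (1 + Math.pow(10, (rb - ra) / 400));  // expected score of A
    ra += K * (sa - ea);
    rb += K * ((1 - sa) - (1 - ea));
  }
  return [ra, rb];
}

const [ra, rb] = simulate(30);
console.log(`after 30 games: ${ra.toFixed(1)} vs ${rb.toFixed(1)}`);
```

Running this a few times typically shows gaps on the order of 10-20 points between bots that are, by construction, exactly equal in strength.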
Yesterday morning I made the following changes to the kfactor evaluation:
I wanted the kfactor to change even less when many games have been played. However, it looks like things became even worse than before. :)
Currently I am going to revert this change.
The other thing I tried was to change "world.js" so that a result of 6:4 or 4:6 is not a draw but a normal win or loss. I thought this should make differences in strength more pronounced. This will remain for some time more, I think. (The changed version is uploaded to the contest page, so you can fetch it.)
UPD: I forgot to mention: it is supposed that to get the final ratings we will freeze all submissions, reset all ratings (to 1200) and game counts (to 0), and then play enough games that each player participates in about 1000 games. (The only significant change proposed here is that in each game every player has a greater chance of playing against those closest to him in rating.)
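One way such proximity-weighted pairing could be done is sketched below. This is only an illustration of the idea: the exponential weighting and the 200-point scale are my assumptions, not the scheme actually used by the judge.

```javascript
// Pick an opponent for `player` with a chance that decays with rating
// distance, so close-rated bots meet more often. The decay scale (200
// rating points) is an illustrative choice.
function pickOpponent(player, others) {
  const weights = others.map(o => Math.exp(-Math.abs(o.rating - player.rating) / 200));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < others.length; i++) {
    r -= weights[i];
    if (r < 0) return others[i];
  }
  return others[others.length - 1];  // guard against rounding error
}

const pool = [{ name: "X", rating: 1400 }, { name: "Y", rating: 1800 }];
console.log(pickOpponent({ name: "Me", rating: 1500 }, pool).name);
```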