TraderBot – A Discord Stock Trading Game

In a collaboration with AirmanEpic, we set out to create a Discord game bot that allows users to trade stocks virtually without risking real-world money. A game runs for a set number of days, and at the end the player with the highest net-worth is deemed the winner.

The bot can be added to your own Discord server with this link.

Once the bot has been added you can begin a new game with the !start command. You also need to specify the number of days the game will run for, and the maximum number of players allowed to join.

Everyone else can join the game with the !join command. Something to keep in mind is that only one instance of the game can run per server.

Now let’s buy some stocks!

Stock prices are determined by the current market price. You can track these yourself, or use the !quote command to get the current price.
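The bot's own code isn't shown here, but as a rough sketch, a command like !quote might be handled along these lines in Python with the discord.py library. The get_price() helper and its placeholder prices are hypothetical stand-ins for whatever market data source the bot actually uses:

    # Sketch of a "!quote" command handler, assuming discord.py.
    import discord
    from discord.ext import commands

    intents = discord.Intents.default()
    intents.message_content = True   # needed so the bot can read "!" commands
    bot = commands.Bot(command_prefix="!", intents=intents)

    def get_price(symbol):
        """Hypothetical price lookup; the real bot would query live market data."""
        placeholder_prices = {"AAPL": 189.25, "MSFT": 410.10}
        return placeholder_prices.get(symbol)

    @bot.command()
    async def quote(ctx, symbol: str):
        price = get_price(symbol.upper())
        if price is None:
            await ctx.send(f"No quote found for {symbol.upper()}.")
        else:
            await ctx.send(f"{symbol.upper()} is trading at ${price:.2f}.")

    bot.run("YOUR_BOT_TOKEN")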

You can purchase a stock with the !buy command, specifying a stock and an amount.

To sell a stock, use !sell the same way.

To look at your current balance and your stock portfolio use !status or !statusfull.

!leaderboard gives you a list of all players and their net-worth.
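Net worth here presumably means cash on hand plus the current market value of every share held. As an illustration only (these data structures are made up, not the bot's actual internals), the leaderboard ranking could be computed like this:

    # Sketch of ranking players by net worth = cash + value of holdings.
    def net_worth(player, prices):
        holdings_value = sum(shares * prices[symbol]
                             for symbol, shares in player["holdings"].items())
        return player["cash"] + holdings_value

    players = [
        {"name": "alice", "cash": 2500.0, "holdings": {"AAPL": 10}},
        {"name": "bob",   "cash": 4000.0, "holdings": {"MSFT": 3}},
    ]
    prices = {"AAPL": 189.25, "MSFT": 410.10}

    leaderboard = sorted(players, key=lambda p: net_worth(p, prices), reverse=True)
    for rank, p in enumerate(leaderboard, start=1):
        print(f"{rank}. {p['name']}: ${net_worth(p, prices):,.2f}")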

When the game finishes, the player standings are revealed.

Here is a list of available commands (also accessible with !help):

  • !info
  • !help <command>
  • !start <days> <max players>
  • !join
  • !quote <stock>
  • !buy <stock> <amount>
  • !sell <stock> <amount>
  • !status
  • !statusfull
  • !leaderboard
  • !endgame

Photomosaic: An Image-to-Mosaic Generation Tool

Have you ever wanted to create a mosaic of a cat’s face with a collection of flags from around the world? “Photomosaic” allows that and more given an input image and image tiles.

You can download the Windows program here.

Photomosaic came about from a question – If I were to replace every pixel of an image with a small “tile” image with similar colors, would the resulting mosaic resemble the original picture? The task seemed simple enough, but resulted in countless hours of researching color matching science, designing graphical user interfaces, writing proper documentation, and so much more.

Task: From a sample image + tile images, output a mosaic

The program operates by going through each pixel in a source image and replacing it with a tile image of a similar color. But how exactly do you compare colors? We intuitively know that “green” and “lime” are more similar to each other than “green” and “red”, but how might we calculate the difference?

We know that each pixel has Red/Green/Blue values. A numerical difference between two RGB colors, treated as points in 3-dimensional space, can be calculated with the Euclidean distance formula. This is akin to finding the distance between two points with the Pythagorean theorem like back in grade school, but in three dimensions this time. It looks like this:

d(\mathbf{p}, \mathbf{q}) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2 + (p_3 - q_3)^2}
3-D Euclidean distance formula
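In Python, just for illustration, that distance is only a few lines:

    import math

    def rgb_distance(p, q):
        """Euclidean distance between two RGB colors, e.g. (255, 0, 0) and (0, 128, 0)."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    print(rgb_distance((0, 128, 0), (0, 255, 0)))   # green vs. lime: 127.0
    print(rgb_distance((0, 128, 0), (255, 0, 0)))   # green vs. red: a much larger distance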

Not too difficult! We also need to find the average color of each of the tile images. This is as simple as looping over every pixel, adding up the R/G/B values, and dividing each sum by the total number of pixels. So for each pixel in the source image we can calculate the distance to each tile’s average color and choose the tile that is most similar (has the shortest Euclidean distance). Put them together and…

Not bad! I was surprised at how well it worked right off the bat. It matched the colors well enough and painted a picture fairly similar to the original. To keep the output image from becoming too large, the input image had to be small: I resized it to just 5% of its original size, 64×16 pixels.
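For anyone curious how the basic pipeline fits together, here is a rough sketch of the algorithm in Python using Pillow and NumPy. The actual program is a Windows application, so treat this purely as an illustration of the idea, with made-up file names:

    # Sketch of the basic mosaic algorithm: shrink the source image, then
    # replace each of its pixels with the tile whose average color is closest.
    import glob
    import numpy as np
    from PIL import Image

    TILE_SIZE = 32          # output size of each tile, in pixels
    SCALE = 0.05            # shrink the source image to 5% of its original size

    def average_color(image):
        """Mean R/G/B of an image, as a length-3 array."""
        return np.asarray(image.convert("RGB"), dtype=float).reshape(-1, 3).mean(axis=0)

    # Load the tiles and precompute their average colors.
    tiles = [Image.open(path).convert("RGB").resize((TILE_SIZE, TILE_SIZE))
             for path in glob.glob("tiles/*.png")]
    tile_colors = np.array([average_color(t) for t in tiles])

    # Shrink the source image so each of its pixels becomes one tile.
    source = Image.open("source.jpg").convert("RGB")
    small = source.resize((int(source.width * SCALE), int(source.height * SCALE)))
    pixels = np.asarray(small, dtype=float)

    mosaic = Image.new("RGB", (small.width * TILE_SIZE, small.height * TILE_SIZE))
    for y in range(small.height):
        for x in range(small.width):
            # Euclidean distance from this pixel to every tile's average color.
            distances = np.linalg.norm(tile_colors - pixels[y, x], axis=1)
            best = tiles[int(np.argmin(distances))]
            mosaic.paste(best, (x * TILE_SIZE, y * TILE_SIZE))

    mosaic.save("mosaic.png")

With the 64×16 shrunk source mentioned above and 32-pixel tiles, for example, the mosaic already comes out at 2048×512 pixels, which is why the source has to be shrunk so aggressively.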

Unfortunately everything up until now was the “easy” part. I wanted to improve the program, starting with color accuracy.

In the output image above, some areas look a bit strange. Take the sky on the top-left for example – The program decided to use dark green tiles for an area that is obviously blue. Why is this?

The answer lies in our biology. While computers use numbers to differentiate colors, humans perceive color through a complex interaction of light with our eyes and signals to our brain. A more accurate color-matching solution needs to compare colors the way our eyes perceive them. Enter the International Commission on Illumination (CIE).

In 1976 the CIE introduced the CIELAB color space, designed so that numerical distances correspond to perceived color differences, along with a formula for finding the distance (Delta E) between two colors, shown here:

\Delta E_{ab}^{*} = \sqrt{(L_2^* - L_1^*)^2 + (a_2^* - a_1^*)^2 + (b_2^* - b_1^*)^2}
CIE76 delta E formula

It looks a lot like the old Euclidean distance formula, with one key difference: instead of R, G, B values it uses L, a, b values. Conversion from RGB to LAB is done with a separate formula.
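For the curious, here is a sketch of that conversion (the standard sRGB to XYZ to LAB formulas with a D65 white point) and of the CIE76 distance, again in Python for illustration; the program's own implementation may differ in its details:

    import math

    def rgb_to_lab(rgb):
        """Convert an (R, G, B) color in 0-255 to CIELAB (D65 white point)."""
        def to_linear(c):
            # Undo the sRGB gamma curve so the values are linear light.
            c /= 255
            return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

        r, g, b = (to_linear(c) for c in rgb)
        # Linear RGB -> XYZ (sRGB matrix), scaled by the D65 reference white.
        x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
        y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 1.00000
        z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883

        def f(t):
            return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

        fx, fy, fz = f(x), f(y), f(z)
        return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

    def delta_e_76(lab1, lab2):
        """CIE76 delta E: plain Euclidean distance in LAB space."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

    sky = rgb_to_lab((96, 146, 200))                     # a sky blue
    print(delta_e_76(sky, rgb_to_lab((30, 70, 40))))     # vs. a dark green: large
    print(delta_e_76(sky, rgb_to_lab((70, 120, 180))))   # vs. a nearby blue: much smaller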

A delta E distance is expressed as a number between 0 and 100. Any number less than 1 is an imperceptible color difference, while 100 means that the colors are completely opposite. This article explains delta E and color spaces much better than I ever could and I recommend that you check it out.

By instead recording the delta E distances between the (LAB converted) pixels of the input image and the (also LAB) average colors of the tiles, we get a more accurate output:

That seemed to do the trick – The sky is now blue, and as an added bonus the colors are much more vibrant. Let’s do it again with a larger input image!

Different algorithms produce different results. The mosaic above was done with CIE94, an upgrade to the CIE76 formula that takes lightness, chroma, and hue into account. If you try this yourself, keep in mind that the algorithm will not work well if the tiles provided are too few or have little color variance.
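If you want to experiment with the different formulas yourself without reimplementing them, scikit-image ships several delta E variants. This Python snippet (unrelated to Photomosaic's own code) compares CIE76 and CIE94 on the same pair of colors:

    # Comparing delta E formulas with scikit-image.
    import numpy as np
    from skimage.color import rgb2lab, deltaE_cie76, deltaE_ciede94

    # Two colors as 0-1 float RGB, converted to LAB.
    sky_blue   = rgb2lab(np.array([[[96, 146, 200]]]) / 255.0)
    dark_green = rgb2lab(np.array([[[30, 70, 40]]]) / 255.0)

    print(deltaE_cie76(sky_blue, dark_green))    # plain Euclidean distance in LAB
    print(deltaE_ciede94(sky_blue, dark_green))  # weights lightness, chroma, and hue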

The more accurate algorithms were also much more computationally expensive, so I incorporated multi-threading into the application as well. All of the work is done per-pixel, so I could easily split the input image into equal parts, run the calculations on each part in parallel, and assemble the mosaic at the end.
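The real threading code lives inside the Windows application, but the idea can be sketched in Python with concurrent.futures: split the shrunk source image into horizontal bands, find the best tile for every pixel of each band in parallel, and stitch the per-band results back together. A minimal sketch, reusing the tile_colors array from the earlier example:

    # Sketch of per-band parallelism: each worker matches tiles for one slice
    # of the (shrunk) source image, and the slices are reassembled afterwards.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def match_band(band, tile_colors):
        """Return, for every pixel in this band, the index of the closest tile."""
        flat = band.reshape(-1, 3).astype(float)
        distances = np.linalg.norm(flat[:, None, :] - tile_colors[None, :, :], axis=2)
        return distances.argmin(axis=1).reshape(band.shape[:2])

    def match_all(pixels, tile_colors, workers=4):
        bands = np.array_split(pixels, workers, axis=0)    # horizontal slices
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = pool.map(match_band, bands, [tile_colors] * workers)
        return np.vstack(list(results))                    # one tile index per pixel

    # On Windows, call match_all() from under an `if __name__ == "__main__":` guard.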

I polished the program up by developing a graphical user interface with Windows Forms. I also decided to provide some proper documentation with a compiled HTML file (.chm) using WinCHM.

And that’s about it! It’s been a long ride, and one that I’ve quite enjoyed. I still come back every now and then to add new features or improve old ones. Give it a try for yourself, and let me know how it works for you. Thanks for reading!