Solomon

[Image: algorithm graph]

As a broke college student, I was always looking for ways to make passive income. That, combined with an interest in exploring deep learning further, was the inspiration for Parzival, my algorithmic trading bot. The algorithm has gone through 13 iterations. The current version has reached 30-40% annual returns in backtesting, although the live deployment is too recent for conclusive results.

In addition to the algorithm itself, I developed a library of robust backtesting and training tools for training and testing the design. This Python library implements tools for gathering data, simulating trades (with probabilistic execution times), plotting, and financial analysis, all optimized for performance.
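
To give a flavor of the probabilistic execution idea, here's a minimal sketch in Python. The names and the exponential delay model are illustrative stand-ins, not the library's actual API:

```python
import bisect
import random

def simulate_fill(submit_time, quotes, mean_delay=0.5):
    """Simulate a market order fill with a random execution delay.

    quotes: list of (timestamp, price) tuples sorted by timestamp.
    The delay is drawn from an exponential distribution, so the order
    fills at whatever the price is *after* the delay, not at the price
    the strategy saw when it decided to trade.
    """
    delay = random.expovariate(1.0 / mean_delay)
    fill_time = submit_time + delay
    # Use the last quote at or before the fill time.
    times = [t for t, _ in quotes]
    idx = max(bisect.bisect_right(times, fill_time) - 1, 0)
    return fill_time, quotes[idx][1]

quotes = [(0.0, 100.00), (0.5, 100.20), (1.0, 99.90), (1.5, 100.05)]
print(simulate_fill(submit_time=0.4, quotes=quotes))
```

Because the fill happens at the post-delay price, a backtest built this way naturally picks up some of the slippage a live order would experience.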

The library also includes a custom API I developed to execute and manage live trading. The platform mirrors my backtesting process, which makes it extremely easy to take a backtested algorithm and deploy it onto a real trading platform. The data is saved to a database and served to my live React site so I can monitor progress in real time.
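
The mirrored design boils down to the strategy talking to a single broker interface, with a simulated and a live implementation behind it. A rough sketch (the class and method names here are hypothetical, not the actual API):

```python
from abc import ABC, abstractmethod

class Broker(ABC):
    """The one interface a strategy talks to, simulated or live."""

    @abstractmethod
    def submit_order(self, symbol: str, quantity: float) -> None:
        ...

class BacktestBroker(Broker):
    """Routes orders into the simulated fill model."""

    def __init__(self):
        self.fills = []

    def submit_order(self, symbol: str, quantity: float) -> None:
        # In the real library this would pass through the probabilistic
        # execution model; here we just record the order.
        self.fills.append((symbol, quantity))

class LiveBroker(Broker):
    """Sends the same calls to the real trading platform."""

    def submit_order(self, symbol: str, quantity: float) -> None:
        raise NotImplementedError("wire this to the live trading API")

def run(strategy, broker: Broker, feed):
    """One event loop for both modes; only the broker and feed differ."""
    for bar in feed:
        strategy.on_bar(bar, broker)
```

Swapping `BacktestBroker` for `LiveBroker` is then the only change needed to take a backtested algorithm live.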

Although I have already spent over 1,000 hours on this project, it is still in the early stages of development. Not only am I excited by the prospect of it generating passive income, I have also dramatically deepened my understanding of deep learning and of modern reinforcement learning techniques, which I intend to use in future projects.

Self-Driving 2D Car

[Image: self-driving car line visualization]

When Tesla's self-driving cars were first gaining popularity in California, I was fascinated by the machine learning behind them. After going down the research rabbit hole, I came across deep Q-learning and wanted to write my own implementation of a self-driving car using this reinforcement learning framework. My simulation capabilities at the time were not quite up to par, so I settled on developing a 2D car simulation. The simple simulation implemented the car's speed and acceleration, as well as collision detection between the car and the walls.
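
A simulation like that can be surprisingly small. Here's an illustrative sketch of the core update and collision check, not the original code; the drag constant and timestep are assumptions:

```python
import math

class Car:
    def __init__(self, x, y, heading):
        self.x, self.y = x, y
        self.heading = heading      # radians
        self.speed = 0.0

    def step(self, throttle, steer, dt=1/60):
        """Advance one tick: integrate acceleration, then position."""
        self.speed += throttle * dt          # simple linear acceleration
        self.speed *= 0.99                   # drag keeps speed bounded
        self.heading += steer * dt
        self.x += math.cos(self.heading) * self.speed * dt
        self.y += math.sin(self.heading) * self.speed * dt

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses p3-p4 (the wall-collision test:
    check the car's movement segment against each wall segment)."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)
            and ccw(p1, p2, p3) != ccw(p1, p2, p4))

car = Car(0.0, 0.0, heading=0.0)
prev = (car.x, car.y)
car.step(throttle=5.0, steer=0.1)
print(segments_intersect(prev, (car.x, car.y), (1.0, -1.0), (1.0, 1.0)))
```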

For the inputs to my Q-learning network, I drew lines extending from the car in various directions; each input was the distance along that line from the car to the nearest wall. This allowed the car to learn how best to avoid crashing. The reward function consisted of a reward for the car's forward speed and a penalty for the amount of turning. I added the second term because, without it, the car would avoid the walls but not stay in the center of the road.
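
Here's roughly what those two pieces look like in code. This is an illustrative reconstruction, not the original implementation; the ray angles, sensor range, and the 0.1 turning-penalty weight are assumptions:

```python
import math

def ray_wall_distance(origin, angle, wall, max_range=200.0):
    """Distance from the car along one sensor ray to a wall segment,
    or max_range if the ray misses.  wall is ((x1, y1), (x2, y2))."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = wall
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-9:                    # ray parallel to the wall
        return max_range
    # Solve origin + t*ray == wall_start + s*wall for t and s.
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
    s = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
    if t >= 0 and 0.0 <= s <= 1.0:
        return min(t, max_range)
    return max_range

def sense(pos, heading, walls, ray_angles=(-0.8, -0.4, 0.0, 0.4, 0.8)):
    """One distance reading per ray, measured relative to the heading."""
    return [min(ray_wall_distance(pos, heading + a, w) for w in walls)
            for a in ray_angles]

def reward(speed, steer):
    """Reward forward speed, penalize turning so the car favors the
    center of the road instead of skimming along the walls."""
    return speed - 0.1 * abs(steer)

walls = [((0, -10), (100, -10)), ((0, 10), (100, 10))]
print(sense((5.0, 0.0), 0.0, walls))
print(reward(speed=3.0, steer=0.5))
```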

After many hours of training, the car achieved quite good performance and adapted well to the various tracks I built for it. I wish I had the understanding of RL that I have today, because I know I could have done much better with prioritized replay and double Q-learning. Regardless, this was an excellent introduction to reinforcement learning and laid the groundwork for my trading bot.

[Image: self-driving car gif]


ML Snake

My first introduction to neural networks was my attempt to build a network that could play the arcade game Snake. During my freshman year of college, I was following the growing field of neural networks and looking for a good introductory project. I settled on Snake out of my own naivety, but it ended up building a great base of understanding and growing my passion for the field. The twist on this particular project, however, was my decision to implement the neural network entirely from scratch.

That's right: no TensorFlow or PyTorch. I was dumb enough to decide to implement the network, the optimization, and the backpropagation with NumPy arrays and a LOT of math.

[Image: Snake gif]

At this point in my career I had never touched a neural network, so I spent two or three weeks researching the subject, soaking up as much knowledge as possible about the math behind the machine. My implementation was a network with 4 inputs, 3 outputs, and a single hidden layer. The inputs were the distance to the nearest wall (or to the snake's own body) in each of three directions (forward, left, right), plus the Euclidean distance to the food. The outputs were discrete values for moving forward, left, or right. I hand-coded the Adam optimizer and all the math needed to train the network with backpropagation.
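
For illustration, here's what a from-scratch version of that network looks like in NumPy. This is a sketch, not my original code: the hidden width (16), tanh activation, and cross-entropy loss are assumptions, and I've shown a plain gradient step where the real project used the hand-coded Adam optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# 4 inputs -> one hidden layer -> 3 outputs, matching the shape above.
W1 = rng.normal(0, 0.5, (4, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3))
b2 = np.zeros(3)

def forward(x):
    """Forward pass; keeps the intermediates needed for backprop."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())        # stable softmax
    return h, e / e.sum()

def backward(x, h, probs, target):
    """Hand-derived gradients for softmax cross-entropy loss.

    target is the index of the 'correct' move (forward/left/right).
    """
    dlogits = probs.copy()
    dlogits[target] -= 1.0                   # d(loss)/d(logits)
    dW2 = np.outer(h, dlogits)
    db2 = dlogits
    dh = W2 @ dlogits
    dz1 = dh * (1.0 - h ** 2)                # tanh derivative
    dW1 = np.outer(x, dz1)
    db1 = dz1
    return dW1, db1, dW2, db2

x = np.array([0.2, 0.8, 0.5, 0.3])           # example sensor readings
h, probs = forward(x)
grads = backward(x, h, probs, target=0)
for param, grad in zip((W1, b1, W2, b2), grads):
    param -= 0.01 * grad                     # vanilla SGD stand-in for Adam
```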

Unfortunately, my limited knowledge of the machine learning landscape led me to choose a project that would have reached much better performance with reinforcement learning. However, this project bootstrapped my knowledge and gave me a much deeper understanding of the foundations of neural networks.