The Art and Science of Trading

While going through the podcasts at Chat with Traders I came across an interview with Adam Grimes. While checking out his website I noticed he offers a free trading course. I have not got through it all yet, but there is a wealth of solid information. I liken it to Andrew Ng’s machine learning course on Coursera: a well-presented, broad introduction to trading with something for everyone at every level. I really like his quantitative approach to determining what works across each market. I find the self-analysis harder, but can accept it is necessary.

I have also been enjoying the podcasts at Better System Trader.

Neural Networks and GPUs

Having recently upgraded to a GeForce GTX 580, I was amazed at the 30x speedup. I have been using a variation on Alex Krizhevsky’s cuda-convnet for a number of Kaggle competitions. The next step for me will be to add another card and use Alex’s updated cuda-convnet2, which supports multiple GPUs. I like the idea of scaling up rather than scaling out. There is a good paper on this by Adam Coates et al.

There are a number of sites with good information on choosing a GPU for machine learning. The first one I came across was at FastML, called “Running things on a GPU”. More recently I came across another page with good advice, “Which GPU(s) to Get for Deep Learning”.

Here is a link to a benchmark of a number of the open source convnets.

Udacity has a good introduction to Cuda programming called “Intro to Parallel Programming”.

How Big Is Your Data?

A while back I read an interesting and refreshing article by Chris Stucchio looking at what actually qualifies as “Big Data”. With all the hype around Big Data and Hadoop, most organizations want to get on the bandwagon. But how much data constitutes Big Data, and is Hadoop always the answer?

Chris’s article is “Don’t use Hadoop – your data isn’t that big”.

At the end of the article Chris recommends Scalding. As a Clojure fan I would have to mention Cascalog.

Quants, HFT and the Flash Crash

Here are a few of my favorite documentaries on quants, HFT and the flash crash.

“A quant is a person who specializes in the application of mathematical and statistical methods – such as numerical or quantitative techniques – to financial and risk management problems.” – Wikipedia

At around the 24-minute mark, Haim Bodek explains the real advantage that those in the know have.

An interesting look at the flash crash.

Functional Data Structures in C++

Having started my Clojure journey, I was really interested to see a series of posts on immutable data structures in C++. While I can explore Clojure in my own time, my day job requires the use of C++ with a little Java and Python. While I probably won’t get to use these at work, I’m sure I will find a project where they make sense.

The two data structures covered are:

Bartosz also explores the role of immutable data structures for concurrency in the post “Functional Data Structures and Concurrency in C++”.

HFT Technology and Algorithms

As an engineer I marvel at what has been achieved. As an FX trader I lament the lack of volatility. The trading platform I use has an internal latency of about 100 ms; these guys are sub 1 ms, modifying kernels and router firmware and using FPGAs.

Barbarians at the Gateways – A former engineer/trader gives a tour of the HFT technology stack.

Online Algorithms in High-frequency Trading – Explores the use of one-pass algorithms in HFT.

My favorite Rich Hickey talks

Rich Hickey always delivers a well-presented, thoughtful talk. While he is the creator of the Clojure programming language, most of these talks are language neutral, except for “Are We There Yet?”.

Design, Composition and Performance – Who else could use Coltrane and Bartók to talk about engineering software? I really like the analogies he uses in this talk.

Hammock Driven Development – How Rich goes about solving difficult problems. If you only watch one talk, this would be my recommendation.

Are We There Yet? – In his keynote at the JVM Languages Summit 2009, Rich Hickey advocated reexamining basic principles such as state, identity, value, time, types, genericity and complexity as they are used by OOP today, in order to create new constructs and languages that can deal with the massive parallelism and concurrency of the future.