Python Development Environment / Jack's Technology Stack


jack

Intro

I thought I'd share my technology stack for strategy and algo development.

My primary language is Python. Despite some uninformed beliefs that Python is too slow for algo trading, and that algorithmic trading is best left to C/C++ or some hardware programmed FPGAs, Python is perfectly suitable and more than fast enough for any retail trader who wants to get into algorithmic trading. The simple fact is, unless you're competing in the high frequency speed game where a pico-second could be the difference between profit and loss, Python and most other modern scripting languages will be fine. (And heck, if you are competing in that space, you wouldn't be reading this anyway.)

(Please note: This will not be an intro to Python, and I'm expecting people to be reasonably familiar with the basics of Python before being able to gain from reading the rest of this post.)

Table of Contents:
  1. The Language --- (with update here)
  2. The IDE and Version Control System
  3. Commercial Third Party Software - eSignal 12
  4. Commercial Third Party Software - NinjaTrader 8 (+ZeroMQ DLLs)
  5. Commercial Third Party Software - MetaTrader 5 (+ZeroMQ DLLs)
  6. The Hardware
  7. The Python Based Algo Platform
  8. Misc. IEX Data Tools
  9. Interesting Problems I Solved with Python [coming soon]
 
The Language

Moving forward: Python from the official website (http://python.org) comes with a bunch of really useful built-in libraries. Some that I employ daily are:
  • threading - for concurrency
  • logging - for keeping track of what a large, threaded, multi-service, multi-client application is doing. Outputting to console and log files. Custom adapters for formatting. Everything is tracked so finding bugs in code / logic is easy later on.
  • xml - for reading and writing settings that my programs save based on user input. JSON would be another alternative to this.
  • datetime - I work with a lot of time series data (transaction feeds, quote feeds, etc.), which requires converting a time string (e.g. "14:22:12.436") into a format that a computer can easily compare to other points in time (there's a quick sketch just below this list).
Of course there are many other common libraries included with Python that are worth mentioning, but I'm trying to keep this concise.
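
As a quick illustration of the datetime point above, a minimal sketch (the timestamps are made up):

Code:
from datetime import datetime

# Parse a quote timestamp string into a comparable object.
t1 = datetime.strptime("14:22:12.436", "%H:%M:%S.%f")
t2 = datetime.strptime("14:22:13.001", "%H:%M:%S.%f")

# Once parsed, points in time compare and subtract naturally.
print(t2 > t1)   # True
print(t2 - t1)   # 0:00:00.565000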

Not all essential libraries are included with the official distribution of Python though, and for that reason I use a third party's Python distribution that bundles a lot of the common add-ons I'd otherwise have to download separately:


The extra Python Libraries I use extensively, that do not come with the official Python install, but are bundled with Anaconda's distribution are:
  • numpy - fast array processing for data, vector calcs, etc.
  • pandas - table-based data processing, organization, and display. I use this mostly for analysis and data display since it has a great Excel output module.
  • scikit-learn - I'm still just scratching the surface of machine learning but this package was a great place to start. More on this later.
  • zmq - ZeroMQ - IPC messaging between processes / threads, and between computers on the network.
  • PyQt4 - for graphical interfaces. More on this later.
In addition to the vast library that's included with Anaconda's Python distribution, there are a few remaining misc packages I add manually:
  • pyqtgraph - extends the PyQt4 package with some feature additions and GPU-accelerated graphs. Way better for plotting realtime info / data than the common matplotlib. Side note: this is what pyqtgraph is capable of, all in real time, silky smooth: Example, and Example
  • v20 - the official Python bindings for Oanda's v20 API. Oanda is just one of a few brokers I've written connectivity agents for on my algo platform. The others either do not need an external library (they are REST/URL based,) or I've included the library itself within my algo code since I had to customize a few things within it (..looking at you, Interactive Brokers.)
I add these last two packages with a Windows batch / BAT script I run after installing Anaconda on a fresh machine. (Saved as a .BAT file and run by just double clicking on it.)

Code:
title Capital Windows / Python Dependencies Install Script

echo Installing common dependencies atop of Anaconda Python on Windows

REM ujson is installed from a local pre-built wheel (see the note below)
pip install Dependencies\ujson-1.35-cp27-cp27m-win_amd64.whl

pip install pyqtgraph
pip install v20

pause

You'll notice a ujson whl file that's installed from a local source, which pyqtgraph depends on. Python's pip command will try, and fail, to build ujson from source code on a new computer that doesn't have a VS compiler installed already. So I just grabbed a pre-built binary and included it. You can find this file, pre-built for your given platform, here: http://www.lfd.uci.edu/~gohlke/pythonlibs/

EDIT: An update to this information can be found here:
https://fxgears.com/index.php?threa...nment-jacks-technology-stack.1090/#post-18905
 
The IDE and Version Control System

emacs over vi

Just kidding! My progression of Python IDEs went as follows:

Spyder > IDLE > Wing IDE > Visual Studio Code (with python extensions)

Honestly, IDLE is good enough for most people to start off with and it comes with the base Python install. For single page scripts and such there really isn't much need to go elsewhere.

However, I started hunting for a more robust IDE once my average project started to span multiple files and required an easier to use debugger. Being able to launch into a debugger and view the contents of an object in memory is critically useful when figuring out why your seemingly intelligent program is misbehaving.

I use Visual Studio Code specifically for the following perks:
  • The Python extension allows for debug / runtime, autocomplete, and a lot of useful editor shortcuts.
  • It works really well with git, my version control system of choice.
[Screenshot: VS Code detecting changes to files that differ from the master repo on my git server.]

I won't lie: coming from a Linux background and doing this all on Windows now makes me feel kinda dirty, especially since it's Visual Studio... but I have to say, VS Code is a solid IDE all around and I'm very pleased with how efficient it has made me as a programmer. I must stress though, I'm talking about Visual Studio Code, NOT Visual Studio Express or Pro; they are entirely different products.

The only downside I can foresee on the Microsoft end is they have a habit of drastically changing software, or killing off products, without much regard for the community using them. An open source IDE would be a lot more stable in this sense, but that is a risk I'm willing to take for now. (And there's always going back to emacs, amirite? lol)

That moves us along to git.

If you don't know git, get to know it. This has been one of the most useful tools I've ever used while coding to keep track of changes and to centrally store and control code. Seriously, if you've never considered a version control system before, go here and watch: https://git-scm.com/video/what-is-version-control

For my git server, I've used github before and host the odd file on it, but to keep things private I've switched to gitlab.com

For Windows clients, I just use the git client found here: https://git-scm.com/

For Linux clients, it's usually built into the default install of most distributions.

With git, I'm able to easily:
  • give people access to my code in a controlled environment
  • see changes to the code I've made over time (ever break something and not realize it until a few versions later?)
  • organize branches of code (stable, testing, etc..)
And much more, but the above are my primary uses.
 
Commercial Third Party Software - eSignal 12

  • eSignal - This guy can get expensive with all the data fees for various markets, but I use eSignal to verify live market data (a 2nd opinion is always good when data feed issues come up with your broker's platform,) as well as for historical data calls that I can trust a bit more than sources like Yahoo Finance. A major plus for eSignal is that it can be used as a data source for a few 3rd party platforms, including NinjaTrader 8, which I also use.
Exporting data in eSignal 12 isn't as straight forward as previous versions, so here are the steps:

For most markets, eSignal supports historical minute bar data going back as far as 300 days or more, but to save on bandwidth the default chart only pulls data that would be visible (called dynamic mode.) So first we need to edit the time template of your chart to be a value far back enough in days to capture the desired data available:

[Screenshot: setting the chart's time template in eSignal 12.]


After you've set this, the chart should now pull as far back as eSignal allows.

Next we have to turn the chart into 'Tabular Mode' to have eSignal organize all the chart's data into tables ready for export. Right click on the chart itself and select this feature:

[Screenshot: the 'Tabular Mode' option in the chart's right-click menu.]


Finally, right click on the new tables (as seen above, once the chart has been switched over to tabular mode,) and go to 'Data Export'. This feature simply isn't available when not in 'Tabular Mode'.

Presto! You now have a csv export of minute bar data on your desired symbol going back 300+ days.
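
If you then want that export in Python for analysis, pandas reads it straight in. A quick sketch; the filename and column names below are placeholders, so match them to whatever headers your export actually contains:

Code:
import pandas as pd

# Column names are illustrative; check the headers eSignal writes.
bars = pd.read_csv("ES_minute_export.csv", parse_dates=["Time"])

print(bars.head())               # eyeball the first few minute bars
print(bars["Close"].describe())  # quick sanity check on the series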

Next up..
 
Commercial Third Party Software - NinjaTrader 8 (+ZeroMQ DLLs)

Why reinvent the wheel? If I'm going to start doing any sort of time series analysis (read: technical indicator based trading signals,) then coding up a huge in-memory matrix of time and price values, and then traversing it with indicator code, is a huge time sink for what most argue doesn't produce much edge. So instead I pipe data from my eSignal account into NT8 charts, then use a custom indicator to send any data I wish (indicator values, strategy signals, etc.) back out over the network to my Python based algo platform.

Getting NT8 to pipe data to my platform wasn't easy though... but this is where ZeroMQ comes in: the messaging library works not only between Python processes, but can bridge any language that supports it, including the C# implementation that works with NT8.

This was not well documented online, so I had to put together a solution myself, and I documented the setup instructions just in case I ever had to go through that hell again... And I mean hell: NT8 doesn't play nicely with all C# libraries, and I had to copy files to various locations in the filesystem (because Windows and NT8 don't do this by default,) all based on the random errors I'd get when trying to import and run the libraries. So to save you the trouble, below are the instructions to get it up and running:

Using ZeroMQ with NinjaTrader 8
  1. Grab the ZeroMQ C# bindings release from this github page: https://github.com/zeromq/clrzmq4/releases
  2. Take all files from the ./bin/release directory within the archive and copy them to your NT8 documents / bin / custom directory.
  3. Add ZeroMQ.DLL to the NT8 resources by right clicking on a ninjascript editor window, selecting ‘resources…’ and selecting the ZeroMQ.DLL file that you just copied into NT8’s document folder in the step above.
  4. NT8 needs to find a copy of the ZMQ library in a temp directory or system directory, so before use, you have to take the contents of the i386 folder and place them in your user's AppData\Local\Temp directory.
  5. NT8 MIGHT complain about not finding the ZMQ library in its install directory. So we copy the i386 folder to the bin directory within the NT8 install directory for good measure: C:\Program Files (x86)\NinjaTrader 8\bin
  6. In any indicator or strategy using ZMQ, you now need to add a ‘using ZeroMQ;’ line at the top of the script.
Done.

From this point we can code up things in C# within NT8's environment that can pipe data out over a ZMQ socket to any app on your network or local computer that's listening.

I won't explain the whole process of writing C# and getting a ZMQ socket up, but it's simple enough for anyone who knows a little C# and has used ZMQ before.
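
For completeness, the Python side listening for what NT8 publishes can be tiny. A minimal sketch, assuming the NT8 indicator runs a PUB socket; the port and the message format here are placeholders, not my actual values:

Code:
import zmq

context = zmq.Context()
sock = context.socket(zmq.SUB)
sock.connect("tcp://127.0.0.1:5556")        # must match the NT8 publisher
sock.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to everything

while True:
    msg = sock.recv_string()   # e.g. "EURUSD|signal|1.0921" from the indicator
    print(msg)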
 
The Hardware

Despite what some believe, you don't need very beefy hardware to trade or algo trade. A few of my best and most profitable trades have been done on an old netbook with a 10-inch, low-resolution screen. People who invest in expensive hardware, computer setups, etc. before they even have a proof of concept hammered out (be it a trade plan or algo strategy,) are putting the cart before the horse.

That said, my humble setup currently includes: (This is obviously subject to change and I most likely won't come back here to update this list... just consider it an idea of the environment as of around the time of posting.)
  • 4th Gen Core i5 HP business class desktop at work, with 4 FHD monitors on a quad monitor stand. Windows 10.
  • 2nd Gen Core i5 HP business class desktop at home, with 2 monitors, one being an ultra wide resolution (2560 x 1080.) Windows 10.
  • 3rd Gen Core i5 X Series Thinkpad. Windows 10.
  • 3x older Core 2 Quad desktops, headless, that are stacked on shelves at my office running code or providing data services to other machines running algo code. Two running Linux (Ubuntu,) one Windows 10.
I do most of my coding and testing on my laptop (being able to unplug and go to a new environment, like a cafe, when stuck on a problem is very nice.) Primary (live) execution on the work PC. Secondary and sim testing on the 3 headless older desktops. And research and backtesting on my home PC.

Right now you can find equivalent business class machines off-lease, on Kijiji, or on eBay, for under $800 USD combined total... and probably a LOT less than that if you're motivated. Add a few cheap monitors and you're set.

My point is, people who drop thousands into a single workstation before they even have an idea of what kind of strategy they plan to code up are silly; likely more into the idea of being a trader than actually being a trader. Technology enhances and helps a trader, it doesn't make a trader... or as I like to say: if you can't make money off just one screen, what will a matrix of monitors help you accomplish? Lose money faster and more efficiently?
 
The Python Based Algo Platform

(Note: I obviously cannot share everything about the platform but I'll try to describe as much as I can.)

So before I begin talking about the platform, I have to take a step back and explain my approach and what led me to want to start algo trading. I always saw coding algorithms and other trading tools as an excuse to learn how to program, not as something required to trade. I figured if my code amounted to no profit at all, I would at least have some good logic and programming skills to use elsewhere. My results years after I started learning to code for trading have been a lot better than the expectations I had going in, but the time it took to get this far was way longer than I ever imagined.

The platform started out about two years ago while I was developing an algo that made markets. This was one of my early goals to learn programming: to make a program that was event driven, able to do multiple actions at the same time, kept track of fills, had logic to follow on a fill-by-fill basis, and that kept good control of risk. It was a lofty goal, because broken down that means I also had to write interfaces for the incoming quote data, write an interface to my broker, have a graphical interface so I could start, stop, and adjust settings, and even handle things when they went wrong (you know, because a fault in the logic shouldn't result in sending hundreds of duplicate orders out at the same time,) all things I had little experience with (prior to this, I had written a few command line based trading tools, and created a few EAs for the MT4 platform, but nothing too serious.)

So I hacked away over a month or two, and built out a standalone trading app that made markets, with ample settings and user defined variables to set.

I had already used Python and ZeroMQ to make other apps in the past, so for the standalone app I used ZeroMQ to bind together pretty much everything. I designed it as multiple smaller programs all working concurrently behind a single graphical interface. There was a program that filtered incoming data, one that sent orders out for execution, one that monitored risk, etc. I felt quite proud of the standalone market making app, even if it was quite basic from a trading point of view.

Sadly, this first app had one flaw: I could only run one instance of it at a time, so I could only ever algo trade one symbol at a time on a single broker. This was a huge issue. My goal included running a market making enterprise across hundreds of symbols at the same time, like the big boys on Wall Street operate... so I had to rebuild.

I hate rewriting code, rebuilding the same app over and over, so before I set out to make the platform handle multiple symbols at once I brainstormed what other major changes I might want to do down the road. This way I could build the foundations needed to facilitate the future ideas. I can recall my key points were:
  • It had to be flexible and each symbol I wanted to trade should be able to be dynamically added or removed during run time.
  • It had to work on multiple types of securities, not just stocks like I had originally envisioned.
  • It had to work with multiple incoming data feeds at the same time.
  • It had to work with multiple brokers at the same time. (You might suspect these two lines relate to latency arb between a fast and slow FX broker, but this had more to do with making the platform broker agnostic.. meaning I wanted to write a strategy once and connect it to any broker I desired instead of being limited by a given broker’s platform or coding environment. In other words: write an MQL4 EA and it only runs on MT4, but write an algo on my platform and it will run seamlessly everywhere. This also ensures that my trade logic is not exposed through the broker's own proprietary platform.)
  • It had to communicate with agents or other apps running across a network, so I could offload some of the trade logic and signal generation workload to more powerful computers when the time comes.
  • Finally, if the platform is going to be this robust, it has to also be able to run many other types of strategies, not just the original market making app. (That last item was key. I never thought the market making app would be a winner; it was just an excuse to start coding. The big boys of Wall Street would, in my mind, be faster and smarter at market making than I would ever be... not to mention they'd have a huge infrastructure advantage as well.)
So with a new set of goals outlined I set out to build my platform. Each planned out code block, each objective, each task, was written down on post-it notes and put up on my home office's wall. I then divided the wall into three columns: "TODO", "In Progress", and "Finished". As I worked through components, I moved their respective post-it note through the columns. Later I picked up Git as my source code revision manager, and eventually I migrated my post-it wall to text files also controlled by Git.

I could reuse some code from the original market making app, but a lot had to be rewritten to be more modular. Plus, one of the downsides of learning to code as I go is that over time I end up finding better and more efficient ways of coding up stuff I've already completed. This has often sent me back to rework parts of my platform that were previously in the "Finished" column on my wall of post-it notes. Candidly, I also had to admit to myself when I was redoing old code to procrastinate finishing something useful, as not every job worth doing is worth doing well.

Another hurdle was motivation. Ultimately this whole project was based on a strategy (market making) that I didn't have faith in actually being profitable. While the process of creating this platform was a great goal for learning how to code better, I knew when I hit roadblocks and setbacks I'd need something else to keep me pushing forward. To help with this, I engaged a few trader friends about ideas they had that might benefit from automation, and asked if they'd be willing to work with me on them. I also started digging up my old ideas that I had previously automated on MT4, and brainstorming new ideas that could be automated.

Regarding the 'ideas from trader friends', I quickly realized that plenty of people had ideas for trading algos, but the quality of such ideas left a lot to be desired. (Not that I could complain; I was seeking out other people's input, after all.) I started to preface the offer to code up ideas for people with the following:

Algos are good for when you don’t have enough hands, eyes, or aren’t fast enough to do something manually. If you already can do something efficiently on your own, then turning it into an algo will only make performance worse because I can’t encapsulate your years of experience into lines of code. Also, if you ask me to code up a moving average crossover strategy, we aren’t friends anymore.

And this helped filter most requests and left me with more interesting concepts.

The result of all that post-it note planning and rework is this, introducing "Capital":
[Screenshot: the main interface of Capital.]

"She doesn't look like much, but she's got it where it counts, kid."

Strategies are added with a click of the plus button, and appear as a “row” within the main application’s interface.

[Screenshot: strategies added as rows in the main interface.]


I’m then able to select a given strategy (under the method column) and configure the settings of that strategy by pressing the settings button (which reads the settings string and dynamically generates a popup based on the values in the string.)

[Screenshot: a dynamically generated strategy settings popup.]


Once a strategy is configured for a symbol (or multiple symbols and strategies), a built-in template saving and loading system stores all the details (symbol, broker in use, data feed in use, strategy, settings of the strategy, etc.) in XML format so it can be easily loaded again in the future. The circled plus sign and SD card icons represent this functionality. Forgive my misuse of Google's Material Design icons, but as I said earlier, not every job worth doing is worth doing well.
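
A minimal sketch of that XML round trip using the standard library's ElementTree; the element and attribute names here are invented for illustration, not the platform's actual template schema:

Code:
import xml.etree.ElementTree as ET

# Save: one element per configured strategy row (schema is illustrative).
root = ET.Element("template")
ET.SubElement(root, "strategy", symbol="EURUSD", broker="Oanda",
              feed="eSignal", method="Scalper",
              settings="period=20;threshold=1.5")
ET.ElementTree(root).write("template.xml")

# Load: walk the file and restore each row.
for node in ET.parse("template.xml").getroot().findall("strategy"):
    print(node.get("symbol"), node.get("method"), node.get("settings"))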

Speaking of the icons, you'll notice a 'refresh' icon just after the SD card icon. This is another feature I'm proud of; it allows me to dynamically reload only the strategy files without restarting the application. That way I can code a strategy, edit, fix, adjust, etc., and just pull the new strategy code into the platform on the fly while older iterations of the same strategy, and others, are already running. I do this via direct importing into the global dictionary and I'll detail it in another post sometime soon.
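
Until that post, here's a rough approximation of the idea using importlib; this is a sketch of the technique, not my actual reload code, and the module name is a placeholder:

Code:
import importlib

def reload_strategy(module_name):
    """Import (or re-import) a strategy module on the fly. Already-running
    instances keep the old code; new instances pick up the fresh version."""
    module = importlib.import_module(module_name)
    module = importlib.reload(module)
    globals()[module_name] = module   # refresh the globally held reference
    return module

# e.g. reload_strategy("strategies.scalper") after editing the file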

[Screenshot: the template save/load and strategy reload icons.]


Broker and data feed connections can be managed dynamically as well, with a menu that pops up after clicking the second-to-last icon on the right. This lets me enable, configure, and disable connections as required during operation.

[Screenshot: the broker and data feed connection manager.]


As mentioned before, all components are interconnected with ZeroMQ. This is both between processes and over the network:

You'll notice in the image above that I detail different IP addresses for hosts. ZeroMQ allows for TCP based socket connections so having incoming signals from another computer is rather trivial.

All data feed agents connect with their respective platforms and translate the incoming quote feed into a common format that my platform will understand. The data is then forwarded along into what I call a “data funnel.” Basically, it’s a one-to-many repeater that subscribes to all data agents and then allows running strategies to subscribe to a single publisher and get only the info that matters to them. This was a simple design that makes great use of the publisher / subscriber programming pattern, and adding remote data sources was as simple as adding a bridge / proxy.
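
That funnel maps almost one-to-one onto ZeroMQ's built-in proxy. A minimal sketch; the bind addresses are placeholders rather than my production values:

Code:
import zmq

context = zmq.Context()

# Upstream: every data feed agent PUBlishes into this XSUB socket.
frontend = context.socket(zmq.XSUB)
frontend.bind("tcp://*:5559")

# Downstream: strategies SUBscribe here and filter by topic/symbol.
backend = context.socket(zmq.XPUB)
backend.bind("tcp://*:5560")

# Blocks forever, shuttling data down and subscription requests up.
zmq.proxy(frontend, backend)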

All brokers interface with strategies via a middle man named "BoB" (Broker-of-Brokers.) BoB doesn't do much outside of making sure the right orders and execution requests go to the right active brokers. This allows strategies to only have to know about BoB, without being custom coded for each broker I want to connect to. BoB gets an order execution request, sees which broker it's intended for, and passes it along to the right agent.
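
In sketch form, BoB is little more than a lookup table and a forward. The socket types, addresses, and JSON order format below are all illustrative, not the actual implementation:

Code:
import json
import zmq

context = zmq.Context()

# Strategies PUSH execution requests into BoB's single inbound socket.
inbound = context.socket(zmq.PULL)
inbound.bind("tcp://*:5570")

# One outbound socket per active broker agent (addresses are placeholders).
brokers = {}
for name, addr in [("oanda", "tcp://127.0.0.1:5571"),
                   ("ib", "tcp://127.0.0.1:5572")]:
    sock = context.socket(zmq.PUSH)
    sock.connect(addr)
    brokers[name] = sock

while True:
    order = json.loads(inbound.recv_string())
    # Route on the broker tag; strategies never talk to brokers directly.
    brokers[order["broker"]].send_string(json.dumps(order))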

Strategies each run as their own thread. They listen not only for trading data, but also for commands from the parent application, such as to shut down, or to pause actions until further notice.

Strategy files are stripped right down to just the business logic and rely heavily on an imported strategy class. This strategy class has all the common operations pre-defined, and also defines how strategies locally store the data and order info they subscribe to.

A strategy based on the strategy class is also event driven. As a quote update comes in and is picked up by a given strategy, the strategy first stores the data, then invokes code specifically set to run upon quote updates. The events that can be called are:
  • Quotes (level 1)
  • Depth (level 2)
  • Ticks (time of sales, or last)
  • Orders (order status changes)
  • Indicators (external data values outside of price)
  • Time (optional time delayed loop that gets called only if the strategy has code written for it..)
The strategy class also has methods defined to quickly pull up all contextual info about the symbol it is trading, and things like market time, rounding functions, Yahoo Finance historical look-ups, and many other blocks of helpful code. I really tried to make sure that when I write a strategy file, I only have to write the strategy logic itself.
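
To make that shape concrete, a stripped-down sketch of the store-then-dispatch pattern; the class names, hooks, and quote format are illustrative, not my actual strategy class:

Code:
import threading

class Strategy(threading.Thread):
    """Base class sketch: store each update, then fire the matching hook.
    Subclasses override only the hooks they care about."""

    def __init__(self, symbol):
        super().__init__()
        self.symbol = symbol
        self.quotes = []            # locally stored data this strategy saw

    # Event hooks, one per feed type (no-ops by default).
    def on_quote(self, quote): pass       # level 1
    def on_depth(self, depth): pass       # level 2
    def on_tick(self, tick): pass         # time and sales / last
    def on_order(self, order): pass       # order status changes
    def on_indicator(self, value): pass   # external non-price values
    def on_time(self): pass               # optional timed loop

    def handle_quote(self, quote):
        self.quotes.append(quote)   # store first...
        self.on_quote(quote)        # ...then invoke the event code

class Example(Strategy):
    def on_quote(self, quote):
        # Business logic only; all the plumbing lives in the base class.
        print(self.symbol, quote["bid"], quote["ask"])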

This almost catches us up to where I’m at today. The funny part is, even to this day I don’t feel the platform is 100% finished. I mean, it’s production ready and currently trades live doing more volume algorithmically than I do manually (by an insane margin,) but there’s always a growing list of features I want to add and things I want to improve upon.

I can’t go into depth on the strategies themselves, or the code behind them, but this was a general overview of the platform and my approach to creating it.

I hope some people who are going down this road find it helpful.
 
Hi Jack, thanks for the post. Very interesting. Re Ninja: is it possible to use ZeroMQ (after the download and the bin / NT script reference steps above) to actually pipe the data, so e.last, e.volume etc. from NT go directly into a Python script or pandas dataframe (append)?
If not, do you know of any C# libs or Python libs that could do this?
Thanks for any advice in advance, JM
 

Yes, possible. Any value that you can work with inside NT8/C# can be piped out through ZMQ to another app.
 
Sigh... the overwhelming majority of trades are conducted electronically or otherwise via algos these days. It's not a matter of being better or worse, just different.

I'm talking about my tech stack in this thread, not debating the merits of algo trading.

If you feel that your manual analysis is solid, please contribute to the forums and start a thread about your approach and the strengths it has over other forms of trading. Thanks!
 
The Language

To update this post, I'm now using Python 3 (and have ported over all my code from Python 2 to Python 3) directly from https://python.org/downloads/

I went through the pain of porting everything over to be compatible with a few third party libraries I was interested in using (and because the packages available for Python 3 have really caught up to Python 2 in the past two years.)

I've also upgraded from PyQt4 to PyQt5. I'll be experimenting with presenting data as a custom widget via Qt5 instead of using the built-in tables like before. I'll detail more about this later if I get it working right, but the idea would be to use images/bitmaps in memory to render parts of the interface, kinda like how a game renders characters/environments.

Also, since I moved away from Anaconda and onto the vanilla Python.org distribution, I've extended the dependency install script to include a lot more packages. Overall this is still way smaller and faster to install/configure on a new box than Anaconda (not to mention easier to keep up to date.)
 
Commercial Third Party Software - MetaTrader 5 (+ZeroMQ DLLs)

Again, why reinvent the wheel? Not only can MT5 do a good job at time series analysis (just as I mentioned NinjaTrader 8 can in a previous post,) but building a broker and datafeed interface to MetaTrader 5 lets me leverage the products of the many brokers offering exchange access through MT5.

So I created one MT5 Expert Advisor and two indicators that interface with my Python platform.

To get MT5 connected, I used the ZeroMQ library found here:
https://github.com/dingmaotu/mql-zmq
And ran two types of connections:
  • The Expert sits as a listener for commands (like to request streaming data, or to place/cancel a trade,) and this is just a standard REP/REQ socket via ZeroMQ.
  • And two indicators are loaded dynamically as needed, both to report order status and to send quotes. These are created by the Expert (via an iCustom call,) and the indicators themselves just run a PUB socket that authenticates and publishes info back to my platform's data distributor (to which other processes are subscribed.)
It's actually quite simple and works well. Right now I'm not executing much on MT5 because there isn't much of a competitive reason for me to do so broker wise (better pricing / products / etc.. available through non-metatrader brokers,) but using the data provided by some brokers in real time is very useful.
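
For reference, the Python side of that REQ/REP command channel is only a few lines; the port and the command string are placeholders for whatever your Expert actually parses:

Code:
import zmq

context = zmq.Context()
req = context.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5580")   # wherever the Expert's REP socket binds

req.send_string("SUBSCRIBE|EURUSD")   # command format is up to your EA
print(req.recv_string())              # e.g. an "OK" acknowledgement back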
 
Misc. IEX Tools

This isn't a huge part of the application, but for quick reference and as an excuse to learn IEX's API, I put together a few tools that lean on the free data IEX provides:

https://iextrading.com/developer/

[Screenshot: the misc. IEX data tools.]


I'm not making any algo trades based off IEX data, but the earnings calendar (for example) saves me from having to dig through Nasdaq Trader's clunky website and pulls data as fast as I can launch it.

I mostly just stuff the data into a few python dictionaries or lists (or lists of dictionaries,) then feed it into a table to display to the user as such:

[Screenshot: IEX data rendered in a table.]


Again, not a huge deal or anything complex, just an excuse to learn what IEX can provide and how to use their system.
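
The pattern is just an HTTP GET into a list of dicts. A sketch against IEX's free API as documented at the time; the endpoint path and field names should be verified against their current docs:

Code:
import requests

# Endpoint and field names are from IEX's v1 docs at the time of writing;
# verify against the current documentation before relying on them.
url = "https://api.iextrading.com/1.0/stock/AAPL/quote"
quote = requests.get(url).json()   # plain dict, straight from JSON

rows = [{"symbol": quote["symbol"], "last": quote["latestPrice"]}]
print(rows)                        # ready to feed into a table widget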
 
Side note:

Darwinex just posted a series of videos on using ZeroMQ with MT4. It's very detailed, and Darwinex has really polished their ZeroMQ-to-MT4 code over the last few years. This interface will let people do what I'm doing here: run the business logic of their code externally to MT4.

More info here on MT4 and ZeroMQ from Darwinex

And video series on YouTube here:


(I'm not using this code myself, as I built mine for MT5.. but I did read through their code for MT4 back when I wrote my own to get an idea of how they approached the problem. It's a great reference and good place to start.)
 
Jack

Thanks once again for an informative article.
I still only code in MQL, all on H1, so execution speed is not a significant issue.

I have yet to delve into the depths of Python, although I will require it, or a competent coder, for some Odoo apps.

Could you detail the advantages of using Python over MQL?
 