An instrument is a tradable asset. Instruments widely traded in crypto finance are spot and derivatives; the latter are subdivided into futures, swaps (perpetuals, or "perps") and options.
A symbol is the name (alias) of an instrument traded on an exchange.
For spot, the name is usually a concatenation of the base currency and the quote currency (e.g. BTC/USD or BTCUSD).
For derivatives, the symbol contains information about the instrument traded (e.g. BTC-PERP).
A Dataset is a type of exchange data that is relevant to different analysis tasks. Depending on the task, some datasets are more suitable than others.
We support:
- Trades: a record of every execution (trade). We also record informative data such as mark price updates, index price updates, liquidations, implied value, etc., when relevant and when the exchange provides it (see the sample row after this list).
- Aggregated, lower-frequency data, for a high-level view of the market.
- Order book updates (aka L2 orderbook data): the most granular type of data supported on most exchanges. All book-level updates on the bid and ask sides are recorded. Orderbook snapshots can be produced from this data by replaying each update.
- Orderbook snapshots: already reconstructed orderbook snapshots, to save you the hassle. Several options are provided: the number of levels required and the update frequency.
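As an illustration, a single trade record could look like the following; the field names and values are assumptions made for this example, not the actual Deepmarker schema:

```python
# Hypothetical example of one row of a trades dataset.
# Field names and values are illustrative, not the actual Deepmarker schema.
trade = {
    "timestamp": "2024-01-15T09:30:01.123456Z",  # exchange event time
    "symbol": "BTC-PERP",                        # instrument symbol
    "side": "buy",                               # aggressor (taker) side
    "price": 42315.5,                            # execution price
    "quantity": 0.25,                            # executed quantity
}
```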
Deepmarker credits are used to pay for the consumption of resources on the Deepmarker platform. A Deepmarker credit is a unit of measure, consumed only when you order a data product or run a data pre-processing job. Credits do not expire and can be used on all Deepmarker products.
Order book updates (aka L2 orderbook data) are the most granular type of orderbook data. They contain updates of book levels: each time a price level in the book changes (its quantity is updated, or the level is deleted), an update is recorded. Each new update on either side (bid or ask) for a given price is therefore recorded as a row in the file, carrying the new volume (or a deletion).
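For intuition, here is a short hypothetical sequence of such rows; the field names, and the convention that a zero quantity means the level was deleted, are assumptions for the example:

```python
# Hypothetical L2 update rows. The field names and the
# "quantity == 0 means the level was deleted" convention are
# illustrative only.
updates = [
    {"timestamp": 1700000000.001, "side": "bid", "price": 42315.5, "quantity": 1.20},
    {"timestamp": 1700000000.045, "side": "ask", "price": 42316.0, "quantity": 0.80},
    {"timestamp": 1700000000.102, "side": "bid", "price": 42315.5, "quantity": 0.0},  # deletion
]
```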
Orderbook snapshots provide bid and ask quantities for each price level in a set of rows, recorded either:
- at each book update, or
- at fixed time intervals (every N seconds).

At order time, you can select whether you want snapshots at each update or every N seconds, as well as the number of levels you need. This has a big impact on the volume of data produced: frequent snapshots with many levels will produce very large files.
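To make the impact on volume concrete, here is a rough back-of-envelope estimate; the update rate, level count and row size are made-up numbers for illustration, not Deepmarker measurements:

```python
# Rough size estimate for a snapshots dataset; all inputs are
# made-up illustrative numbers, not Deepmarker measurements.
levels = 25                  # levels kept per side
row_bytes = 40               # assumed bytes per (price, quantity) row
updates_per_day = 5_000_000  # assumed book updates on a busy market

# Snapshot at every update: one full snapshot per update.
per_update = updates_per_day * 2 * levels * row_bytes
# Snapshot every 5 seconds: 17_280 snapshots per day.
per_interval = (86_400 // 5) * 2 * levels * row_bytes

print(f"every update : {per_update / 1e9:.1f} GB/day")   # 10.0 GB/day
print(f"every 5 sec  : {per_interval / 1e9:.3f} GB/day") # 0.035 GB/day
```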
Orderbook updates are the most granular orderbook data available, but they require processing before they can be used in a backtesting process.
Our snapshots are produced from updates, and while this process should be bug-free, using the raw updates gives you full control over the reconstruction; updates are also more compact in terms of storage than snapshots.
The drawback is that your backtesting pipeline must usually include a reconstruction step in order to do something useful with this data.
However, if your backtesting process involves querying the orderbook at each point in time, you might prefer already reconstructed snapshots, to save yourself the hassle of doing the reconstruction and to avoid potential bugs in that step.
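As a minimal sketch of such a reconstruction step, reusing the hypothetical `updates` rows from the example above (a zero quantity removes the level):

```python
# Minimal book reconstruction from L2 updates (a sketch, assuming the
# hypothetical row format shown earlier: quantity == 0 deletes a level).
def apply_update(book, update):
    """book = {"bid": {price: qty}, "ask": {price: qty}}"""
    side = book[update["side"]]
    if update["quantity"] == 0:
        side.pop(update["price"], None)             # level removed from the book
    else:
        side[update["price"]] = update["quantity"]  # level added or changed

def top_levels(book, n):
    """Return the best n levels per side as (price, qty) lists."""
    bids = sorted(book["bid"].items(), reverse=True)[:n]  # highest bids first
    asks = sorted(book["ask"].items())[:n]                # lowest asks first
    return bids, asks

book = {"bid": {}, "ask": {}}
for u in updates:            # `updates` as defined in the earlier example
    apply_update(book, u)
print(top_levels(book, 5))
```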
When ordering an orderbook snapshots dataset, you can select how many levels you want in the dataset. Each snapshot will contain at most N levels on each side of the orderbook.
When ordering an orderbook snapshots dataset, you can also select the snapshot frequency, i.e. how often a snapshot of the orderbook is recorded. It can be either immediate (each time one of the N levels of the book is updated or deleted) or at a fixed time interval (e.g. every 5 seconds), at your convenience.
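Continuing the sketch above (it reuses `apply_update` and `top_levels`), the two frequency modes could look roughly like this; the mode names, emission rule and interval default are illustrative assumptions:

```python
# Sketch of the two snapshot frequency modes; mode names and the
# 5-second default are illustrative, not Deepmarker parameters.
def replay(updates, n_levels, mode="interval", every_s=5.0):
    book = {"bid": {}, "ask": {}}
    snapshots, last_emit = [], None
    for u in updates:
        before = top_levels(book, n_levels)
        apply_update(book, u)
        ts = u["timestamp"]
        if mode == "immediate":
            # Emit only when one of the top N levels actually changed.
            if top_levels(book, n_levels) != before:
                snapshots.append((ts, top_levels(book, n_levels)))
        elif last_emit is None or ts - last_emit >= every_s:
            snapshots.append((ts, top_levels(book, n_levels)))
            last_emit = ts
    return snapshots
```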
Quote is quite an overloaded term in finance and can mean many things. In the context of crypto data, however, it usually refers to the top-of-book levels (i.e. the best bid and best ask), typically with the associated available quantities, recorded every time the best bid/ask price changes (or the quantity available at it does).
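For example, here are two statistics commonly derived from quote data, assuming a hypothetical row layout with bid/ask price and quantity fields:

```python
# Mid-price and spread from a hypothetical quote row; the field
# names are assumptions for the example.
quote = {"bid_price": 42315.5, "bid_qty": 1.2,
         "ask_price": 42316.0, "ask_qty": 0.8}

mid = (quote["bid_price"] + quote["ask_price"]) / 2  # mid-price
spread = quote["ask_price"] - quote["bid_price"]     # absolute spread
spread_bps = 10_000 * spread / mid                   # spread in basis points
print(mid, spread, round(spread_bps, 3))
```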
Any company or individual who wants to explore in depth what happens on crypto exchanges can make use of this data. Typically proprietary trading firms, hedge funds, indie quants, research institutions, and so on.
Backtesting is the process of running a trading strategy (or algorithm) on historical data to assess its past performance on various datasets. If acceptable performance is observed, the strategy may be worth working on further and eventually trading live on exchanges.
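As a toy illustration of the idea, here is a deliberately naive moving-average strategy on made-up prices (not a realistic backtest):

```python
# Toy backtest: go long when price is above its moving average,
# flat otherwise. Prices and parameters are made up for illustration.
prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]
window = 3

position, cash, qty = 0, 0.0, 1.0
for i in range(window, len(prices)):
    ma = sum(prices[i - window:i]) / window
    price = prices[i]
    if price > ma and position == 0:      # enter long
        position, cash = 1, cash - qty * price
    elif price <= ma and position == 1:   # exit to flat
        position, cash = 0, cash + qty * price

if position == 1:                         # mark-to-market at the end
    cash += qty * prices[-1]
print(f"PnL: {cash:+.2f}")
```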