AAT

AsyncAlgoTrading

  • Build Status: https://dev.azure.com/tpaine154/aat/_apis/build/status/AsyncAlgoTrading.aat?branchName=master
  • Coverage: https://img.shields.io/azure-devops/coverage/tpaine154/aat/19/master
  • License: https://img.shields.io/github/license/timkpaine/aat.svg
  • PyPI: https://img.shields.io/pypi/v/aat.svg
  • Docs: https://img.shields.io/readthedocs/aat.svg

aat is a framework for writing algorithmic trading strategies in python. It is designed to be modular and extensible, and is the core engine powering AlgoCoin.

It comes with support for live trading across (and between) multiple exchanges, fully integrated backtesting support, slippage and transaction cost modeling, and robust reporting and risk mitigation through manual and programmatic algorithm controls.

Like Zipline, the inspiration for this system, aat exposes a single strategy class which is utilized for both live trading and backtesting. The strategy class is simple enough to write and test algorithms quickly, but extensible enough to allow for complex slippage and transaction cost modeling, as well as mid- and post-trade analysis.

Overview

aat is composed of four major parts:

  • trading engine
  • risk management engine
  • execution engine
  • backtest engine

Trading Engine

The trading engine initializes all exchanges and strategies, then marshals data, trade requests, and trade responses between the strategy, risk, execution, and exchange objects, while keeping track of high-level statistics on the system.
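
A hypothetical sketch of that marshaling step, assuming nothing beyond this document: the event types and callback names mirror the strategy callback table later in this page, but the real TradingEngine internals may differ.

```python
from enum import Enum

class EventType(Enum):
    TRADE = 'trade'
    OPEN = 'open'
    FILL = 'fill'
    CANCEL = 'cancel'
    CHANGE = 'change'
    ERROR = 'error'

# map each event type to the strategy callback it triggers
CALLBACK_FOR = {
    EventType.TRADE: 'onTrade',
    EventType.OPEN: 'onOpen',
    EventType.FILL: 'onFill',
    EventType.CANCEL: 'onCancel',
    EventType.CHANGE: 'onChange',
    EventType.ERROR: 'onError',
}

def dispatch(strategies, event_type, event):
    '''Forward an event to the matching callback on every registered strategy.'''
    for strategy in strategies:
        handler = getattr(strategy, CALLBACK_FOR[event_type], None)
        if handler is not None:
            handler(event)
```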

Risk Management Engine

The risk management engine enforces trading limits, making sure that strategies are limited to certain risk profiles. It can modify or remove trade requests prior to execution depending on user preferences and outstanding positions and orders.
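
As an illustrative sketch only (not the actual aat risk engine), a modify-or-remove check like the one described above might clip an order's volume so total notional exposure stays under a limit; the `max_notional` semantics here are an assumption:

```python
def risk_check(volume, price, outstanding_notional, max_notional):
    '''Return the volume allowed for this order: unchanged, reduced, or zero.'''
    requested = volume * price
    available = max_notional - outstanding_notional
    if available <= 0:
        return 0.0                # remove the request entirely
    if requested <= available:
        return volume             # pass through unmodified
    return available / price      # modify: clip to the remaining budget
```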

Execution Engine

The execution engine is a simple passthrough to the underlying exchanges. It provides a unified interface for creating various types of orders.

Backtest Engine

The backtest engine provides the ability to run the same strategy offline against historical data.

Trading Strategy

The core element of aat is the trading strategy interface. It includes both data processing and order management functionality. Users subclass this class in order to implement their strategies.

Class

from abc import ABCMeta, abstractmethod

class Strategy(metaclass=ABCMeta):
    @abstractmethod
    def onTrade(self, event: Event):
        '''Called whenever a `Trade` event is received'''

    def onOpen(self, event: Event):
        '''Called whenever an Order `Open` event is received'''

    def onFill(self, event: Event):
        '''Called whenever an Order `Fill` event is received'''

    def onCancel(self, event: Event):
        '''Called whenever an Order `Cancel` event is received'''

    def onChange(self, event: Event):
        '''Called whenever an Order `Change` event is received'''

    def onError(self, event: Event):
        '''Called whenever an internal error occurs'''

    def onStart(self):
        '''Called once at engine initialization time'''
        pass

    def onExit(self):
        '''Called once at engine exit time'''
        pass

    def onHalt(self, data):
        '''Called whenever an exchange `Halt` event is received, i.e. an event to stop trading'''
        pass

    def onContinue(self, data):
        '''Called whenever an exchange `Continue` event is received, i.e. an event to continue trading'''
        pass

    def onAnalyze(self, engine):
        '''Called once after engine exit to analyze the results of a backtest'''
        pass

    @abstractmethod
    def requestBuy(self,
                   callback: Callback,
                   data: MarketData):
        '''requestBuy'''

    @abstractmethod
    def requestSell(self,
                    callback: Callback,
                    data: MarketData):
        '''requestSell'''

Example Strategy

Here is a simple trading strategy that buys once and holds.

from aat.strategy import TradingStrategy
from aat.structs import MarketData, TradeRequest, TradeResponse
from aat.enums import Side, OrderType
from aat.logging import STRAT as slog, ERROR as elog

class BuyAndHoldStrategy(TradingStrategy):
    def __init__(self) -> None:
        super(BuyAndHoldStrategy, self).__init__()
        self.bought = None

    def onFill(self, res: TradeResponse) -> None:
        self.bought = res
        slog.info('d->g:bought %.2f @ %.2f' % (res.volume, res.price))

    def onTrade(self, data: MarketData) -> None:
        if self.bought is None:
            req = TradeRequest(side=Side.BUY,
                               volume=1,
                               instrument=data.instrument,
                               order_type=OrderType.MARKET,
                               exchange=data.exchange,
                               price=data.price,
                               time=data.time)
            slog.info("requesting buy : %s", req)
            self.requestBuy(req)
            self.bought = 'pending'

    def onError(self, e) -> None:
        elog.critical(e)

    def onChange(self, data: MarketData) -> None:
        pass

    def onCancel(self, data: MarketData) -> None:
        pass

    def onOpen(self, data: MarketData) -> None:
        pass

Trading strategies have only one required message-handling method:

  • onTrade: Called when a trade occurs

There are other optional callbacks for more granular processing:

  • onOpen: Called when a new order occurs
  • onFill: Called when a strategy’s trade executes
  • onCancel: Called when an order is cancelled
  • onChange: Called when an order is modified
  • onError: Called when a system error occurs
  • onHalt: Called when trading is halted
  • onContinue: Called when trading continues
  • onStart: Called when the program starts
  • onExit: Called when the program shuts down

There are also several optional callbacks for backtesting:

  • slippage
  • transactionCost
  • onAnalyze
    • called after trading engine has processed all data, used to visualize algorithm performance

Setting up and running

An instance of the TradingStrategy class is able to run live or against a set of historical trade/quote data. When instantiating a TradingEngine object, you can set its type attribute to one of:

  • live - live trading against the exchange
  • simulation - live trading against the exchange, but with order entry disabled
  • sandbox - live trading against the exchange’s sandbox instance
  • backtest - offline trading against historical OHLCV data

To test our strategy in any mode, we will need to set up exchange keys to get historical data, stream market data, and make new orders.

API Keys

You should create API keys for the exchanges you wish to trade on. For this example, we will assume a Coinbase Pro account with trading enabled. I usually put my keys in a set of shell scripts that are gitignored, so I don’t post anything by accident. My scripts look something like:

export COINBASE_API_KEY=...
export COINBASE_API_SECRET=...
export COINBASE_API_PASS=...

Prior to running, I source the keys I need.

Sandboxes

Currently only the Gemini sandbox is supported; the other exchanges have discontinued theirs. To run in sandbox, set TradingEngine.type to Sandbox.

Live Trading

When you want to run live, set TradingEngine.type to Live. You will want to become familiar with the risk and execution engines, as these control things like max drawdown, max risk accrual, execution eagerness, etc.

Simulation Trading

When you want to run an algorithm live, but don’t yet trust that it can make money, set TradingEngine.type to simulation. This will run it against live market data with order entry disabled. You can then set things like slippage and transaction costs as you would in a backtest.

Testing

Because there are a variety of options, a config file is generally the most usable interface for configuration. Here is an example configuration for backtesting the buy-and-hold strategy above on Coinbase Pro:

> cat backtest.cfg
[general]
verbose=1
print=0
TradingType=backtest

[exchange]
exchanges=coinbase
currency_pairs=BTC/USD

[strategy]
strategies =
    aat.strategies.buy_and_hold.BuyAndHoldStrategy

[risk]
max_drawdown = 100.0
max_risk = 100.0
total_funds = 10.0

Analyzing an algorithm

We can run the above config by running:

python3 -m aat ./backtest.cfg

We should see the following output:

2019-06-01 17:58:40,173 INFO -- MainProcess utils.py:247 -- running in verbose mode!
2019-06-01 17:58:40,174 CRITICAL -- MainProcess parser.py:165 --
2019-06-01 17:58:40,174 CRITICAL -- MainProcess parser.py:166 -- Backtesting
2019-06-01 17:58:40,174 CRITICAL -- MainProcess parser.py:167 --
2019-06-01 17:58:40,176 CRITICAL -- MainProcess trading.py:106 -- Registering strategy: <class 'aat.strategies.buy_and_hold.BuyAndHoldStrategy'>
2019-06-01 17:58:40,177 INFO -- MainProcess backtest.py:25 -- Starting....
2019-06-01 17:58:41,338 INFO -- MainProcess buy_and_hold.py:28 -- requesting buy : <BTC/USD-Side.BUY:1.0@8567.06-OrderType.MARKET-ExchangeType.COINBASE>
2019-06-01 17:58:41,339 INFO -- MainProcess risk.py:59 -- Requesting 1.000000 @ 8567.060000
2019-06-01 17:58:41,339 INFO -- MainProcess risk.py:80 -- Risk check passed for partial order: <BTC/USD-Side.BUY:1.0@8567.06-OrderType.MARKET-ExchangeType.COINBASE>
2019-06-01 17:58:41,339 INFO -- MainProcess trading.py:244 -- Risk check passed
2019-06-01 17:58:41,339 INFO -- MainProcess trading.py:292 -- Slippage BT- <BTC/USD-Side.BUY:1.0@8567.916706-TradeResult.FILLED-ExchangeType.COINBASE>
2019-06-01 17:58:41,340 INFO -- MainProcess trading.py:295 -- TXN cost BT- <BTC/USD-Side.BUY:1.0@8589.336497765-TradeResult.FILLED-ExchangeType.COINBASE>
2019-06-01 17:58:41,340 INFO -- MainProcess buy_and_hold.py:14 -- d->g:bought 1.00 @ 8589.34
2019-06-01 17:58:41,340 INFO -- MainProcess backtest.py:42 -- <BTC/USD-1.29050038@8567.06-TickType.TRADE-ExchangeType.COINBASE>
...
2019-06-01 17:58:41,474 INFO -- MainProcess backtest.py:42 -- <BTC/USD-2.35773043@8595.0-TickType.TRADE-ExchangeType.COINBASE>
2019-06-01 17:58:41,474 INFO -- MainProcess backtest.py:33 -- Backtest done, running analysis.

This will call our onAnalyze function, which in this case is implemented to plot some performance characteristics with matplotlib.

        import pandas
        import numpy as np
        import matplotlib, matplotlib.pyplot as plt
        import seaborn as sns
        matplotlib.rc('font', **{'size': 5})

        # extract data from trading engine
        portfolio_value = engine.portfolio_value()
        requests = engine.query().query_tradereqs()
        responses = engine.query().query_traderesps()
        trades = pandas.DataFrame([{'time': x.time, 'price': x.price} for x in engine.query().query_trades(instrument=requests[0].instrument, page=None)])
        trades.set_index(['time'], inplace=True)

        # format into pandas
        pd = pandas.DataFrame(portfolio_value, columns=['time', 'value', 'pnl'])
        pd.set_index(['time'], inplace=True)

        # setup charting
        sns.set_style('darkgrid')
        fig = plt.figure()
        ax1 = fig.add_subplot(311)
        ax2 = fig.add_subplot(312)
        ax3 = fig.add_subplot(313)

        # plot algo performance
        pd.plot(ax=ax1, y=['value'], legend=False, fontsize=5, rot=0)

        # plot up/down chart
        pd['pos'] = pd['pnl']
        pd['neg'] = pd['pnl']
        pd.loc[pd['pos'] <= 0, 'pos'] = np.nan
        pd.loc[pd['neg'] > 0, 'neg'] = np.nan
        pd.plot(ax=ax2, y=['pos', 'neg'], kind='area', stacked=False, color=['green', 'red'], legend=False, linewidth=0, fontsize=5, rot=0)

        # annotate with key data
        ax1.set_title('Performance')
        ax1.set_ylabel('Portfolio value($)')
        for xy in [portfolio_value[0][:2]] + [portfolio_value[-1][:2]]:
            ax1.annotate('$%s' % xy[1], xy=xy, textcoords='data')
            ax3.annotate('$%s' % xy[1], xy=xy, textcoords='data')

        # plot trade intent/trade action
        ax3.set_ylabel('Intent/Action')
        ax3.set_xlabel('Date')

        ax3.plot(trades)
        ax3.plot([x.time for x in requests if x.side == Side.BUY],
                 [x.price for x in requests if x.side == Side.BUY],
                 '2', color='y')
        ax3.plot([x.time for x in requests if x.side == Side.SELL],
                 [x.price for x in requests if x.side == Side.SELL],
                 '1', color='y')
        ax3.plot([x.time for x in responses if x.side == Side.BUY],  # FIXME
                 [x.price for x in responses if x.side == Side.BUY],
                 '^', color='g')
        ax3.plot([x.time for x in responses if x.side == Side.SELL],  # FIXME
                 [x.price for x in responses if x.side == Side.SELL],
                 'v', color='r')

        # set same limits
        y_bot, y_top = ax1.get_ylim()
        x_bot, x_top = ax1.get_xlim()
        ax3.set_ylim(y_bot, y_top)
        ax1.set_xlim(x_bot, x_top)
        ax2.set_xlim(x_bot, x_top)
        ax3.set_xlim(x_bot, x_top)
        dif = (x_top-x_bot)*.01
        ax1.set_xlim(x_bot-dif, x_top+dif)
        ax2.set_xlim(x_bot-dif, x_top+dif)
        ax3.set_xlim(x_bot-dif, x_top+dif)
        plt.show()

(figure: backtest performance plots, _images/bt.png)

We can see that our algorithm also implemented slippage and transactionCost, resulting in a worse execution price:

    def slippage(self, resp: TradeResponse) -> TradeResponse:
        slippage = resp.price * .0001  # .01% price impact
        if resp.side == Side.BUY:
            # price moves against (up)
            resp.slippage = slippage
            resp.price += slippage
        else:
            # price moves against (down)
            resp.slippage = -slippage
            resp.price -= slippage
        return resp

    def transactionCost(self, resp: TradeResponse) -> TradeResponse:
        txncost = resp.price * resp.volume * .0025  # gdax is 0.0025 max fee
        if resp.side == Side.BUY:
            # price moves against (up)
            resp.transaction_cost = txncost
            resp.price += txncost
        else:
            # price moves against (down)
            resp.transaction_cost = -txncost
            resp.price -= txncost
        return resp
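
Walking through the arithmetic, these two callbacks reproduce the fill price seen in the backtest log above for a 1 BTC market buy at 8567.06:

```python
price, volume = 8567.06, 1.0

slippage = price * .0001         # 0.01% price impact moves against the buy
price += slippage                # matches the "Slippage BT" log line (8567.916706)

txncost = price * volume * .0025  # 0.25% fee applied to the slipped price
price += txncost                  # matches the "TXN cost BT" log line (8589.336497765)

print(price)
```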

Extending

Apart from writing new strategies, this library can be extended by adding new exchanges. These are pretty simple. For cryptocurrency exchanges, I rely heavily on ccxt, asyncio, and websocket libraries.

Example

Here is the coinbase exchange. Most of the code is to manage different websocket subscription options, and to convert between aat, ccxt and exchange-specific formatting of things like symbols, order types, etc.

class CoinbaseExchange(Exchange):

Core Elements

TradingEngine

Data Classes

Data

Event

Instrument and Trade Models

Instrument

Trade

Exchange

Orderbook

We implement a full limit-order book, supporting the following order types:

Market

Executes the entire volume. If a price is specified, will execute (price*volume) worth (i.e. it relies on total price, not volume).

Limit

Either puts the order on the book, or crosses spread triggering a trade. By default puts remainder of unexecuted volume on book.

Stop-Market

When trade prices cross the target price, triggers a market order.

Stop-Limit

When trade prices cross the target price, triggers a limit order.

Flags

We support a number of order flags for Market and Limit orders:

  • No Flag: default behavior for the given order type
  • Fill-Or-Kill:
    • Market Order: the entire order must fill against the current book, otherwise nothing fills
    • Limit Order: the entire order must fill against the current book, otherwise nothing fills and the order is cancelled
  • All-Or-None:
    • Market Order: the entire order must fill against a single resting order, otherwise nothing fills
    • Limit Order: the entire order must fill against a single resting order, otherwise nothing fills and the order is cancelled
  • Immediate-Or-Cancel:
    • Market Order: same as Fill-Or-Kill
    • Limit Order: fill whatever can fill immediately, then cancel the remainder
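
A minimal sketch of the fill-or-kill feasibility check described above, not the actual aat orderbook implementation: it assumes one side of the book is a list of (price, size) levels sorted best-first, and answers whether the whole volume can fill (respecting the limit price for limit orders).

```python
def fok_fillable(levels, volume, limit_price=None):
    '''True if `volume` fills entirely against the resting levels, else False.'''
    remaining = volume
    for price, size in levels:
        if limit_price is not None and price > limit_price:
            break                 # a limit order cannot cross past its price
        remaining -= size
        if remaining <= 0:
            return True           # the entire order fills
    return False                  # only a partial fill is possible -> nothing fills
```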

API Documentation

Logging package for Python. Based on PEP 282 and comments thereto in comp.lang.python.

Copyright (C) 2001-2017 Vinay Sajip. All Rights Reserved.

To use, simply ‘import logging’ and log away!

class logging.BufferingFormatter(linefmt=None)[source]

Bases: object

A formatter suitable for formatting a number of records.

format(records)[source]

Format the specified records and return the result as a string.

formatFooter(records)[source]

Return the footer string for the specified records.

formatHeader(records)[source]

Return the header string for the specified records.

class logging.FileHandler(filename, mode='a', encoding=None, delay=False)[source]

Bases: logging.StreamHandler

A handler class which writes formatted logging records to disk files.

close()[source]

Closes the stream.

emit(record)[source]

Emit a record.

If the stream was not opened because ‘delay’ was specified in the constructor, open it before calling the superclass’s emit.

class logging.Filter(name='')[source]

Bases: object

Filter instances are used to perform arbitrary filtering of LogRecords.

Loggers and Handlers can optionally use Filter instances to filter records as desired. The base filter class only allows events which are below a certain point in the logger hierarchy. For example, a filter initialized with “A.B” will allow events logged by loggers “A.B”, “A.B.C”, “A.B.C.D”, “A.B.D” etc. but not “A.BB”, “B.A.B” etc. If initialized with the empty string, all events are passed.

filter(record)[source]

Determine if the specified record is to be logged.

Is the specified record to be logged? Returns 0 for no, nonzero for yes. If deemed appropriate, the record may be modified in-place.

class logging.Formatter(fmt=None, datefmt=None, style='%')[source]

Bases: object

Formatter instances are used to convert a LogRecord to text.

Formatters need to know how a LogRecord is constructed. They are responsible for converting a LogRecord to (usually) a string which can be interpreted by either a human or an external system. The base Formatter allows a formatting string to be specified. If none is supplied, the the style-dependent default value, “%(message)s”, “{message}”, or “${message}”, is used.

The Formatter can be initialized with a format string which makes use of knowledge of the LogRecord attributes - e.g. the default value mentioned above makes use of the fact that the user’s message and arguments are pre- formatted into a LogRecord’s message attribute. Currently, the useful attributes in a LogRecord are described by:

%(name)s Name of the logger (logging channel) %(levelno)s Numeric logging level for the message (DEBUG, INFO,

WARNING, ERROR, CRITICAL)
%(levelname)s Text logging level for the message (“DEBUG”, “INFO”,
“WARNING”, “ERROR”, “CRITICAL”)
%(pathname)s Full pathname of the source file where the logging
call was issued (if available)

%(filename)s Filename portion of pathname %(module)s Module (name portion of filename) %(lineno)d Source line number where the logging call was issued

(if available)

%(funcName)s Function name %(created)f Time when the LogRecord was created (time.time()

return value)

%(asctime)s Textual time when the LogRecord was created %(msecs)d Millisecond portion of the creation time %(relativeCreated)d Time in milliseconds when the LogRecord was created,

relative to the time the logging module was loaded (typically at application startup time)

%(thread)d Thread ID (if available) %(threadName)s Thread name (if available) %(process)d Process ID (if available) %(message)s The result of record.getMessage(), computed just as

the record is emitted
converter()
localtime([seconds]) -> (tm_year,tm_mon,tm_mday,tm_hour,tm_min,
tm_sec,tm_wday,tm_yday,tm_isdst)

Convert seconds since the Epoch to a time tuple expressing local time. When ‘seconds’ is not passed in, convert the current time instead.

default_msec_format = '%s,%03d'
default_time_format = '%Y-%m-%d %H:%M:%S'
format(record)[source]

Format the specified record as text.

The record’s attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime(), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.

formatException(ei)[source]

Format and return the specified exception information as a string.

This default implementation just uses traceback.print_exception()

formatMessage(record)[source]
formatStack(stack_info)[source]

This method is provided as an extension point for specialized formatting of stack information.

The input data is a string as returned from a call to traceback.print_stack(), but with the last trailing newline removed.

The base implementation just returns the value passed in.

formatTime(record, datefmt=None)[source]

Return the creation time of the specified LogRecord as formatted text.

This method should be called from format() by a formatter which wants to make use of a formatted time. This method can be overridden in formatters to provide for any specific requirement, but the basic behaviour is as follows: if datefmt (a string) is specified, it is used with time.strftime() to format the creation time of the record. Otherwise, an ISO8601-like (or RFC 3339-like) format is used. The resulting string is returned. This function uses a user-configurable function to convert the creation time to a tuple. By default, time.localtime() is used; to change this for a particular formatter instance, set the ‘converter’ attribute to a function with the same signature as time.localtime() or time.gmtime(). To change it for all formatters, for example if you want all logging times to be shown in GMT, set the ‘converter’ attribute in the Formatter class.

usesTime()[source]

Check if the format uses the creation time of the record.

class logging.Handler(level=0)[source]

Bases: logging.Filterer

Handler instances dispatch logging events to specific destinations.

The base handler class. Acts as a placeholder which defines the Handler interface. Handlers can optionally use Formatter instances to format records as desired. By default, no formatter is specified; in this case, the ‘raw’ message as determined by record.message is logged.

acquire()[source]

Acquire the I/O thread lock.

close()[source]

Tidy up any resources used by the handler.

This version removes the handler from an internal map of handlers, _handlers, which is used for handler lookup by name. Subclasses should ensure that this gets called from overridden close() methods.

createLock()[source]

Acquire a thread lock for serializing access to the underlying I/O.

emit(record)[source]

Do whatever it takes to actually log the specified logging record.

This version is intended to be implemented by subclasses and so raises a NotImplementedError.

flush()[source]

Ensure all logging output has been flushed.

This version does nothing and is intended to be implemented by subclasses.

format(record)[source]

Format the specified record.

If a formatter is set, use it. Otherwise, use the default formatter for the module.

get_name()[source]
handle(record)[source]

Conditionally emit the specified logging record.

Emission depends on filters which may have been added to the handler. Wrap the actual emission of the record with acquisition/release of the I/O thread lock. Returns whether the filter passed the record for emission.

handleError(record)[source]

Handle errors which occur during an emit() call.

This method should be called from handlers when an exception is encountered during an emit() call. If raiseExceptions is false, exceptions get silently ignored. This is what is mostly wanted for a logging system - most users will not care about errors in the logging system, they are more interested in application errors. You could, however, replace this with a custom handler if you wish. The record which was being processed is passed in to this method.

name
release()[source]

Release the I/O thread lock.

setFormatter(fmt)[source]

Set the formatter for this handler.

setLevel(level)[source]

Set the logging level of this handler. level must be an int or a str.

set_name(name)[source]
class logging.LogRecord(name, level, pathname, lineno, msg, args, exc_info, func=None, sinfo=None, **kwargs)[source]

Bases: object

A LogRecord instance represents an event being logged.

LogRecord instances are created every time something is logged. They contain all the information pertinent to the event being logged. The main information passed in is in msg and args, which are combined using str(msg) % args to create the message field of the record. The record also includes information such as when the record was created, the source line where the logging call was made, and any exception information to be logged.

getMessage()[source]

Return the message for this LogRecord.

Return the message for this LogRecord after merging any user-supplied arguments with the message.

class logging.Logger(name, level=0)[source]

Bases: logging.Filterer

Instances of the Logger class represent a single logging channel. A “logging channel” indicates an area of an application. Exactly how an “area” is defined is up to the application developer. Since an application can have any number of areas, logging channels are identified by a unique string. Application areas can be nested (e.g. an area of “input processing” might include sub-areas “read CSV files”, “read XLS files” and “read Gnumeric files”). To cater for this natural nesting, channel names are organized into a namespace hierarchy where levels are separated by periods, much like the Java or Python package namespace. So in the instance given above, channel names might be “input” for the upper level, and “input.csv”, “input.xls” and “input.gnu” for the sub-levels. There is no arbitrary limit to the depth of nesting.

addHandler(hdlr)[source]

Add the specified handler to this logger.

callHandlers(record)[source]

Pass a record to all relevant handlers.

Loop through all handlers for this logger and its parents in the logger hierarchy. If no handler was found, output a one-off error message to sys.stderr. Stop searching up the hierarchy whenever a logger with the “propagate” attribute set to zero is found - that will be the last logger whose handlers are called.

critical(msg, *args, **kwargs)[source]

Log ‘msg % args’ with severity ‘CRITICAL’.

To pass exception information, use the keyword argument exc_info with a true value, e.g.

logger.critical(“Houston, we have a %s”, “major disaster”, exc_info=1)

debug(msg, *args, **kwargs)[source]

Log ‘msg % args’ with severity ‘DEBUG’.

To pass exception information, use the keyword argument exc_info with a true value, e.g.

logger.debug(“Houston, we have a %s”, “thorny problem”, exc_info=1)

error(msg, *args, **kwargs)[source]

Log ‘msg % args’ with severity ‘ERROR’.

To pass exception information, use the keyword argument exc_info with a true value, e.g.

logger.error(“Houston, we have a %s”, “major problem”, exc_info=1)

exception(msg, *args, exc_info=True, **kwargs)[source]

Convenience method for logging an ERROR with exception information.

fatal(msg, *args, **kwargs)

Log ‘msg % args’ with severity ‘CRITICAL’.

To pass exception information, use the keyword argument exc_info with a true value, e.g.

logger.critical(“Houston, we have a %s”, “major disaster”, exc_info=1)

findCaller(stack_info=False)[source]

Find the stack frame of the caller so that we can note the source file name, line number and function name.

getChild(suffix)[source]

Get a logger which is a descendant to this one.

This is a convenience method, such that

logging.getLogger(‘abc’).getChild(‘def.ghi’)

is the same as

logging.getLogger(‘abc.def.ghi’)

It’s useful, for example, when the parent logger is named using __name__ rather than a literal string.

getEffectiveLevel()[source]

Get the effective level for this logger.

Loop through this logger and its parents in the logger hierarchy, looking for a non-zero logging level. Return the first one found.

handle(record)[source]

Call the handlers for the specified record.

This method is used for unpickled records received from a socket, as well as those created locally. Logger-level filtering is applied.

hasHandlers()[source]

See if this logger has any handlers configured.

Loop through all handlers for this logger and its parents in the logger hierarchy. Return True if a handler was found, else False. Stop searching up the hierarchy whenever a logger with the “propagate” attribute set to zero is found - that will be the last logger which is checked for the existence of handlers.

info(msg, *args, **kwargs)[source]

Log ‘msg % args’ with severity ‘INFO’.

To pass exception information, use the keyword argument exc_info with a true value, e.g.

logger.info(“Houston, we have a %s”, “interesting problem”, exc_info=1)

isEnabledFor(level)[source]

Is this logger enabled for level ‘level’?

log(level, msg, *args, **kwargs)[source]

Log ‘msg % args’ with the integer severity ‘level’.

To pass exception information, use the keyword argument exc_info with a true value, e.g.

logger.log(level, “We have a %s”, “mysterious problem”, exc_info=1)

makeRecord(name, level, fn, lno, msg, args, exc_info, func=None, extra=None, sinfo=None)[source]

A factory method which can be overridden in subclasses to create specialized LogRecords.

manager = <logging.Manager object>
removeHandler(hdlr)[source]

Remove the specified handler from this logger.

root = <RootLogger root (WARNING)>
setLevel(level)[source]

Set the logging level of this logger. level must be an int or a str.

warn(msg, *args, **kwargs)[source]
warning(msg, *args, **kwargs)[source]

Log ‘msg % args’ with severity ‘WARNING’.

To pass exception information, use the keyword argument exc_info with a true value, e.g.

logger.warning(“Houston, we have a %s”, “bit of a problem”, exc_info=1)

class logging.LoggerAdapter(logger, extra)[source]

Bases: object

An adapter for loggers which makes it easier to specify contextual information in logging output.

critical(msg, *args, **kwargs)[source]

Delegate a critical call to the underlying logger.

debug(msg, *args, **kwargs)[source]

Delegate a debug call to the underlying logger.

error(msg, *args, **kwargs)[source]

Delegate an error call to the underlying logger.

exception(msg, *args, exc_info=True, **kwargs)[source]

Delegate an exception call to the underlying logger.

getEffectiveLevel()[source]

Get the effective level for the underlying logger.

hasHandlers()[source]

See if the underlying logger has any handlers.

info(msg, *args, **kwargs)[source]

Delegate an info call to the underlying logger.

isEnabledFor(level)[source]

Is this logger enabled for level ‘level’?

log(level, msg, *args, **kwargs)[source]

Delegate a log call to the underlying logger, after adding contextual information from this adapter instance.

manager
name
process(msg, kwargs)[source]

Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.

Normally, you’ll only need to override this one method in a LoggerAdapter subclass for your specific needs.

setLevel(level)[source]

Set the specified level on the underlying logger.

warn(msg, *args, **kwargs)[source]
warning(msg, *args, **kwargs)[source]

Delegate a warning call to the underlying logger.

class logging.NullHandler(level=0)[source]

Bases: logging.Handler

This handler does nothing. It’s intended to be used to avoid the “No handlers could be found for logger XXX” one-off warning. This is important for library code, which may contain code to log events. If a user of the library does not configure logging, the one-off warning might be produced; to avoid this, the library developer simply needs to instantiate a NullHandler and add it to the top-level logger of the library module or package.

createLock()[source]

Acquire a thread lock for serializing access to the underlying I/O.

emit(record)[source]

Stub.

handle(record)[source]

Stub.

class logging.StreamHandler(stream=None)[source]

Bases: logging.Handler

A handler class which writes logging records, appropriately formatted, to a stream. Note that this class does not close the stream, as sys.stdout or sys.stderr may be used.

emit(record)[source]

Emit a record.

If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.

flush()[source]

Flushes the stream.

setStream(stream)[source]

Sets the StreamHandler’s stream to the specified value, if it is different.

Returns the old stream, if the stream was changed, or None if it wasn’t.

terminator = '\n'
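As a sketch, any object with write() and flush() methods can serve as the stream; an io.StringIO makes the formatting and the terminator visible:

```python
import io
import logging

buf = io.StringIO()                     # any write()-able object works
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))

logger = logging.getLogger("stream-demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("hello")
# Each record is written followed by the terminator attribute ('\n').
```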
logging.addLevelName(level, levelName)[source]

Associate ‘levelName’ with ‘level’.

This is used when converting levels to text during message formatting.
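A sketch, using a hypothetical NOTICE level between INFO (20) and WARNING (30); getLevelName then returns the registered name, and falls back to "Level %s" for unregistered values:

```python
import logging

NOTICE = 25                      # hypothetical custom level
logging.addLevelName(NOTICE, "NOTICE")

logging.getLevelName(NOTICE)     # the name just associated: "NOTICE"
logging.getLevelName(15)         # no name registered: "Level 15"
```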

logging.basicConfig(**kwargs)[source]

Do basic configuration for the logging system.

This function does nothing if the root logger already has handlers configured. It is a convenience method intended for use by simple scripts to do one-shot configuration of the logging package.

The default behaviour is to create a StreamHandler which writes to sys.stderr, set a formatter using the BASIC_FORMAT format string, and add the handler to the root logger.

A number of optional keyword arguments may be specified, which can alter the default behaviour.

filename: Specifies that a FileHandler be created, using the specified filename, rather than a StreamHandler.

filemode: Specifies the mode to open the file, if filename is specified (if filemode is unspecified, it defaults to ‘a’).

format: Use the specified format string for the handler.

datefmt: Use the specified date/time format.

style: If a format string is specified, use this to specify the type of format string (possible values ‘%’, ‘{‘, ‘$’, for %-formatting, str.format() and string.Template respectively; defaults to ‘%’).

level: Set the root logger level to the specified level.

stream: Use the specified stream to initialize the StreamHandler. Note that this argument is incompatible with ‘filename’; if both are present, ‘stream’ is ignored.

handlers: If specified, this should be an iterable of already created handlers, which will be added to the root logger. Any handler in the list which does not have a formatter assigned will be assigned the formatter created in this function.

Note that you could specify a stream created using open(filename, mode) rather than passing the filename and mode in. However, it should be remembered that StreamHandler does not close its stream (since it may be using sys.stdout or sys.stderr), whereas FileHandler closes its stream when the handler is closed.

Changed in version 3.2: Added the style parameter.

Changed in version 3.3: Added the handlers parameter. A ValueError is now thrown for incompatible arguments (e.g. handlers specified together with filename/filemode, or filename/filemode specified together with stream, or handlers specified together with stream).

logging.captureWarnings(capture)[source]

If capture is true, redirect all warnings to the logging package. If capture is False, ensure that warnings are not redirected to logging but to their original destinations.
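Captured warnings are logged to the ‘py.warnings’ logger; a sketch with a small list-collecting handler (an illustrative helper, not part of the module):

```python
import logging
import warnings

messages = []

class ListHandler(logging.Handler):
    def emit(self, record):
        messages.append(record.getMessage())

# While capture is on, warnings.warn(...) is routed to the 'py.warnings'
# logger (at WARNING level) instead of going to its usual destination.
logging.captureWarnings(True)
logging.getLogger("py.warnings").addHandler(ListHandler())
warnings.warn("old_api() is deprecated")
logging.captureWarnings(False)   # restore normal warning delivery
```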

logging.critical(msg, *args, **kwargs)[source]

Log a message with severity ‘CRITICAL’ on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format.

logging.debug(msg, *args, **kwargs)[source]

Log a message with severity ‘DEBUG’ on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format.

logging.disable(level=50)[source]

Disable all logging calls of severity ‘level’ and below.

logging.error(msg, *args, **kwargs)[source]

Log a message with severity ‘ERROR’ on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format.

logging.exception(msg, *args, exc_info=True, **kwargs)[source]

Log a message with severity ‘ERROR’ on the root logger, with exception information. If the logger has no handlers, basicConfig() is called to add a console handler with a pre-defined format.

logging.fatal(msg, *args, **kwargs)

Log a message with severity ‘CRITICAL’ on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format.

logging.getLevelName(level)[source]

Return the textual representation of logging level ‘level’.

If the level is one of the predefined levels (CRITICAL, ERROR, WARNING, INFO, DEBUG) then you get the corresponding string. If you have associated levels with names using addLevelName then the name you have associated with ‘level’ is returned.

If a numeric value corresponding to one of the defined levels is passed in, the corresponding string representation is returned.

Otherwise, the string “Level %s” % level is returned.

logging.getLogger(name=None)[source]

Return a logger with the specified name, creating it if necessary.

If no name is specified, return the root logger.
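Loggers are singletons per name, and dotted names form a hierarchy; a brief sketch with illustrative names:

```python
import logging

root = logging.getLogger()           # no name: the root logger
a = logging.getLogger("app")
b = logging.getLogger("app")         # same name returns the same object
child = logging.getLogger("app.db")  # dotted names form a hierarchy
```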

logging.getLoggerClass()[source]

Return the class to be used when instantiating a logger.

logging.info(msg, *args, **kwargs)[source]

Log a message with severity ‘INFO’ on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format.

logging.log(level, msg, *args, **kwargs)[source]

Log ‘msg % args’ with the integer severity ‘level’ on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format.

logging.makeLogRecord(dict)[source]

Make a LogRecord whose attributes are defined by the specified dictionary. This function is useful for converting a logging event received over a socket connection (which is sent as a dictionary) into a LogRecord instance.
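A sketch with an illustrative dictionary of record attributes, as might be received from a SocketHandler peer:

```python
import logging

# Rebuild a LogRecord from a plain dict of attributes.
record = logging.makeLogRecord({
    "name": "remote",
    "levelno": logging.WARNING,
    "levelname": "WARNING",
    "msg": "disk usage at %d%%",
    "args": (91,),
})
record.getMessage()   # msg % args
```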

logging.setLoggerClass(klass)[source]

Set the class to be used when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__().
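A sketch of a conforming subclass (the class name and extra attribute are hypothetical), registered, exercised via getLogger, then unregistered:

```python
import logging

class AuditLogger(logging.Logger):
    # Per the contract: only a name argument, and Logger.__init__ is called.
    def __init__(self, name):
        logging.Logger.__init__(self, name)
        self.audit_trail = []            # hypothetical extra state

logging.setLoggerClass(AuditLogger)
log = logging.getLogger("audited")       # created via the registered class
logging.setLoggerClass(logging.Logger)   # restore the default
```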

logging.shutdown(handlerList=_handlerList)[source]

Perform any cleanup actions in the logging system (e.g. flushing buffers).

Should be called at application exit.

logging.warn(msg, *args, **kwargs)[source]
logging.warning(msg, *args, **kwargs)[source]

Log a message with severity ‘WARNING’ on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format.

logging.getLogRecordFactory()[source]

Return the factory to be used when instantiating a log record.

logging.setLogRecordFactory(factory)[source]

Set the factory to be used when instantiating a log record.

Parameters: factory – A callable which will be called to instantiate a log record.
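A common pattern is to wrap the previous factory rather than replace it outright; a sketch stamping a hypothetical `request_id` attribute onto every record:

```python
import logging

old_factory = logging.getLogRecordFactory()

def factory(*args, **kwargs):
    # Delegate to the previous factory, then stamp an extra attribute.
    record = old_factory(*args, **kwargs)
    record.request_id = "req-42"         # hypothetical contextual field
    return record

logging.setLogRecordFactory(factory)
rec = logging.getLogRecordFactory()("demo", logging.INFO, __file__, 1,
                                    "hi", None, None)
logging.setLogRecordFactory(old_factory)   # restore the default
```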

This is an interface to Python’s internal parser.

exception parser.ParserError

Bases: Exception

parser.STType

alias of parser.st

parser.compilest()

Compiles an ST object into a code object.

parser.expr()

Creates an ST object from an expression.

parser.isexpr()

Determines if an ST object was created from an expression.

parser.issuite()

Determines if an ST object was created from a suite.

parser.sequence2st()

Creates an ST object from a tree representation.

parser.st2list()

Creates a list-tree representation of an ST.

parser.st2tuple()

Creates a tuple-tree representation of an ST.

parser.suite()

Creates an ST object from a suite.

parser.tuple2st()

Creates an ST object from a tree representation.
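A sketch of the expr()/compilest() round trip. Note that the parser module was deprecated in Python 3.9 and removed in 3.10, so the import is guarded here:

```python
# Build an ST from an expression string, check its kind, compile it, and
# evaluate the resulting code object. Falls back gracefully where the
# parser module no longer exists.
try:
    import parser
    st = parser.expr("a + 1")        # ST object from an expression
    assert parser.isexpr(st)         # it came from expr(), not suite()
    code = parser.compilest(st)      # compile the ST into a code object
    result = eval(code, {"a": 41})
except ImportError:
    result = None                    # interpreter without the parser module
```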