Logging is a vital aspect of software development and maintenance. It helps the developer pinpoint bugs, errors, points of failure and potential vulnerabilities in an application. So it is no surprise that Django ships with logging support out of the box.
Django uses Python's built-in logging module to perform system logging, so you can find detailed documentation in Python's own docs as well as in Django's excellent docs. Before we get into its usage in Django, let's get to know Python's logging framework itself.
A Python logging configuration consists of four parts:
A Logger is an entry point into the logging system. You can think of loggers as containers into which messages, or log records, are pushed for processing.
A Logger also has a log level, which describes the severity of the messages it will handle. The log levels that can be assigned to messages, in increasing order of severity, are as follows:

DEBUG: low-level system information for debugging purposes
INFO: general system information
WARNING: information describing a minor problem that has occurred
ERROR: information describing a major problem that has occurred
CRITICAL: information describing a critical problem that has occurred
Each message that is written to the logger is a Log Record. Each log record also has a log level indicating the severity of that specific message. A log record can also contain useful metadata that describes the event that is being logged. This can include details such as a stack trace or an error code.
When a message is given to the logger, the log level of the message is compared to the log level of the logger. If the log level of the message meets or exceeds the log level of the logger itself, the message will undergo further processing. If it doesn’t, the message will be ignored.
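This level comparison is easy to see in a small standalone sketch (the 'demo' logger name here is arbitrary, chosen just for illustration):

```python
import logging

# A logger set to WARNING: messages below that level are discarded
logger = logging.getLogger('demo')
logger.setLevel(logging.WARNING)
logger.addHandler(logging.StreamHandler())

logger.debug('Ignored: DEBUG (10) is below WARNING (30)')
logger.error('Processed: ERROR (40) meets or exceeds WARNING (30)')
```

Running this prints only the ERROR line; the DEBUG call never makes it past the logger's level check.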
Once the logger is done with a message, it passes the message to a Handler.
The handler is the brains of the operation: it determines what action to take for each message in the logger, deciding whether to write it to the console, to a file or to a network stream.
Just like loggers, handlers also have a log level. The handler checks whether a particular log record meets or exceeds the handler's own level; if it does not, the handler gracefully ignores the message and moves on.
Multiple handlers can be registered with a logger, each with a different log level. This way we can achieve different notification pathways depending on the gravity of the log message. For example, we could mail the web admin for logs with level ERROR or CRITICAL, while a second handler appends messages with lower levels to a file for later inspection and analysis.
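A sketch of that idea using the plain logging API (the 'shop' logger name and 'app.log' file name are invented for illustration; in a real Django project this configuration would normally live in the LOGGING setting instead):

```python
import logging

logger = logging.getLogger('shop')
logger.setLevel(logging.DEBUG)

# Handler 1: everything from DEBUG up goes to a file for later analysis
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

# Handler 2: only ERROR and CRITICAL reach the console
# (in production this could be an SMTPHandler mailing the admin instead)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.ERROR)

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.info('Written to app.log only')
logger.critical('Written to app.log AND the console')
```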
As the name suggests, a Filter checks for certain conditions on a log record and acts accordingly. By default, handling is driven by the log level of the message alone. By installing a filter, however, we can place additional criteria on the logging process, giving finer control over which log records are passed from logger to handler. For example, a filter could prevent ERROR messages originating from a particular source from being mailed to the webmaster.
Filters can also be used to modify the logging record prior to being emitted. For example, you could write a filter that downgrades ERROR log records to WARNING records if a particular set of criteria are met.
Filters can be installed on loggers or on handlers; multiple filters can be used in a chain to perform multiple filtering actions.
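As a rough sketch, a filter like the ERROR-downgrading example above could look like this (the 'noisy' module name prefix is purely hypothetical):

```python
import logging

class DowngradeErrors(logging.Filter):
    """Downgrade ERROR records from a noisy source to WARNING.

    The 'noisy' name prefix is illustrative, not a real module.
    """
    def filter(self, record):
        if record.levelno == logging.ERROR and record.name.startswith('noisy'):
            record.levelno = logging.WARNING
            record.levelname = 'WARNING'
        return True  # returning False would drop the record entirely

# Install the filter on a logger (it could equally go on a handler)
logger = logging.getLogger('noisy.submodule')
logger.addFilter(DowngradeErrors())
```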
Ultimately, a log record needs to be rendered as text. Formatters describe the exact format of that text. A formatter usually consists of a Python formatting string containing LogRecord attributes; however, you can also write custom formatters to implement specific formatting behavior.
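For instance, a minimal formatter built from LogRecord attributes might look like this (the exact format string is just one possibility):

```python
import logging

# Describe the output text using LogRecord attributes
formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')

handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger('formatted')
logger.addHandler(handler)
logger.warning('low disk space')
```

Each record rendered by this handler comes out as a timestamp, the level name in brackets, the logger name and the message.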
After configuring our loggers, handlers, filters and formatters (explained later), we need to place logging calls at appropriate locations in our code. Using the logging framework is simple:
```python
# Python Code
# import the logging library
import logging

# Get an instance of a logger
logger = logging.getLogger(__name__)

def my_view(request, arg1, arg2):
    # Do something here... yes, anything!
    if bad_apples:
        # Log an error message
        logger.error('Some bad apples in the pile!!!')
```
That's all! Every time the bad_apples condition is triggered, a log record with log level ERROR is written to the location specified by the handler.
The logging.getLogger(__name__) call obtains (creating first, if necessary) an instance of a logger. The __name__ argument is the name of the Python module, which helps you locate the module easily; but if you have a better naming scheme to identify your logger, you can use that instead (as a dot-separated string).
```python
# Python Code
# Get an instance of a specific named logger
logger = logging.getLogger('universe.galaxy.star')
```
The dotted paths of logger names define a hierarchy. The universe.galaxy logger is considered a parent of the universe.galaxy.star logger; the universe logger is a parent of the universe.galaxy logger.
Why is the hierarchy important? Because loggers can be set to propagate their logging calls to their parents. In this way, you can define a single set of handlers at the root of a logger tree, and capture all logging calls in the subtree of loggers. A logging handler defined in the universe namespace will catch all logging messages issued on the universe.galaxy and universe.galaxy.star loggers.
This propagation can be controlled on a per-logger basis. If you don’t want a particular logger to propagate to its parents, you can turn off this behaviour.
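A quick sketch of turning propagation off for a single logger, using the universe/galaxy names from above:

```python
import logging

parent = logging.getLogger('universe')
child = logging.getLogger('universe.galaxy')

# By default a logger hands its records up to its ancestors' handlers
assert child.propagate is True

# Turn that off: records logged on 'universe.galaxy' will no longer
# reach handlers attached to the 'universe' logger
child.propagate = False
```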
The logger instance contains an entry method for each of the default log levels:
logger.debug()
logger.info()
logger.warning()
logger.error()
logger.critical()
There are two other logging calls available:
logger.log()
: Manually emits a logging message with a specific log level.
logger.exception()
: Creates an ERROR level logging message wrapping the current exception stack frame.
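A brief sketch of both calls (the messages and the deliberate ZeroDivisionError are invented for illustration):

```python
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler())

# logger.log() takes the log level as an explicit argument
logger.log(logging.WARNING, 'Disk space is running low')

# logger.exception() belongs inside an except block: it logs at
# ERROR level and appends the current traceback to the message
try:
    1 / 0
except ZeroDivisionError:
    logger.exception('Division failed')
```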
Just placing logging calls in your code is not enough. We also need to configure the loggers, handlers, filters and formatters described above, because they decide what action is taken and how the message is formatted when displayed or written to the appropriate output.
Python’s logging library provides several techniques to configure logging, ranging from a programmatic interface to configuration files. By default, Django uses the dictConfig format.
To configure logging, you use the LOGGING setting to define a dictionary of logging settings. These settings describe the loggers, handlers, filters and formatters that you want in your logging setup, together with the log levels and other properties you want those components to have.
By default, the LOGGING setting is merged with Django’s default logging configuration using the following scheme.
If the disable_existing_loggers key in the LOGGING dictConfig is set to True (which is the default) then all loggers from the default configuration will be disabled. Disabled loggers are not the same as removed; the logger will still exist, but will silently discard anything logged to it, not even propagating entries to a parent logger. Thus you should be very careful using 'disable_existing_loggers': True; it’s probably not what you want. Instead, you can set disable_existing_loggers to False and redefine some or all of the default loggers; or you can set LOGGING_CONFIG to None and handle logging config yourself.
Logging is configured as part of the general Django setup() function. Therefore, you can be certain that loggers are always ready for use in your project code.
The following example is just to give you a taste of the dictConfig format. For a detailed documentation of the same please visit the dictConfig format documentation.
```python
# Python Code
# myproject/myproject/settings.py

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/path/to/django/debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}
```
PS: Don't forget to change /path/to/django/ to a location where the Django application has permission to create and modify files.
We develop web applications for our customers using Python/Django/Angular.
Contact us at hello@cowhite.com