21st century logging
I think logging should get more attention than we are currently giving it. When designing an application, a great deal of effort goes into modelling the customer's business logic, making sure all use cases are covered and handled properly. The business model is mapped to a persistence store (be it an RDBMS or a NoSQL solution), and frameworks are chosen: web, middleware, batch jobs, and probably SLF4J with Log4j or Logback.
This has been the case for almost all the applications I've been involved with, and logging has always been a second-class citizen, relying on good old string-based logging frameworks.
But recently I have come to realize there is much more to logging than what the current string-based logging systems offer. Especially if my system gets deployed in the cloud and takes advantage of auto-scaling, then gathering text files and aggregating them into a common place smells like a hack.
In my latest application we implemented a notification mechanism that holds more complex information, since the string-based log wasn't sufficient. I have to thank one of my colleagues, who opened my eyes when he said, “Notifications are at the heart of our application.” I had never thought of logging as the heart of any application; business logic is the heart of the application, not logging. But there is a lot of truth in his words, since you can't deploy something without a good mechanism for knowing whether your system is actually doing what it was meant to do.
So my notifications are complex objects (debug ones holding less data than error ones), and a NoSQL document database is a perfect store for our logs. A notification contains all sorts of data (see the sketch after this list):
- the current executing job,
- the source of data,
- the component where the log originated,
- exceptions being thrown,
- input arguments,
- the message history of the Spring Integration Message carrying our request.
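To make this more concrete, here is a minimal sketch of what such a notification object could look like in Java. The class and field names are purely illustrative assumptions, not the actual implementation we used:

```java
import java.time.Instant;
import java.util.List;
import java.util.Map;

/**
 * A structured log entry, richer than a plain string message.
 * All names here are illustrative, not the real implementation.
 */
public class Notification {

    public enum Level { DEBUG, INFO, WARN, ERROR }

    private final Level level;
    private final Instant createdOn;
    private final String job;                  // the currently executing job
    private final String source;               // the source of data
    private final String component;            // the component where the log originated
    private final String exceptionStackTrace;  // populated for error notifications only
    private final Map<String, Object> inputArguments;
    private final List<String> messageHistory; // Spring Integration message history entries

    public Notification(Level level, String job, String source, String component,
                        String exceptionStackTrace,
                        Map<String, Object> inputArguments,
                        List<String> messageHistory) {
        this.level = level;
        this.createdOn = Instant.now();
        this.job = job;
        this.source = source;
        this.component = component;
        this.exceptionStackTrace = exceptionStackTrace;
        this.inputArguments = inputArguments;
        this.messageHistory = messageHistory;
    }

    // getters omitted for brevity
}
```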
Therefore, since I can store complex objects in a schema-less fashion, I can also query the logs, and the order in which they arrive doesn't matter since I can sort them by source and creation time. I can also have a scheduled job generate alerts and reports when too many error entries are detected.
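As an example of what such queries could look like, here is a sketch that assumes MongoDB as the document store and a hypothetical `notifications` collection with `level`, `source`, and `createdOn` fields; the actual store and schema may differ:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Sorts;
import org.bson.Document;

import java.util.Date;
import java.util.concurrent.TimeUnit;

public class NotificationQueries {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> notifications =
                client.getDatabase("logging").getCollection("notifications");

            // Fetch the latest ERROR notifications, ordered by source and creation time,
            // regardless of the order in which they were written.
            for (Document doc : notifications
                    .find(Filters.eq("level", "ERROR"))
                    .sort(Sorts.orderBy(Sorts.ascending("source"), Sorts.descending("createdOn")))
                    .limit(50)) {
                System.out.println(doc.toJson());
            }

            // A scheduled job could raise an alert when too many errors
            // were recorded in the last hour.
            Date oneHourAgo = new Date(System.currentTimeMillis() - TimeUnit.HOURS.toMillis(1));
            long recentErrors = notifications.countDocuments(
                Filters.and(Filters.eq("level", "ERROR"), Filters.gte("createdOn", oneHourAgo)));
            if (recentErrors > 100) {
                System.out.println("ALERT: " + recentErrors + " errors in the last hour");
            }
        }
    }
}
```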
This is a custom-built logging implementation, as we haven't been using a dedicated framework for our notifications, but I get more value out of it than from the classic string-based log files.
I still think Log4j and Logback are very good implementations, and we haven't replaced them; we've only added an extra logging feature to overcome their limitations. But even with the new Logback appenders, I still think the current string-based logs are way too simple for the requirements of production systems. And if you use them mostly for debugging purposes, while relying on additional monitoring solutions for production environments, then maybe it's time to use a smarter logging solution that works for both development and production environments.
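For instance, one way to bolt such structured logging onto Logback without replacing it is a custom appender that writes documents to the NoSQL store. This is only a sketch, with a hypothetical `NotificationStore` abstraction, not our actual notification mechanism:

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.spi.IThrowableProxy;
import ch.qos.logback.core.AppenderBase;
import org.bson.Document;

import java.util.Date;

/**
 * A sketch of a Logback appender that writes structured documents
 * to a document database instead of (or in addition to) flat text files.
 */
public class DocumentStoreAppender extends AppenderBase<ILoggingEvent> {

    /** Hypothetical abstraction over the document database. */
    public interface NotificationStore {
        void save(Document notification);
    }

    private NotificationStore store;

    public void setStore(NotificationStore store) {
        this.store = store;
    }

    @Override
    protected void append(ILoggingEvent event) {
        Document doc = new Document()
            .append("level", event.getLevel().toString())
            .append("component", event.getLoggerName())
            .append("message", event.getFormattedMessage())
            .append("createdOn", new Date(event.getTimeStamp()))
            .append("mdc", event.getMDCPropertyMap()); // job id, data source, etc. via MDC

        IThrowableProxy error = event.getThrowableProxy();
        if (error != null) {
            doc.append("exception", error.getClassName() + ": " + error.getMessage());
        }
        store.save(doc);
    }
}
```

Such an appender could be declared in logback.xml alongside the existing file appenders, so the string-based logs keep working while the structured documents accumulate in the document store.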
If that was difficult to implement ten years ago, when RDBMSs ruled the storage world and file-based logging was a good trade-off, I think we have the means to implement better logging frameworks now. The current “string-based file logging” model might have been sufficient when our server was scaling vertically on a single machine, but in a world of many horizontally distributed servers, this model requires extra processing.
Big players are already employing such new-generation logging systems: Facebook's Scribe and LinkedIn's Kafka log processing.
I really liked the LinkedIn solution, and it inspired me to reason about a new logging system working in a CQRS fashion, where log entries are events stored in a log database, and each event passes through a chain of handlers that update the current system state. This combines both logging and monitoring, and the monitoring queries go directly to a cached representation of the latest system state, which holds (a sketch follows the list below):
- alerts,
- status reports,
- monitoring views of the current system status.
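Here is a rough sketch of that idea, with entirely hypothetical types: each log entry is an event that gets persisted and then pushed through a chain of handlers, which maintain a cached system-state view that monitoring reads directly.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicLong;

/**
 * A sketch of a CQRS-style logging pipeline: log entries are events that are
 * persisted and then passed through a chain of handlers which maintain a
 * cached, queryable view of the current system state. All types are hypothetical.
 */
public class LoggingPipeline {

    /** The write side: a log entry modelled as an event. */
    public record LogEvent(String source, String component, String level, String message, long timestamp) {}

    /** Handlers update the read-side state from incoming events. */
    public interface LogEventHandler {
        void on(LogEvent event, SystemState state);
    }

    /** The read side: a cached representation of the latest system state. */
    public static class SystemState {
        public final Map<String, AtomicLong> errorCountsBySource = new ConcurrentHashMap<>();
        public final List<String> alerts = new CopyOnWriteArrayList<>();
        public volatile long lastEventTimestamp;
    }

    private final List<LogEventHandler> handlers = new CopyOnWriteArrayList<>();
    private final SystemState state = new SystemState();

    public void register(LogEventHandler handler) {
        handlers.add(handler);
    }

    /** The command side: store the event, then let every handler update the state. */
    public void publish(LogEvent event) {
        // persistEvent(event); // e.g. append to the log database (omitted here)
        for (LogEventHandler handler : handlers) {
            handler.on(event, state);
        }
    }

    /** The query side: monitoring reads the cached state, not the raw log. */
    public SystemState currentState() {
        return state;
    }

    public static void main(String[] args) {
        LoggingPipeline pipeline = new LoggingPipeline();

        // A handler that counts errors per source and raises an alert on a threshold.
        pipeline.register((event, state) -> {
            state.lastEventTimestamp = event.timestamp();
            if ("ERROR".equals(event.level())) {
                long errors = state.errorCountsBySource
                    .computeIfAbsent(event.source(), s -> new AtomicLong())
                    .incrementAndGet();
                if (errors >= 3) {
                    state.alerts.add("Too many errors from " + event.source());
                }
            }
        });

        for (int i = 0; i < 3; i++) {
            pipeline.publish(new LogEvent("order-service", "PaymentGateway", "ERROR",
                "Payment rejected", System.currentTimeMillis()));
        }
        System.out.println(pipeline.currentState().alerts);
    }
}
```

In this model the write side only appends events, while the alerts, status reports, and monitoring views are read-side projections kept up to date by the handler chain.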
How does it sound to you? Is it worth implementing such a solution? Should we start a new open-source, new-generation logging project?
That sounds awesome! Then you would have it all in one scope: logs, status, and the life cycle of the application.
Why do the work again?
Have you checked out graylog2? If it is missing something, we would very much like to hear what it is.
Be sure to look at the preview versions if you do it at all.
Disclaimer: I’m working on it, also commercially, but it is free and open-source software.
Hi,
Graylog2 sounds really interesting. I was thinking of something like it, since the storage is in the cloud and it also offers an interface for monitoring and analysis. I like it because other replies pointed to using Hadoop, but that’s too much for simple to medium-sized apps.
Vlad