
Java Logging

Logging with Spring AOP (before/after advice) - work I did in ONAP

Logback in Spring Boot

Logging With Spring AOP

Prototyped AOP advice with minimal client changes - only an aspect bean and annotations are required

import javax.servlet.http.HttpServletRequest;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class ApplicationService implements ApplicationServiceLocal {
    public Boolean health(HttpServletRequest servletRequest) {
        Boolean health = true;
        // TODO: check database
        // Log outside the AOP framework - to simulate existing component logs between the ENTRY/EXIT markers
        LoggerFactory.getLogger(this.getClass()).info("Running /health");
        return health;
    }
}

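The "aspect bean" registration mentioned above can be sketched as a plain Spring @Configuration class - a sketch assuming spring-boot-starter-aop is on the classpath; the LoggingConfig class name is illustrative, not from the original project:

```java
package cloud.obrienlabs.demo.logging;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

// illustrative wiring - registers the aspect as a bean and enables
// AspectJ auto-proxying so the @Before/@After advice is applied
@Configuration
@EnableAspectJAutoProxy
public class LoggingConfig {
    @Bean
    public LoggingAspect loggingAspect() {
        return new LoggingAspect();
    }
}
```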
Aspect References

package cloud.obrienlabs.demo.logging;

import javax.servlet.http.HttpServletRequest;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.LoggerFactory;
import cloud.obrienlabs.logging.LogAdapter;

@Aspect
public class LoggingAspect {

    @Before("execution(* cloud.obrienlabs.*.*(..))")
    public void logBefore(JoinPoint joinPoint) {
        Object[] args = joinPoint.getArgs();
        Object servletRequest = args[0];
        LogAdapter.HttpServletRequestAdapter requestAdapter =
                new LogAdapter.HttpServletRequestAdapter((HttpServletRequest) servletRequest);
        // the constructor arguments were truncated in the original capture;
        // an SLF4J Logger for the target class is assumed here
        final LogAdapter adapter = new LogAdapter(
                LoggerFactory.getLogger(joinPoint.getTarget().getClass()));
        try {
            adapter.entering(requestAdapter); // assumed LogAdapter API - writes the ENTRY marker
        } finally {
            // MDC cleanup would go here (body truncated in the original)
        }
    }

    @After("execution(* cloud.obrienlabs.*.*(..))")
    public void logAfter(JoinPoint joinPoint) {
        final LogAdapter adapter = new LogAdapter(
                LoggerFactory.getLogger(joinPoint.getTarget().getClass()));
        adapter.exiting(); // assumed LogAdapter API - writes the EXIT marker
    }
}

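Spring AOP applies this advice at runtime via dynamic proxies; the before/after mechanics can be sketched with a plain JDK proxy, no Spring required (the HealthService interface, withEntryExit helper, and log markers below are illustrative, not part of the original project):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // illustrative service interface - stands in for ApplicationServiceLocal
    public interface HealthService {
        boolean health();
    }

    // wraps every interface call with ENTRY/EXIT markers, the same shape
    // as the @Before/@After advice pair above
    public static <T> T withEntryExit(Class<T> iface, T target, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("ENTRY ").append(method.getName()).append('\n');
            try {
                return method.invoke(target, args);
            } finally {
                log.append("EXIT ").append(method.getName()).append('\n');
            }
        };
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler));
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        HealthService service = withEntryExit(HealthService.class,
                () -> { log.append("Running /health\n"); return true; }, log);
        service.health();
        // the target's own log line lands between the generated markers
        System.out.print(log);
    }
}
```

Calling `service.health()` produces `ENTRY health`, the target's own `Running /health` line, then `EXIT health` - the same "logs for free" effect the aspect gives existing components.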
Logging Results

Use Case: Single REST call - with ENTRY/EXIT Markers around in-method log

The key here is that you get logs for free: the ENTRY/EXIT lines are generated by the aspect, while the line in the middle comes from existing Java application code.

ELK Stack

Log ELK Stack Kubernetes Troubleshooting

As an aside, at the Linux Foundation we had a similar issue where the logging system itself caused cluster instability - this was back in Kubernetes 1.7, before we used resource limits properly. In our case the ELK stack - specifically the Logstash container - would saturate its EC2 instance due to excessive logging, not just from healthcheck logs but from other rogue containers, such as ones configured as a DaemonSet processing jobs in parallel (essentially using all the cores on the worker nodes).
One workaround for avoiding pod rescheduling in that case was limiting the vCores directly in the Logstash native config (outside of the chart resource config) to slow down indexing per node.
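That native throttle can be sketched in logstash.yml - a sketch assuming the standard Logstash settings, where pipeline.workers defaults to the host's core count (the values shown are illustrative, not the actual ones we used):

```yaml
# logstash.yml - cap worker threads below the node's core count
# to throttle indexing throughput per node
pipeline.workers: 2
pipeline.batch.size: 125
```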

In our case only a minority of pods - the DBaaS at the top of the 200-pod deployment - were the cause of the excessive indexing.

Takeaway: the common ELK service itself needed resource limits in order to handle peak application traffic.
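The takeaway can be sketched as standard Kubernetes resource limits on the Logstash container (a fragment of a Deployment spec; the names and values are illustrative, not the actual chart values):

```yaml
# deployment fragment - bound the Logstash container so a logging
# spike cannot saturate the worker node
containers:
  - name: logstash
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 4Gi
```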
