1. About logging
Whether you are developing or operating a project, logs are indispensable. They record the system's running state, errors and important events, and give developers valuable material for debugging and analysis.
2. Log levels
2.1 DEBUG:
- Debug level: detailed diagnostic output, normally used during development and testing.
2.2 INFO:
- Informational level: general messages indicating that the system is running normally.
2.3 WARNING:
- Warning level: potential problems or unusual situations that do not affect normal operation.
2.4 ERROR:
- Error level: serious errors that may affect normal operation.
2.5 CRITICAL:
- Critical level: very severe errors, usually ones that crash the system or keep it from running.
Severity order: CRITICAL > ERROR > WARNING > INFO > DEBUG
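These names map onto SLF4J, the logging facade Spring Boot uses by default; SLF4J calls WARNING simply WARN and has no CRITICAL level (ERROR is its most severe). A minimal sketch of emitting each level:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LevelDemo {
    private static final Logger log = LoggerFactory.getLogger(LevelDemo.class);

    public static void main(String[] args) {
        log.debug("detailed diagnostics, usually only enabled in dev/test");
        log.info("normal business event, e.g. order {} created", 10001);
        log.warn("something unusual, but the request still succeeded");
        log.error("operation failed", new IllegalStateException("demo"));
        // SLF4J has no CRITICAL; use ERROR for the most severe problems.
    }
}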
3. Logging setup in Spring Boot
3.1 Logging requirements
- Setting up Spring Boot, MyBatis, the logging framework and so on is not covered here; build that yourself.
- The log levels were introduced above, but in day-to-day development the two you actually use most are INFO and ERROR.
- The main thing shared here is how to get the SQL log output recorded at INFO.
3.2 logback-spring.xml
- In real business development and debugging, INFO and ERROR cover most needs; having the SQL printed as well is even better.
- The logback-spring.xml below (placed under src/main/resources so Spring Boot picks it up) defines INFO, WARN, ERROR and DEBUG file appenders, each writing to its own directory.
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds" debug="false">
<contextName>logback</contextName>
<property name="log.path" value="log"/>
<property name="console_log_pattern"
value="%black(%contextName-) %red(%d{yyyy-MM-dd HH:mm:ss}) %green([%thread]) %green([%X{traceId}]) %highlight(%-5level) %boldMagenta(%logger{36}) - %blue(%msg%n)"/>
<property name="charset" value="UTF-8"/>
<!-- Console output -->
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
<encoder>
<pattern>${console_log_pattern}</pattern>
</encoder>
</appender>
<!-- File output, INFO level only -->
<appender name="info_file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<!-- <pattern>${console_log_pattern}</pattern>-->
<pattern>%n%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] [%logger{50}] %n%-5level: %msg%n</pattern>
<charset>${charset}</charset>
</encoder>
<!-- Rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- Daily archive path and file name pattern -->
<fileNamePattern>${log.path}/info/info-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>50MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- Days of log history to keep -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>INFO</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<appender name="warn_file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<!-- <pattern>${console_log_pattern}</pattern>-->
<pattern>%n%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] [%logger{50}] %n%-5level: %msg%n</pattern>
<charset>${charset}</charset>
</encoder>
<!-- Rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- Daily archive path and file name pattern -->
<fileNamePattern>${log.path}/warn/warn-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>50MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- Days of log history to keep -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<!-- This file only records WARN level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>WARN</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<appender name="error_file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<!-- <pattern>${console_log_pattern}</pattern>-->
<pattern>%n%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] [%logger{50}] %n%-5level: %msg%n</pattern>
<charset>${charset}</charset>
</encoder>
<!-- Rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- Daily archive path and file name pattern -->
<fileNamePattern>${log.path}/error/error-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>50MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- Days of log history to keep -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<!-- This file only records ERROR level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- File output, DEBUG level only -->
<appender name="debug_file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<!-- <pattern>${console_log_pattern}</pattern>-->
<pattern>%n%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] [%logger{50}] %n%-5level: %msg%n</pattern>
<charset>${charset}</charset>
</encoder>
<!-- Rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- Daily archive path and file name pattern -->
<fileNamePattern>${log.path}/debug/debug-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>50MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- Days of log history to keep -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>DEBUG</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<root level="info">
<appender-ref ref="console"/>
<appender-ref ref="info_file"/>
<appender-ref ref="warn_file"/>
<appender-ref ref="error_file"/>
<appender-ref ref="debug_file"/>
</root>
<logger name="com.xxx.xxx.mapper" level="DEBUG"/>
</configuration>
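Spring Boot picks this file up automatically when it is named logback-spring.xml and sits on the classpath (src/main/resources). A quick, hypothetical way to check that each appender writes to its own directory is to emit one event per level at startup; note that with the root logger at INFO, a DEBUG event only reaches the debug file if it comes from a logger that the <logger> entry raises to DEBUG:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LogSmokeTest {

    private static final Logger log = LoggerFactory.getLogger(LogSmokeTest.class);
    // Same name as the <logger name="com.xxx.xxx.mapper" level="DEBUG"/> entry, so DEBUG is not dropped.
    private static final Logger mapperLog = LoggerFactory.getLogger("com.xxx.xxx.mapper");

    @Bean
    CommandLineRunner logLevels() {
        return args -> {
            mapperLog.debug("this lands in log/debug/");
            log.info("this lands in log/info/");
            log.warn("this lands in log/warn/");
            log.error("this lands in log/error/");
        };
    }
}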
Now look at the result and the directory layout; the two screenshots show the log in the debug directory and an excerpt from the info directory.
The SQL statements are printed in the debug log, not the info log, while the goal is to have them in the info log.
With logback configured this way, the SQL printed by MyBatis' own statement logging is always at **DEBUG** level (that is the level MyBatis uses internally), so it can only end up in the debug directory.
3.3 SqlLogInterceptor
- Integrating Spring Boot with MySQL and so on is up to you. Below is the Druid datasource configuration for MySQL; it is not complete and is for reference only.
spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    #driver-class-name: org.postgresql.Driver
    #driver-class-name: oracle.jdbc.OracleDriver
    #driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver
    #driver-class-name: dm.jdbc.driver.DmDriver
    druid:
      # Validation query for MySQL, PostgreSQL, SQL Server and DaMeng
      validation-query: select 1
      # Validation query for Oracle
      #validation-query: select 1 from dual
      validation-query-timeout: 2000
      initial-size: 5
      max-active: 20
      min-idle: 5
      max-wait: 60000
      test-on-borrow: false
      test-on-return: false
      test-while-idle: true
      time-between-eviction-runs-millis: 60000
      min-evictable-idle-time-millis: 300000
      stat-view-servlet:
        enabled: true
      web-stat-filter:
        enabled: true
        url-pattern: /*
        exclusions: '*.js,*.gif,*.jpg,*.bmp,*.png,*.css,*.ico,/druid/*'
        session-stat-enable: true
        session-stat-max-count: 10
- The SqlLogInterceptor implementation: it logs the SQL at INFO level in a uniform format (a registration sketch follows the class).
@Slf4j
@Intercepts({
@Signature(type = StatementHandler.class, method = "query", args = {Statement.class, ResultHandler.class}),
@Signature(type = StatementHandler.class, method = "update", args = Statement.class),
@Signature(type = StatementHandler.class, method = "batch", args = Statement.class)
})
@Component
public class SqlLogInterceptor implements Interceptor {
private static final String DRUID_POOLED_PREPARED_STATEMENT = "com.alibaba.druid.pool.DruidPooledPreparedStatement";
private static final String T4C_PREPARED_STATEMENT = "oracle.jdbc.driver.T4CPreparedStatement";
private static final String ORACLE_PREPARED_STATEMENT_WRAPPER = "oracle.jdbc.driver.OraclePreparedStatementWrapper";
private Method oracleGetOriginalSqlMethod;
private Method druidGetSqlMethod;
@Override
public Object intercept(Invocation invocation) throws Throwable {
Statement statement;
Object firstArg = invocation.getArgs()[0];
if (Proxy.isProxyClass(firstArg.getClass())) {
statement = (Statement) SystemMetaObject.forObject(firstArg).getValue("h.statement");
} else {
statement = (Statement) firstArg;
}
MetaObject stmtMetaObj = SystemMetaObject.forObject(statement);
try {
statement = (Statement) stmtMetaObj.getValue("stmt.statement");
} catch (Exception e) {
// do nothing
}
if (stmtMetaObj.hasGetter("delegate")) {
//Hikari
try {
statement = (Statement) stmtMetaObj.getValue("delegate");
} catch (Exception ignored) {
}
}
String originalSql = null;
String stmtClassName = statement.getClass().getName();
if (DRUID_POOLED_PREPARED_STATEMENT.equals(stmtClassName)) {
try {
if (druidGetSqlMethod == null) {
Class<?> clazz = Class.forName(DRUID_POOLED_PREPARED_STATEMENT);
druidGetSqlMethod = clazz.getMethod("getSql");
}
Object stmtSql = druidGetSqlMethod.invoke(statement);
if (stmtSql instanceof String) {
originalSql = (String) stmtSql;
}
} catch (Exception e) {
e.printStackTrace();
}
} else if (T4C_PREPARED_STATEMENT.equals(stmtClassName)
|| ORACLE_PREPARED_STATEMENT_WRAPPER.equals(stmtClassName)) {
try {
if (oracleGetOriginalSqlMethod != null) {
Object stmtSql = oracleGetOriginalSqlMethod.invoke(statement);
if (stmtSql instanceof String) {
originalSql = (String) stmtSql;
}
} else {
Class<?> clazz = Class.forName(stmtClassName);
oracleGetOriginalSqlMethod = getMethodRegular(clazz, "getOriginalSql");
if (oracleGetOriginalSqlMethod != null) {
// OraclePreparedStatementWrapper is not a public class, so make the method accessible.
oracleGetOriginalSqlMethod.setAccessible(true);
Object stmtSql = oracleGetOriginalSqlMethod.invoke(statement);
if (stmtSql instanceof String) {
originalSql = (String) stmtSql;
}
}
}
} catch (Exception e) {
//ignore
}
}
if (originalSql == null) {
originalSql = statement.toString();
}
originalSql = originalSql.replaceAll("[\\s]+", StringPool.SPACE);
int index = indexOfSqlStart(originalSql);
if (index > 0) {
originalSql = originalSql.substring(index);
}
// Measure the SQL execution time
long start = SystemClock.now();
Object result = invocation.proceed();
long timing = SystemClock.now() - start;
// Resolve the MappedStatement so the log can include the statement id
Object target = PluginUtils.realTarget(invocation.getTarget());
MetaObject metaObject = SystemMetaObject.forObject(target);
MappedStatement ms = (MappedStatement) metaObject.getValue("delegate.mappedStatement");
// Print the SQL at INFO level
log.info(
format(
"\n============== Sql Start ==============" +
"\nExecute ID :{}" +
"\nExecute SQL :{}" +
"\nExecute Time:{} ms" +
"\n============== Sql End ==============\n",
ms.getId(), originalSql, timing));
return result;
}
@Override
public Object plugin(Object target) {
if (target instanceof StatementHandler) {
return Plugin.wrap(target, this);
}
return target;
}
/**
* Find the Method with the given name on the class or any of its superclasses.
*
* @param clazz class to search
* @param methodName method name
* @return the matching method, or null if none is found
*/
private Method getMethodRegular(Class<?> clazz, String methodName) {
if (Object.class.equals(clazz)) {
return null;
}
for (Method method : clazz.getDeclaredMethods()) {
if (method.getName().equals(methodName)) {
return method;
}
}
return getMethodRegular(clazz.getSuperclass(), methodName);
}
/**
* Find the index at which the SQL statement actually starts (SELECT/UPDATE/INSERT/DELETE).
*
* @param sql the raw statement string
* @return the index of the first SQL keyword, or -1 if none is found
*/
private int indexOfSqlStart(String sql) {
String upperCaseSql = sql.toUpperCase();
Set<Integer> set = new HashSet<>();
set.add(upperCaseSql.indexOf("SELECT "));
set.add(upperCaseSql.indexOf("UPDATE "));
set.add(upperCaseSql.indexOf("INSERT "));
set.add(upperCaseSql.indexOf("DELETE "));
set.remove(-1);
if (CollectionUtils.isEmpty(set)) {
return -1;
}
List<Integer> list = new ArrayList<>(set);
list.sort(Comparator.naturalOrder());
return list.get(0);
}
/**
* Format a message with the same {} placeholder rule as SLF4J.
* <p>
* use: format("my name is {}, and i like {}!", "L.cm", "Java")
*
* @param message the message template
* @param arguments the values substituted for the placeholders
* @return the formatted string
*/
public String format(@Nullable String message, @Nullable Object... arguments) {
// A null message yields an empty string
if (message == null) {
return StringPool.EMPTY;
}
// No arguments: return the template as-is
if (arguments == null || arguments.length == 0) {
return message;
}
StringBuilder sb = new StringBuilder((int) (message.length() * 1.5));
int cursor = 0;
int index = 0;
int argsLength = arguments.length;
for (int start, end; (start = message.indexOf(StringPool.LEFT_BRACE, cursor)) != -1 && (end = message.indexOf(StringPool.RIGHT_BRACE, start)) != -1 && index < argsLength; ) {
sb.append(message, cursor, start);
sb.append(arguments[index]);
cursor = end + 1;
index++;
}
sb.append(message.substring(cursor));
return sb.toString();
}
}
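The class above is annotated with @Component, and mybatis-spring-boot-starter (as well as MyBatis-Plus) normally registers Interceptor beans automatically. If your setup does not, here is a minimal sketch of registering it explicitly, assuming mybatis-spring-boot-starter's ConfigurationCustomizer is available:

import org.mybatis.spring.boot.autoconfigure.ConfigurationCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MybatisInterceptorConfig {

    // Adds SqlLogInterceptor to the MyBatis plugin chain so StatementHandler calls are intercepted.
    @Bean
    public ConfigurationCustomizer sqlLogCustomizer(SqlLogInterceptor sqlLogInterceptor) {
        return configuration -> configuration.addInterceptor(sqlLogInterceptor);
    }
}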
- The adjusted logback-spring.xml: the debug appender has been removed, and the SQL printed by the interceptor now goes to the info log.
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds" debug="false">
<contextName>logback</contextName>
<property name="log.path" value="log"/>
<property name="console_log_pattern"
value="%black(%contextName-) %red(%d{yyyy-MM-dd HH:mm:ss}) %green([%thread]) %green([%X{traceId}]) %highlight(%-5level) %boldMagenta(%logger{36}) - %blue(%msg%n)"/>
<property name="charset" value="UTF-8"/>
<!-- Console output -->
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
<encoder>
<pattern>${console_log_pattern}</pattern>
</encoder>
</appender>
<!-- File output, INFO level only -->
<appender name="info_file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<!-- <pattern>${console_log_pattern}</pattern>-->
<pattern>%n%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] [%logger{50}] %n%-5level: %msg%n</pattern>
<charset>${charset}</charset>
</encoder>
<!-- Rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- Daily archive path and file name pattern -->
<fileNamePattern>${log.path}/info/info-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>50MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- Days of log history to keep -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>INFO</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<appender name="warn_file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<!-- <pattern>${console_log_pattern}</pattern>-->
<pattern>%n%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] [%logger{50}] %n%-5level: %msg%n</pattern>
<charset>${charset}</charset>
</encoder>
<!-- Rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- Daily archive path and file name pattern -->
<fileNamePattern>${log.path}/warn/warn-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>50MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- Days of log history to keep -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<!-- This file only records WARN level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>WARN</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<appender name="error_file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<!-- <pattern>${console_log_pattern}</pattern>-->
<pattern>%n%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] [%logger{50}] %n%-5level: %msg%n</pattern>
<charset>${charset}</charset>
</encoder>
<!-- Rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- Daily archive path and file name pattern -->
<fileNamePattern>${log.path}/error/error-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>50MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- Days of log history to keep -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<!-- This file only records ERROR level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<root level="info">
<appender-ref ref="console"/>
<appender-ref ref="info_file"/>
<appender-ref ref="warn_file"/>
<appender-ref ref="error_file"/>
</root>
</configuration>
3.4 The result in the info log
- The log directory still contains the info, debug, error and warn folders.
- This is a capture of the console log (partially masked); the SQL statements in the red box are shown at the INFO level.
4. Ways to implement SQL log printing
- There is more than one way to print SQL logs. The approach above started from MyBatis' own statement logging, which is DEBUG level, and then moved to a hand-written MyBatis interceptor so the SQL is logged at INFO level.
- Another option is to implement it with p6spy; a rough sketch follows.
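A minimal sketch of the p6spy route, assuming p6spy 3.x: the JDBC driver is switched to com.p6spy.engine.spy.P6SpyDriver, the JDBC URL gets the jdbc:p6spy: prefix, and the output format is controlled by a MessageFormattingStrategy referenced from spy.properties (logMessageFormat). The class below is only illustrative:

import com.p6spy.engine.spy.appender.MessageFormattingStrategy;

// Hypothetical formatter: p6spy wraps the JDBC layer, so the SQL it reports already has
// parameters bound, and the log level is whatever the logger that p6spy writes to uses.
public class SqlLogFormat implements MessageFormattingStrategy {

    @Override
    public String formatMessage(int connectionId, String now, long elapsed,
                                String category, String prepared, String sql, String url) {
        if (sql == null || sql.trim().isEmpty()) {
            return "";
        }
        return "Execute SQL :" + sql.replaceAll("\\s+", " ") + " | Execute Time:" + elapsed + " ms";
    }
}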