ELK is an end-to-end solution; the name is an acronym of three software products:
Elasticsearch: log storage and search
Logstash: log collection, parsing, and processing
Kibana: log visualization
Features:
Centralized querying and management of distributed log data
System monitoring, covering both hardware and individual application components
Troubleshooting
Security information and event management
Reporting
Environment: CentOS 7
Disable iptables
Disable SELinux
Deployment plan:
192.168.1.251  Kibana          (public IP: 139.9.85.6)
192.168.1.242  Elasticsearch
192.168.1.250  Logstash
1. Setting up JDK 1.8
(1) Install the JDK from the binary tarball:
JDK 1.8 download page: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[root@elasticsearch01 ~]# tar -zxf jdk-8u201-linux-x64.tar.gz
[root@elasticsearch01 ~]# mv jdk1.8.0_201 /usr/local/    # move the extracted directory to the install location /usr/local/
[root@elasticsearch01 ~]# /usr/local/jdk1.8.0_201/bin/java -version    # verify the installation
(2) Configure the Java environment variables in /etc/profile:
export JAVA_HOME=/usr/local/jdk1.8.0_201/
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
[root@elasticsearch01 ~]# source /etc/profile
[root@elasticsearch01 ~]# java -version    # verify the environment variables
2. Installing Elasticsearch
(1) Install Elasticsearch (the tarball is pre-built; download, extract, and it is ready to use):
[root@elasticsearch01 ~]# tar -zxf elasticsearch-6.6.0.tar.gz
[root@elasticsearch01 ~]# mv elasticsearch-6.6.0 /usr/local/
(2) Edit the Elasticsearch configuration file:
[root@elasticsearch01 ~]# cp /usr/local/elasticsearch-6.6.0/config/elasticsearch.yml /usr/local/elasticsearch-6.6.0/config/elasticsearch.yml-bak
[root@elasticsearch01 ~]# vim /usr/local/elasticsearch-6.6.0/config/elasticsearch.yml
# data directory
path.data: /usr/local/elasticsearch-6.6.0/data
# log directory
path.logs: /usr/local/elasticsearch-6.6.0/logs
# listen address
network.host: 0.0.0.0    # listen on all interfaces
# listen port
http.port: 9200
(3) Adjust the JVM memory limits (on a machine with little memory, leave the defaults alone; the default heap size is 1 GB):
[root@elasticsearch01 ~]# vim /usr/local/elasticsearch-6.6.0/config/jvm.options
-Xms2048M
-Xmx2048M
(4) Raise the open-file and process limits:
Maximum number of open files:
[root@elasticsearch01 ~]# vim /etc/security/limits.conf
* - nofile 65536
Maximum number of processes:
[root@elasticsearch01 ~]# vim /etc/security/limits.d/20-nproc.conf
* - nproc 10240
After the changes, add the following line; the new limits take effect after logging out and back in:
[root@elasticsearch01 ~]# vim /etc/pam.d/login
session required /lib/security/pam_limits.so
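A quick sanity check, assuming you log out and back in first (the expected values match the limits configured above):
ulimit -n    # expect 65536 (max open files)
ulimit -u    # expect 10240 (max user processes)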
(5) Tune the kernel parameters:
[root@elasticsearch01 ~]# vim /etc/sysctl.conf
vm.max_map_count = 262144
[root@elasticsearch01 ~]# sysctl -p
(6) Create the elk user that will run Elasticsearch:
[root@elasticsearch01 ~]# useradd -s /sbin/nologin elk
[root@elasticsearch01 ~]# chown -R elk:elk /usr/local/elasticsearch-6.6.0/
[root@elasticsearch01 ~]# su - elk -s /bin/bash
[elk@elasticsearch01 ~]$ nohup /usr/local/elasticsearch-6.6.0/bin/elasticsearch &
(7) Verify that the startup succeeded:
[root@elasticsearch01 ~]# curl http://192.168.1.242:9200
{
"name" : "6CbRLkm",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "NINwpaL_ReWp75be3QDV7g",
"version" : {
"number" : "6.6.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "a9861f4",
"build_date" : "2019-01-24T11:27:09.439740Z",
"build_snapshot" : false,
"lucene_version" : "7.6.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
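Beyond the banner above, the cluster health and index APIs are handy for checking state (a minimal check; adjust the address to your ES host):
curl -s 'http://192.168.1.242:9200/_cluster/health?pretty'    # status should be green or yellow on a single node
curl -s 'http://192.168.1.242:9200/_cat/indices?v'            # lists all indices once data starts arriving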
(8) Notes:
Elasticsearch startup notes:
If Elasticsearch listens only on 127.0.0.1, it starts without any extra tuning.
If other machines need to reach it, it must listen on a real interface (0.0.0.0 or the internal address),
and the system parameters above must be adjusted first or it will refuse to start. This is important!
Recommendations for the listen interface:
For learning and testing, bind to 127.0.0.1.
On a cloud server, be sure to block public access to ports 9200 and 9300 in the security group.
In a self-hosted environment, bind to the internal interface; listening on a public address invites intrusion.
3. Installing Kibana
(1) Install Kibana:
[root@elasticsearch01 ~]# tar -zxf kibana-6.6.0-linux-x86_64.tar.gz
[root@elasticsearch01 ~]# mv kibana-6.6.0-linux-x86_64 /usr/local/kibana-6.6.0
(2) Edit the Kibana configuration file:
[root@elasticsearch01 ~]# cp /usr/local/kibana-6.6.0/config/kibana.yml /usr/local/kibana-6.6.0/config/kibana.yml-bak
[root@elasticsearch01 ~]# vim /usr/local/kibana-6.6.0/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.242:9200"    # or equivalently: elasticsearch.hosts: ["http://192.168.1.242:9200"]
kibana.index: ".kibana"
logging.dest: /tmp/kibana.log    # log file path
(3) Create the log file:
[root@elasticsearch01 ~]# touch /tmp/kibana.log
[root@elasticsearch01 ~]# chmod 777 /tmp/kibana.log
(4) Start and access Kibana
Start Kibana in the background: nohup /usr/local/kibana-6.6.0/bin/kibana &
To access Kibana, port 5601 must be reachable (a quick local check follows).
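Once it is running, Kibana's status API gives a quick health check (a minimal check, run on the Kibana host itself):
curl -s http://127.0.0.1:5601/api/status    # returns a JSON status document; the overall state should be green when Kibana can reach ES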
(5) Simple authentication for Kibana using Nginx
Kibana security notes:
By default there is no password, so anyone can access it.
On a cloud provider, you can restrict access to specific source IPs in the security group.
The recommended approach is to put Nginx in front of Kibana and require a username and password.
1) Deploy Nginx and use it as a reverse proxy
Compile and install Nginx:
[root@kibana01 ~]# yum install -y lrzsz wget gcc gcc-c++ make pcre pcre-devel zlib zlib-devel
[root@kibana01 ~]# wget 'http://nginx.org/download/nginx-1.14.2.tar.gz'
[root@kibana01 ~]# tar -zxvf nginx-1.14.2.tar.gz
[root@kibana01 ~]# cd nginx-1.14.2
[root@kibana01 ~]# useradd -s /sbin/nologin nginx
./configure --prefix=/usr/local/nginx && make && make install
Set the Nginx environment variable:
[root@kibana01 ~]# vim /etc/profile
export PATH=$PATH:/usr/local/nginx/sbin/
[root@kibana01 ~]# source /etc/profile    # apply the environment variable
Nginx supports two kinds of restriction:
Restricting by source IP: more secure, but the client IPs must not change.
Requiring a username and password: the more general approach.
Edit the nginx configuration file; the auth directives go inside the location block:
[root@kibana01 ~]# vim /usr/local/nginx/conf/nginx.conf
.. ..
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
auth_basic "Input Password:";                         ## name of the authentication realm; any string works
auth_basic_user_file "/usr/local/nginx/conf/pass";    ## password file for authentication; use an absolute path
proxy_pass http://192.168.1.242:5601;
proxy_read_timeout 60;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
2) Generate the password file and create users and passwords
Use the htpasswd command to create the account file; this requires httpd-tools to be installed.
[root@kibana01 ~]# yum -y install httpd-tools
[root@kibana01 ~]# htpasswd -c /usr/local/nginx/conf/pass tom    ## create the password file; -c creates it and is required for the first user; enter the password twice
New password:
Re-type new password:
Adding password for user tom
[root@kibana01 ~]# htpasswd /usr/local/nginx/conf/pass jerry    ## append a user; do not use -c here, or the existing users and passwords are overwritten
New password:
Re-type new password:
Adding password for user jerry
[root@kibana01 ~]# cat /usr/local/nginx/conf/pass
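Before trying a browser, the proxy and the basic auth can be verified from the shell (a minimal check; substitute the password you set for tom, and start nginx first if it is not running):
/usr/local/nginx/sbin/nginx                                                    # start nginx (use "nginx -s reload" after config changes)
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/                    # without credentials nginx should answer 401
curl -s -o /dev/null -w '%{http_code}\n' -u tom:PASSWORD http://127.0.0.1/    # with valid credentials it should proxy to Kibana (200 or 302)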
Access test: http://139.9.85.6    # browse to the public address, authenticate, and you are forwarded to the Kibana page
4. Installing Logstash
Logstash depends on a Java environment.
(1) Install the JDK from the binary tarball:
JDK 1.8 download page: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[root@logstash01 ~]# tar -zxf jdk-8u201-linux-x64.tar.gz
[root@logstash01 ~]# mv jdk1.8.0_201 /usr/local/    # move the extracted directory to the install location /usr/local/
[root@logstash01 ~]# /usr/local/jdk1.8.0_201/bin/java -version    # verify the installation
(2) Configure the Java environment variables in /etc/profile:
export JAVA_HOME=/usr/local/jdk1.8.0_201/
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
[root@logstash01 ~]# source /etc/profile
[root@logstash01 ~]# java -version    # verify the environment variables
(3) Install Logstash:
[root@logstash01 ~]# tar -zxf logstash-6.6.0.tar.gz
[root@logstash01 ~]# mv logstash-6.6.0 /usr/local/
(4) Adjust the Logstash JVM settings (the default heap size is 1 GB; on a small machine it can be lowered, as below):
[root@logstash01 ~]# vim /usr/local/logstash-6.6.0/config/jvm.options
-Xms200M
-Xmx200M
What Logstash supports
A Logstash pipeline consists of inputs and outputs.
Inputs: standard input, log files, and so on.
Outputs: standard output, ES, and so on.
The simplest possible configuration, /usr/local/logstash-6.6.0/config/logstash.conf:
input{
stdin{}
}
output{
stdout{
codec=>rubydebug
}
}
Starting and testing Logstash
Foreground start: /usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf
## when "Successfully started Logstash API endpoint {:port=>9600}" appears, Logstash is running and you can type input directly
Background start: nohup /usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf --config.reload.automatic &
## --config.reload.automatic reloads the configuration file automatically, without restarting Logstash
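The API endpoint mentioned in the startup message can also be queried directly to confirm Logstash is up (a minimal check, run on the Logstash host):
curl -s 'http://127.0.0.1:9600/?pretty'    # returns node information from the Logstash monitoring API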
Testing standard input and output
Type: shijiange
Output:
{
"message" => "shijiange",
"host" => "shijiange51",
"@timestamp" => 2019-02-24T09:24:51.921Z,
"@version" => "1"
}
Reading a log file, /usr/local/logstash-6.6.0/config/logstash.conf:
input {
file {
path => "/var/log/secure"
}
}
output{
stdout{
codec=>rubydebug
}
}
5. Testing log collection with Elasticsearch and Logstash
(1) Create a test configuration file:
(In the Logstash install directory, create a test file logstash-test.conf that uses Elasticsearch as the Logstash backend.
It defines both stdout and elasticsearch as outputs, so each event is printed to the screen and written to Elasticsearch at the same time.
Both Elasticsearch and Logstash must be running; start Elasticsearch first, then Logstash.)
[root@logstash01 ~]# vim /usr/local/logstash-6.6.0/config/logstash-test.conf
input { stdin { } }
output {
stdout { codec => rubydebug }
elasticsearch {
hosts => ["http://192.168.1.242:9200"] ## the Elasticsearch address
index => "logstash-%{+YYYY-MM}"
}
}
(2) Start the service:
Run it in the foreground:
[root@logstash01 logstash-6.6.0]# /usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash-test.conf
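After typing a few test lines into stdin, you can confirm on the Elasticsearch side that the index defined above was created (a minimal check; the index name follows the "logstash-%{+YYYY-MM}" pattern, e.g. logstash-2019-02):
curl -s 'http://192.168.1.242:9200/_cat/indices?v' | grep logstash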
Lab environment for the following examples:
192.168.237.50: ES
192.168.237.51: Logstash
Notes on combining Logstash and ES:
Logstash can read logs and send them to ES,
but it is fairly heavyweight as a log collector; this is optimized later.
Logstash configuration for sending logs to the ES database, /usr/local/logstash-6.6.0/config/logstash.conf:
input {
file {
path => "/usr/local/nginx/logs/access.log"
}
}
output {
elasticsearch {
hosts => ["http://192.168.237.50:9200"]
}
}
Reloading the configuration
After changing the output section, the configuration can be reloaded in place:
kill -1 <logstash process id>
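For example (a hypothetical one-liner, assuming a single Logstash process on the host):
kill -1 $(pgrep -f logstash | head -1)    # SIGHUP makes Logstash re-read its configuration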
Essentials for collecting logs with Logstash:
The log file must have new lines being written to it.
Logstash must be able to reach Elasticsearch.
Querying the data in Kibana:
GET /logstash-2019.02.20/_search?q=*
You can also create an index pattern in Kibana and browse the logs directly.
Simple Kibana queries:
Query by field, e.g. message: "_msearch"
Query by field: select the value to filter on
The ELK flow:
Logstash reads the logs -> ES stores the data -> Kibana displays it
The problem with shipping whole log lines:
We usually do not care about the raw message line as a whole;
the message needs to be split into fields, which requires regular expressions.
Regular expressions
A regular expression uses predefined symbols to express a meaning;
for example, . stands for any single character.
To use a regex metacharacter as an ordinary character, escape it with a backslash.
Two flavors of regular expressions:
Basic regular expressions
Extended regular expressions
Basic regular expressions:
. any single character
* the preceding character, zero or more times
[abc] any one character inside the brackets
[^abc] any character not inside the brackets
[0-9] a digit
[a-z] a lowercase letter
[A-Z] an uppercase letter
[a-zA-Z] any letter
[a-zA-Z0-9] any letter or digit
[^0-9] a non-digit
^xx starts with xx
xx$ ends with xx
\d any digit
\s any whitespace character
Extended regular expressions add further symbols on top of the basic set:
? the preceding character, zero or one time
+ the preceding character, one or more times
{n} the preceding character exactly n times
{a,b} the preceding character a to b times
{,b} the preceding character zero to b times
{a,} the preceding character a or more times
(string1|string2) string1 or string2
A simple IP extraction
1.1.1.1 114.114.114.114 255.277.277.277
1-3 digits, a dot, 1-3 digits, a dot, 1-3 digits, a dot, 1-3 digits:
[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}
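A quick way to try that pattern on the command line (illustrative only):
echo "1.1.1.1 114.114.114.114 255.277.277.277" | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
# prints every dotted group of 1-3 digit numbers, including the invalid 255.277.277.277 -- the pattern does not validate octet ranges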
Extracting multiple fields
The Nginx log format
192.168.237.1 - - [24/Feb/2019:17:48:47 +0800] "GET /shijiange HTTP/1.1" 404 571 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
client IP address
access time
request method (GET/POST)
request URL
status code
response body size
Referer
User-Agent
Extracting log fields with Logstash regexes
You need to understand regexes; Logstash supports both basic and extended regular expressions.
You also need to understand Grok; Kibana's Grok Debugger is a good way to learn how Logstash extracts fields.
Extracting the Nginx log with Grok
Grok uses (?<xxx>pattern) to capture a field named xxx.
Extracting the client IP: (?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})
Extracting the time: \[(?<requesttime>[^ ]+ \+[0-9]+)\]
The full Grok pattern for the Nginx log:
(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"
Logs from Tomcat and other services can be extracted in a similar way.
Logstash configuration using the regex to parse the Nginx log:
input {
file {
path => "/usr/local/nginx/logs/access.log"
}
}
filter {
grok {
match => {
"message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
}
}
}
output {
elasticsearch {
hosts => ["http://192.168.237.50:9200"]
}
}
Watch out for lines that the regex fails to match:
echo "shijiange" >> /usr/local/nginx/logs/access.log
Do not send events to ES when the regex extraction fails:
output{
if "_grokparsefailure" not in [tags] and "_dateparsefailure" not in [tags] {
elasticsearch {
hosts => ["http://192.168.237.50:9200"]
}
}
}
Notes on removing fields
Only fields inside _source can be removed;
fields outside _source cannot be removed.
Logstash configuration for dropping unneeded fields:
filter {
grok {
match => {
"message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
}
remove_field => ["message","@version","path"]
}
}
Why remove fields:
It keeps the ES database smaller
and improves search performance.
The default ELK timeline
By default the timestamp is the time the log event was shipped,
but Nginx itself records the user's actual access time.
When analyzing Nginx logs, use the access time from the log, not the shipping time.
Logstash configuration that reads the whole Nginx log from the beginning:
input {
file {
path => "/usr/local/nginx/logs/access.log"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
Add a date filter to the Logstash filter section to parse timestamps such as 24/Feb/2019:21:08:34 +0800:
filter {
grok {
match => {
"message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
}
remove_field => ["message","@version","path"]
}
date {
match => ["requesttime", "dd/MMM/yyyy:HH:mm:ss Z"]
target => "@timestamp"
}
}
Count the requests in the Nginx log and compare the result with what the Kibana page shows:
cat /usr/local/nginx/logs/access.log |awk '{print $4}'|cut -b 1-19|sort |uniq -c
Different time formats need a matching date pattern when overriding @timestamp:
20/Feb/2019:14:50:06 -> dd/MMM/yyyy:HH:mm:ss
2016-08-24 18:05:39,830 -> yyyy-MM-dd HH:mm:ss,SSS
Collecting logs with Logstash
Logstash depends on Java and is relatively heavy as a collector, consuming a lot of memory and CPU.
Filebeat is much lighter and uses far fewer server resources,
so Filebeat is normally chosen for log collection.
Installing Filebeat
Download the binary tarball,
then extract it and move it under /usr/local/ to complete the installation.
Filebeat binary installation:
cd /usr/local/src/
tar -zxf filebeat-6.6.0-linux-x86_64.tar.gz
mv filebeat-6.6.0-linux-x86_64 /usr/local/filebeat-6.6.0
Deployment layout:
192.168.237.50: Kibana, ES
192.168.237.51: Filebeat
Filebeat configuration for sending logs to ES, /usr/local/filebeat-6.6.0/filebeat.yml:
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /usr/local/nginx/logs/access.log
output:
  elasticsearch:
    hosts: ["192.168.237.50:9200"]
Starting Filebeat
Foreground start: /usr/local/filebeat-6.6.0/filebeat -e -c /usr/local/filebeat-6.6.0/filebeat.yml
Background start: nohup /usr/local/filebeat-6.6.0/filebeat -e -c /usr/local/filebeat-6.6.0/filebeat.yml >/tmp/filebeat.log 2>&1 &
Viewing the log data in Kibana:
GET /xxx/_search?q=*
Create an index pattern and browse the documents.
Filebeat -> ES -> Kibana
This setup is fine for viewing logs,
but not well suited to detailed log analysis.
Filebeat vs. Logstash
Filebeat: lightweight, but it does not support regexes, cannot remove fields, and so on.
Logstash: heavier, but it supports regexes, removing fields, and more.
Architectures demonstrated here:
Logstash -> Elasticsearch -> Kibana
Filebeat -> Elasticsearch -> Kibana
Filebeat -> Logstash -> Elasticsearch -> Kibana
Deployment layout:
192.168.237.50: Kibana, ES
192.168.237.51: Logstash, Filebeat
Filebeat configuration for sending to Logstash:
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /usr/local/nginx/logs/access.log
output:
  logstash:
    hosts: ["192.168.237.51:5044"]
Logstash configuration listening on port 5044 for the logs Filebeat sends over:
input {
beats {
host => '0.0.0.0'
port => 5044
}
}
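Once Logstash has been restarted with this input, confirm it is listening before pointing Filebeat at it (a minimal check; ss ships with CentOS 7):
ss -lntp | grep 5044    # the beats input should show a listener on port 5044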
Viewing the data in Kibana:
GET /xxx/_search?q=*
Create an index pattern and view the data.
Removing unnecessary fields in Logstash:
Filebeat adds quite a few fields that are not needed:
remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"]
Filebeat is much easier to deploy in bulk than Logstash:
Logstash listens on the internal network,
and the Filebeat instances send to that internal Logstash.
New architecture:
Filebeat (many hosts) \
Filebeat (many hosts)  -> Logstash (regex parsing) -> Elasticsearch (storage) -> Kibana (display)
Filebeat (many hosts) /
Why JSON logs help
Raw logs need regex matching, which is cumbersome;
JSON-formatted logs can be split into fields directly, without any regex.
Configuring Nginx to write JSON-formatted logs:
log_format json '{"@timestamp":"$time_iso8601",'
'"clientip":"$remote_addr",'
'"status":$status,'
'"bodysize":$body_bytes_sent,'
'"referer":"$http_referer",'
'"ua":"$http_user_agent",'
'"handletime":$request_time,'
'"url":"$uri"}';
access_log logs/access.log;
access_log logs/access.json.log json;
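After reloading Nginx and making a request, every line in access.json.log should be valid JSON; a quick check (illustrative, using the python interpreter shipped with CentOS 7):
/usr/local/nginx/sbin/nginx -s reload                                 # pick up the new log_format
curl -s -o /dev/null http://127.0.0.1/                                # generate one access log line
tail -1 /usr/local/nginx/logs/access.json.log | python -m json.tool   # an error here means the format is broken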
Deployment layout:
192.168.237.50: Kibana, ES
192.168.237.51: Logstash, Filebeat
Filebeat configuration for collecting the JSON-formatted log:
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /usr/local/nginx/logs/access.json.log
output:
  logstash:
    hosts: ["192.168.237.51:5044"]
The earlier regex-based Logstash filter, kept here as a backup for reference:
filter {
grok {
match => {
"message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
}
remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"]
}
date {
match => ["requesttime", "dd/MMM/yyyy:HH:mm:ss Z"]
target => "@timestamp"
}
}
Logstash configuration for parsing the JSON log:
input {
beats {
host => '0.0.0.0'
port => 5044
}
}
filter {
json { source => "message" remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"] }
}
output {
elasticsearch {
hosts => ["http://192.168.237.50:9200"]
}
}
Collecting multiple logs
So far only a single Nginx log is collected;
often several different logs need to be collected.
Filebeat configuration for collecting multiple logs:
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /usr/local/nginx/logs/access.json.log
  fields:
    type: access
  fields_under_root: true
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /var/log/secure
  fields:
    type: secure
  fields_under_root: true
output:
  logstash:
    hosts: ["192.168.237.51:5044"]
How Logstash tells the two logs apart:
Filebeat adds a field to each event to distinguish them,
and Logstash branches on that field.
Logstash configuration that branches on the type field:
input {
beats {
host => '0.0.0.0'
port => 5044
}
}
filter {
if [type] == "access" {
json {
source => "message"
remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"]
}
}
}
output{
if [type] == "access" {
elasticsearch {
hosts => ["http://192.168.237.50:9200"]
index => "access-%{+YYYY.MM.dd}"
}
}
else if [type] == "secure" {
elasticsearch {
hosts => ["http://192.168.237.50:9200"]
index => "secure-%{+YYYY.MM.dd}"
}
}
}
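With both pipelines flowing, the two index patterns should show up on the ES side (a minimal check):
curl -s 'http://192.168.237.50:9200/_cat/indices?v' | grep -E 'access-|secure-'    # expect daily access-YYYY.MM.dd and secure-YYYY.MM.dd indices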
Create the index patterns in the Kibana UI:
an access index pattern
and a secure index pattern.
The architecture so far:
Filebeat (many hosts) \
Filebeat (many hosts)  -> Logstash (regex parsing) -> Elasticsearch (storage) -> Kibana (display)
Filebeat (many hosts) /
Problems with this architecture:
When Logstash runs out of capacity and more Logstash instances are added,
the Filebeat configurations can easily become inconsistent.
Improved architecture:
Filebeat (many hosts) \                   / Logstash
Filebeat (many hosts)  -> Redis or Kafka -> Logstash (regex parsing) -> Elasticsearch (storage) -> Kibana (display)
Filebeat (many hosts) /                   \ Logstash
Deployment layout:
192.168.237.50: Kibana, ES
192.168.237.51: Logstash, Filebeat, Redis
Building the Redis server:
yum install -y wget net-tools gcc gcc-c++ make tar openssl openssl-devel cmake
cd /usr/local/src
wget 'http://download.redis.io/releases/redis-4.0.9.tar.gz'
tar -zxf redis-4.0.9.tar.gz
cd redis-4.0.9
make
mkdir -pv /usr/local/redis/conf /usr/local/redis/bin
cp src/redis* /usr/local/redis/bin/
cp redis.conf /usr/local/redis/conf
Verifying the Redis server
Edit the Redis configuration (daemonize, dir, requirepass);
set the password to shijiange,
then verify with set and get operations.
Starting Redis:
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis.conf
Simple Redis operations:
/usr/local/redis/bin/redis-cli
auth 'shijiange'
set name shijiange
get name
Deployment layout:
192.168.237.50: Kibana, ES
192.168.237.51: Logstash, Filebeat, Redis
Filebeat configuration for writing to Redis:
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /usr/local/nginx/logs/access.json.log
  fields:
    type: access
  fields_under_root: true
output:
  redis:
    hosts: ["192.168.237.51"]
    port: 6379
    password: 'shijiange'
    key: 'access'
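Before wiring up Logstash, confirm that events are accumulating in the Redis list (a minimal check; the key name matches the Filebeat config above):
/usr/local/redis/bin/redis-cli -a shijiange llen access    # the length of the 'access' list should grow as Nginx is hit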
Logstash configuration for reading the data back out of Redis:
input {
redis {
host => '192.168.237.51'
port => 6379
key => "access"
data_type => "list"
password => 'shijiange'
}
}
Improved architecture (recap):
Filebeat (many hosts) \                   / Logstash
Filebeat (many hosts)  -> Redis or Kafka -> Logstash (regex parsing) -> Elasticsearch (storage) -> Kibana (display)
Filebeat (many hosts) /                   \ Logstash
Lab environment:
192.168.237.51: Logstash, Kafka, Filebeat
Kafka
Kafka depends on ZooKeeper,
and both depend on Java.
Installing ZooKeeper, Kafka's dependency:
Official site: https://zookeeper.apache.org/
Download the ZooKeeper binary tarball,
then extract it into place to complete the installation.
ZooKeeper installation commands:
tar -zxf zookeeper-3.4.13.tar.gz
mv zookeeper-3.4.13 /usr/local/
cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.13/conf/zoo.cfg
Starting ZooKeeper
Change the configuration: clientPortAddress=0.0.0.0
Start it: /usr/local/zookeeper-3.4.13/bin/zkServer.sh start
Getting Kafka
Kafka official site: http://kafka.apache.org/
Download the Kafka binary tarball,
then extract it into place to complete the installation.
Kafka installation commands:
cd /usr/local/src/
tar -zxf kafka_2.11-2.1.1.tgz
mv kafka_2.11-2.1.1 /usr/local/kafka_2.11
Starting Kafka
Edit the Kafka configuration: set the listen address and the ZooKeeper connection address.
Foreground start: /usr/local/kafka_2.11/bin/kafka-server-start.sh /usr/local/kafka_2.11/config/server.properties
Background start: nohup /usr/local/kafka_2.11/bin/kafka-server-start.sh /usr/local/kafka_2.11/config/server.properties >/tmp/kafka.log 2>&1 &
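A quick way to confirm the broker is up and to pre-create the topic used below (a sketch; with auto topic creation enabled the topic would also be created on first write):
/usr/local/kafka_2.11/bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --create --topic shijiange --partitions 1 --replication-factor 1
/usr/local/kafka_2.11/bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --list    # list existing topics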
Filebeat configuration for sending logs to Kafka:
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
    - /usr/local/nginx/logs/access.json.log
  fields:
    type: access
  fields_under_root: true
output:
  kafka:
    hosts: ["192.168.237.51:9092"]
    topic: shijiange
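To see the raw events Filebeat publishes, attach a console consumer to the topic (a minimal check; Ctrl-C to stop):
/usr/local/kafka_2.11/bin/kafka-console-consumer.sh --bootstrap-server 192.168.237.51:9092 --topic shijiange --from-beginning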
Logstash configuration for reading from Kafka:
input {
kafka {
bootstrap_servers => "192.168.237.51:9092"
topics => ["shijiange"]
group_id => "shijiange"
codec => "json"
}
}
filter {
if [type] == "access" {
json {
source => "message"
remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"]
}
}
}
output {
stdout {
codec=>rubydebug
}
}
Inspecting the Kafka queues
List the consumer groups: ./kafka-consumer-groups.sh --bootstrap-server 192.168.237.51:9092 --list
Describe a group's queue and lag: ./kafka-consumer-groups.sh --bootstrap-server 192.168.237.51:9092 --group shijiange --describe