At Commando.io, we’ve always wanted a web interface to allow us to grep and filter through our nginx access logs in a friendly manner. After researching a bit, we decided to go with LogStash and use Kibana as the web front-end for ElasticSearch.
LogStash is a free and open source tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use.
First, let's set up our centralized log server. This server will listen for events, using Redis as a broker, and send the events to ElasticSearch.
The following guide assumes that you are running CentOS 6.4 x64.
Centralized Log Server
cd $HOME
# Get ElasticSearch 0.90.1, add it as a service, and autostart
sudo yum -y install java-1.7.0-openjdk
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.1.zip
unzip elasticsearch-0.90.1.zip
rm -rf elasticsearch-0.90.1.zip
mv elasticsearch-0.90.1 elasticsearch
sudo mv elasticsearch /usr/local/share
cd /usr/local/share
sudo chmod 755 elasticsearch
cd $HOME
curl -L http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master | tar -xz
sudo mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/
rm -Rf *servicewrapper*
sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install
sudo service elasticsearch start
sudo chkconfig elasticsearch on
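Before moving on, it's worth confirming that ElasticSearch is actually up. It listens on port 9200 by default, so a quick curl should return a small JSON blob with the node and cluster info:

# Verify ElasticSearch is running (default HTTP port is 9200)
curl -XGET 'http://127.0.0.1:9200'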
# Add the required prerequisite remi yum repository
sudo rpm --import http://rpms.famillecollet.com/RPM-GPG-KEY-remi
sudo rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
sudo sed -i '0,/enabled=0/s//enabled=1/' /etc/yum.repos.d/remi.repo
# Install Redis and autostart
sudo yum -y install redis
sudo service redis start
sudo chkconfig redis on
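Likewise, one command verifies that Redis is accepting connections on its default port of 6379:

# Verify Redis is up; it should answer with PONG
redis-cli ping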
# Install LogStash
wget http://logstash.objects.dreamhost.com/release/logstash-1.1.13-flatjar.jar
sudo mkdir --parents /usr/local/bin/logstash
sudo mv logstash-1.1.13-flatjar.jar /usr/local/bin/logstash/logstash.jar
# Create LogStash configuration file
cd /etc
sudo touch logstash.conf
Use the following LogStash configuration for the centralized server:
# Contents of /etc/logstash.conf
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    type => "redis-input"
    data_type => "list"
    key => "logstash"
    format => "json_event"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
  }
}
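With this configuration, LogStash pops JSON events off the Redis list named logstash and indexes them into ElasticSearch. Once LogStash is running (next step), you can smoke-test the pipeline by pushing an event onto the list by hand. This is a minimal sketch, assuming the @-prefixed event schema that the json_event format of this LogStash era expects; adjust the fields if your version differs:

# Push a hand-crafted test event onto the Redis list (hypothetical minimal
# event; json_event in LogStash 1.1.x expects @-prefixed fields)
redis-cli RPUSH logstash '{"@message":"hello world","@source":"smoke-test","@type":"redis-input","@tags":[],"@fields":{},"@timestamp":"2013-07-01T00:00:00.000Z"}'

# Then confirm it shows up in ElasticSearch
curl 'http://127.0.0.1:9200/_search?q=hello&pretty=true'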
Finally, let’s start LogStash on the centralized server:
/usr/bin/java -jar /usr/local/bin/logstash/logstash.jar agent --config /etc/logstash.conf -w 1
In production, you'll most likely want to set up a service for LogStash instead of starting it manually each time. An init.d service script should do the trick.
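Here is a minimal sketch of such a script. It is illustrative rather than battle-tested: the paths match the install above, but locking, logging, and error handling are bare-bones, so adapt it to your environment:

#!/bin/sh
# /etc/init.d/logstash -- minimal illustrative sketch, adapt before production use
# chkconfig: 2345 90 10
# description: LogStash agent

JAR=/usr/local/bin/logstash/logstash.jar
CONF=/etc/logstash.conf
PIDFILE=/var/run/logstash.pid
LOGFILE=/var/log/logstash.log

case "$1" in
  start)
    echo "Starting logstash"
    nohup /usr/bin/java -jar $JAR agent --config $CONF -w 1 >> $LOGFILE 2>&1 &
    echo $! > $PIDFILE
    ;;
  stop)
    echo "Stopping logstash"
    [ -f $PIDFILE ] && kill "$(cat $PIDFILE)" && rm -f $PIDFILE
    ;;
  restart)
    $0 stop
    sleep 2
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

exit 0

Drop it in /etc/init.d/logstash, make it executable, then register it with chkconfig --add logstash and chkconfig logstash on.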
Woo Hoo, if you’ve made it this far, give yourself a big round of applause. Maybe grab a frosty adult beverage.
Now, let's set up each nginx web server.
Nginx Servers
cd $HOME
# Install Java
sudo yum -y install java-1.7.0-openjdk
# Install LogStash
wget http://logstash.objects.dreamhost.com/release/logstash-1.1.13-flatjar.jar
sudo mkdir --parents /usr/local/bin/logstash
sudo mv logstash-1.1.13-flatjar.jar /usr/local/bin/logstash/logstash.jar
# Create LogStash configuration file
cd /etc
sudo touch logstash.conf
Use the following LogStash configuration for each nginx server:
# Contents of /etc/logstash.conf
input {
  file {
    type => "nginx_access"
    path => ["/var/log/nginx/**"]
    exclude => ["*.gz", "error.*"]
    discover_interval => 10
  }
}

filter {
  grok {
    type => "nginx_access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}

output {
  redis {
    host => "hostname-of-centralized-log-server"
    data_type => "list"
    key => "logstash"
  }
}
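One assumption worth calling out: the %{COMBINEDAPACHELOG} grok pattern matches Apache's combined log format, which is also what nginx's built-in combined format produces. If you've defined a custom log_format in nginx.conf, the grok filter will fail to match. For reference, the default combined format looks like this:

# nginx's built-in "combined" log format (shown for reference; it is
# predefined, so you don't need to declare it yourself)
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';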
Start LogStash on each nginx server:
/usr/bin/java -jar /usr/local/bin/logstash/logstash.jar agent --config /etc/logstash.conf -w 2
Kibana - A Beautiful Web Interface
At this point, you’ve got your nginx web servers shipping their access logs to a centralized log server via Redis. The centralized log server is churning away, processing the events from Redis and storing them into ElasticSearch.
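If you want to sanity-check the pipeline at this point, two quick looks on the centralized server will tell you whether events are flowing: the Redis list should hover near zero as LogStash drains it, and ElasticSearch should be accumulating daily logstash-* indices.

# The Redis list should stay short as LogStash consumes events
redis-cli llen logstash

# ElasticSearch should show one logstash-YYYY.MM.DD index per day of data
curl 'http://127.0.0.1:9200/_aliases?pretty=true'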
All that is left is to set up a web interface to interact with the data in ElasticSearch. The clear choice for this is Kibana. Even though LogStash comes with its own web interface, it is highly recommended to use Kibana instead. In fact, the folks who maintain LogStash recommend Kibana and are going to deprecate their web interface in the near future. Moral of the story: use Kibana.
On your centralized log server, get and install Kibana.
cd $HOME
# Install Ruby
sudo yum -y install ruby
# Install Kibana
wget https://github.com/rashidkpc/Kibana/archive/v0.2.0.zip
unzip v0.2.0.zip
rm -rf v0.2.0.zip
sudo mv Kibana-0.2.0 /srv/kibana
# Edit Kibana configuration file
cd /srv/kibana
sudo nano KibanaConfig.rb
# Set Elasticsearch = "localhost:9200"
sudo gem install bundler
sudo bundle install
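For reference, the relevant settings in KibanaConfig.rb look roughly like the following. The constant names are as we recall them from Kibana 0.2.0; double-check against your copy, since KibanaHost in particular is an assumption here:

# Excerpt from KibanaConfig.rb (Kibana 0.2.0) -- verify names against your copy
Elasticsearch = "localhost:9200"   # where Kibana queries ElasticSearch
KibanaPort = 5601                  # the port used in the URL below
KibanaHost = '0.0.0.0'             # bind address (assumption; check your file)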
# Start Kibana
ruby kibana.rb
Simply open up your browser and navigate to http://hostname-of-centralized-log-server:5601 and you should see the Kibana interface load right up.
Lastly, just like for ElasticSearch, you'll probably want Kibana to run as a service and autostart. An init.d script along the same lines as the LogStash sketch above, with ruby /srv/kibana/kibana.rb swapped in as the command, should do the trick.
Congratulations, you're now shipping your nginx access logs like a boss to ElasticSearch and using the Kibana web interface to grep and filter them.
Interested in automating this entire install of ElasticSearch, Redis, LogStash, and Kibana on your infrastructure? We can help! Commando.io is a web based interface for managing servers and running remote executions over SSH. Request a beta invite today, and start managing servers easily online.