
Loading MySQL data into elasticsearch with logstash

category Back-end 2019.03.14 00:25

This walkthrough assumes Docker is already installed.

Create the Logstash directories

cd /home/madosa/app
mkdir logstash
cd logstash
mkdir config data pipeline
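
For reference, after these commands the directory tree should look roughly like this (the drivers folder under config is added in a later step):

/home/madosa/app/logstash
├── config    # logstash settings (jvm.options, log4j2.properties, logstash.yml, pipelines.yml, startup.options)
├── data      # logstash runtime data
└── pipeline  # pipeline definitions (logstash.conf)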

Create the configuration files

cd config
vi jvm.options

## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## Locale
# Set the locale language
#-Duser.language=en

# Set the locale country
#-Duser.country=US

# Set the locale variant, if any
#-Duser.variant=

## basic

# set the I/O temp directory
#-Djava.io.tmpdir=$HOME

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
#-Djna.nosys=true

# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true

# Force Compilation
-Djruby.jit.threshold=0

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof

## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}

# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom

vi log4j2.properties

status = error
name = LogstashPropertiesConfig

appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console

vi logstash.yml

http.host: "0.0.0.0"

vi pipelines.yml

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

vi startup.options

################################################################################
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash and is not used by Logstash itself. It should
# automagically use the init system (systemd, upstart, sysv, etc.) that your
# Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################

# Override Java location
#JAVACMD=/usr/bin/java

# Set a home directory
LS_HOME=/usr/share/logstash

# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR="${LS_HOME}/config"

# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"

# Arguments to pass to java
LS_JAVA_OPTS=""

# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid

# user and group id to be invoked as
LS_USER=logstash
LS_GROUP=logstash

# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log

# Open file limit
LS_OPEN_FILES=16384

# Nice level
LS_NICE=19

# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM

Copy the MySQL connector (Connector/J) into a drivers folder. The folder is created inside config, so it is mounted into the container together with the rest of the configuration; this is why the pipeline below references the jar under /usr/share/logstash/config/drivers.

mkdir drivers

cp mysql-connector-java-5.1.18-bin.jar drivers
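
If the jar is not already on the server, it can be fetched from Maven Central; note that the Maven artifact does not carry the -bin suffix, so the rename below (the download URL is an assumption, though it follows the standard Maven Central layout) keeps the filename consistent with the path used in the pipeline:

wget -O drivers/mysql-connector-java-5.1.18-bin.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar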

Configure the pipeline

cd ../pipeline

vi logstash.conf

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/config/drivers/mysql-connector-java-5.1.18-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://ip:43306/schema"
    jdbc_user => "your DB user"
    jdbc_password => "your DB password"
    lowercase_column_names => false
    statement => "SELECT userId, deptId, deptName, userName FROM user"
  }
}

# The filter part of this file is commented out to indicate that it is optional.
#filter {
#
#}

output {
  elasticsearch {
    hosts => ["ip:9200"]
    index => "schema"
    document_type => "user"
    # use a column the SELECT actually returns as the document id;
    # a field that is not selected would leave the literal "%{id}" as the id for every row
    document_id => "%{userId}"
    codec => json_lines
  }
  stdout {
    # codec => rubydebug
    codec => json_lines
  }
}
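
The statement above reloads the entire user table on every run. If you instead want Logstash to poll on a schedule and only pick up new rows, the jdbc input supports a cron-style schedule and a tracking column; a minimal sketch, assuming userId is a monotonically increasing numeric key, would adjust the input like this:

input {
  jdbc {
    # ... same driver/connection settings as above ...
    schedule => "*/5 * * * *"          # poll every 5 minutes (cron syntax)
    use_column_value => true
    tracking_column => "userId"
    tracking_column_type => "numeric"
    statement => "SELECT userId, deptId, deptName, userName FROM user WHERE userId > :sql_last_value"
  }
}

By default the last-seen value is stored in $HOME/.logstash_jdbc_last_run inside the container, so you may want to point last_run_metadata_path at one of the mounted directories if it should survive re-creating the container.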

Set directory and file permissions. The official Logstash image runs as the logstash user with UID/GID 1000, so the mounted directories need to be writable by that group.

cd /home/madosa/app/logstash

sudo chmod -R g+w ./*

sudo chgrp -R 1000 ./*

Create the Logstash container. The --add-host option adds a host-name:ip entry to the container's /etc/hosts, which is useful when the pipeline needs to reach MySQL or Elasticsearch by hostname.

sudo docker create --add-host=host-name:ip -v /home/madosa/app/logstash/pipeline:/usr/share/logstash/pipeline -v /home/madosa/app/logstash/config:/usr/share/logstash/config -v /home/madosa/app/logstash/data:/usr/share/logstash/data --name logstash_oss docker.elastic.co/logstash/logstash-oss:6.2.4
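
This is optional and not part of the original setup: since logstash.yml binds the monitoring API to 0.0.0.0, adding a port mapping such as -p 9600:9600 to the docker create command above lets you check the pipeline state from the host once the container is running:

curl -s "http://localhost:9600/_node/pipelines?pretty"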

Start the Logstash container

sudo docker start logstash_oss

Check the container logs

sudo docker logs -f logstash_oss
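
Once the log shows that the pipeline has started and the JDBC run has completed, you can verify that the documents reached Elasticsearch by querying the index directly (replace ip with your Elasticsearch host; schema is the index name from logstash.conf):

curl -s "http://ip:9200/schema/_count?pretty"
curl -s "http://ip:9200/schema/_search?size=1&pretty"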


