redis - install



Redis Quick Start

This is a quick start document that targets people without prior experience with Redis. Reading this document will help you:

  • Download and compile Redis to start hacking.
  • Use redis-cli to access the server.
  • Use Redis from your application.
  • Understand how Redis persistence works.
  • Install Redis more properly.
  • Find out what to read next to understand more about Redis.

Installing Redis

The suggested way of installing Redis is compiling it from sources as Redis has no dependencies other than a working GCC compiler and libc. Installing it using the package manager of your Linux distribution is somewhat discouraged as usually the available version is not the latest.

You can either download the latest Redis tar ball from the redis.io web site, or you can alternatively use this special URL that always points to the latest stable Redis version, that is, http://download.redis.io/redis-stable.tar.gz.

In order to compile Redis, follow these simple steps:

wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make

At this point you can check whether your build works correctly by typing make test, but this is an optional step. After compilation, the src directory inside the Redis distribution is populated with the different executables that are part of Redis:

  • redis-server is the Redis Server itself.
  • redis-cli is the command line interface utility to talk with Redis.
  • redis-benchmark is used to check Redis performance.
  • redis-check-aof and redis-check-dump are useful in the rare event of corrupted data files.

It is a good idea to copy both the Redis server and the command line interface into proper places using the following commands:

  • sudo cp redis-server /usr/local/bin/
  • sudo cp redis-cli /usr/local/bin/

In the following documentation I assume that /usr/local/bin is in your PATH environment variable so that you can execute both binaries without specifying the full path.

Starting Redis

The simplest way to start the Redis server is just executing the redis-server binary without any argument.

$ redis-server
[28550] 01 Aug 19:29:28 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[28550] 01 Aug 19:29:28 * Server started, Redis version 2.2.12
[28550] 01 Aug 19:29:28 * The server is now ready to accept connections on port 6379
... and so forth ...

In the above example Redis was started without any explicit configuration file, so all the parameters will use the internal default. This is perfectly fine if you are starting Redis just to play a bit with it or for development, but for production environments you should use a configuration file.

To start Redis with a configuration file, just give the full path of the configuration file as the only Redis argument, for instance: redis-server /etc/redis.conf. You can use the redis.conf file included in the root directory of the Redis source code distribution as a template to write your configuration file.

Check if Redis is working

External programs talk to Redis using a TCP socket and a Redis-specific protocol. This protocol is implemented in the Redis client libraries for the different programming languages. However, to make hacking with Redis simpler, Redis provides a command line utility that can be used to send commands to Redis. This program is called redis-cli.

The first thing to do in order to check if Redis is working properly is sending a PING command using redis-cli:

$ redis-cli ping
PONG

Running redis-cli followed by a command name and its arguments will send this command to the Redis instance running on localhost at port 6379. You can change the host and port used by redis-cli; just try the --help option to check the usage information.
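For example, to talk to an instance on another machine (the host and port below are hypothetical):

$ redis-cli -h 192.168.0.10 -p 6380 ping
PONG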

Another interesting way to run redis-cli is without arguments: the program starts in an interactive mode where you can type different commands:

$ redis-cli                                                                
redis 127.0.0.1:6379> ping
PONG
redis 127.0.0.1:6379> set mykey somevalue
OK
redis 127.0.0.1:6379> get mykey
"somevalue"

At this point you can talk with Redis. It is the right time to pause this tutorial and start the fifteen minute introduction to Redis data types in order to learn a few Redis commands. Otherwise, if you already know a few basic Redis commands, you can keep reading.
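As a small taste of what that introduction covers (an illustrative session, not part of the original text), lists and sets look like this at the prompt:

$ redis-cli
redis 127.0.0.1:6379> lpush mylist a b c
(integer) 3
redis 127.0.0.1:6379> lrange mylist 0 -1
1) "c"
2) "b"
3) "a"
redis 127.0.0.1:6379> sadd myset x y
(integer) 2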

Using Redis from your application

Of course using Redis just from the command line interface is not enough, as the goal is to use it from your application. In order to do so you need to download and install a Redis client library for your programming language. You'll find a full list of clients for different languages on this page.

For instance, if you happen to use the Ruby programming language, our best advice is to use the Redis-rb client. You can install it using the command gem install redis (make sure to install the SystemTimer gem as well).

These instructions are Ruby specific, but the client libraries for many popular languages look quite similar: you create a Redis object and execute commands by calling methods. A short interactive example using Ruby:

>> require 'rubygems'
=> false
>> require 'redis'
=> true
>> r = Redis.new
=> #<Redis client v2.2.1 connected to redis://127.0.0.1:6379/0 (Redis v2.3.8)>
>> r.ping
=> "PONG"
>> r.set('foo','bar')
=> "OK"
>> r.get('foo')
=> "bar"

Redis persistence

You can learn how Redis persistence works on this page. What is important to understand for a quick start is that, with the default configuration, Redis spontaneously saves the dataset only from time to time (for instance, after at least five minutes if you have at least 100 changes in your data). So if you want your database to persist and be reloaded after a restart, make sure to call the SAVE command manually every time you want to force a data set snapshot. Otherwise, make sure to shut down the database using the SHUTDOWN command:

$ redis-cli shutdown

This way Redis will make sure to save the data on disk before quitting. Reading the persistence page is strongly suggested in order to better understand how Redis persistence works.
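Both snapshot commands can be issued via redis-cli: SAVE blocks the server while the dump is written, while BGSAVE (also a standard Redis command) forks and saves in the background:

$ redis-cli save
OK
$ redis-cli bgsave
Background saving started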

Installing Redis more properly

Running Redis from the command line is fine just to hack a bit with it or for development. However at some point you'll have some actual application to run on a real server. For this kind of usage you have two different choices:

  • Run Redis using screen.
  • Install Redis in your Linux box in a proper way using an init script, so that after a restart everything will start again properly.

A proper install using an init script is strongly suggested. The following instructions can be used to perform a proper installation using the init script shipped with Redis 2.4 in a Debian or Ubuntu based distribution.

We assume you already copied redis-server and redis-cli executables under /usr/local/bin.

  • Create a directory where to store your Redis config files and your data:

    sudo mkdir /etc/redis
    sudo mkdir /var/redis
    
  • Copy the init script that you'll find in the Redis distribution under the utils directory into /etc/init.d. We suggest calling it with the name of the port where you are running this instance of Redis. For example:

    sudo cp utils/redis_init_script /etc/init.d/redis_6379
    
  • Edit the init script.

    sudo vi /etc/init.d/redis_6379
    

Make sure to modify REDISPORT in the script according to the port you are using. Both the pid file path and the configuration file name depend on the port number.

  • Copy the template configuration file you'll find in the root directory of the Redis distribution into /etc/redis/ using the port number as name, for instance:

    sudo cp redis.conf /etc/redis/6379.conf
    
  • Create a directory inside /var/redis that will work as data and working directory for this Redis instance:

    sudo mkdir /var/redis/6379
    
  • Edit the configuration file, making sure to perform the following changes (a scripted version of these edits is sketched after this procedure):

    • Set daemonize to yes (by default it is set to no).
    • Set the pidfile to /var/run/redis_6379.pid (modify the port if needed).
    • Change the port accordingly. In our example it is not needed, as the default port is already 6379.
    • Set your preferred loglevel.
    • Set the logfile to /var/log/redis_6379.log.
    • Set the dir to /var/redis/6379 (a very important step!).
  • Finally add the new Redis init script to all the default runlevels using the following command:

    sudo update-rc.d redis_6379 defaults
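As a convenience, the configuration edits listed above could be applied with sed. This is only a sketch that assumes the stock redis.conf defaults (daemonize no, and single pidfile/logfile/dir lines); review the file afterwards:

sudo sed -i \
  -e 's/^daemonize no/daemonize yes/' \
  -e 's|^pidfile .*|pidfile /var/run/redis_6379.pid|' \
  -e 's|^logfile .*|logfile /var/log/redis_6379.log|' \
  -e 's|^dir .*|dir /var/redis/6379|' \
  /etc/redis/6379.conf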
    

You are done! Now you can try running your instance with:

/etc/init.d/redis_6379 start

Make sure that everything is working as expected:

  • Try pinging your instance with redis-cli (the first checks are combined in the sketch after this list).
  • Do a test save with redis-cli save and check that the dump file is correctly stored into /var/redis/6379/ (you should find a file called dump.rdb).
  • Check that your Redis instance is correctly logging to the log file.
  • If it's a new machine that you can reboot without problems, make sure that everything is still working after a reboot.
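The first checks combined, with the expected output (a sketch):

$ redis-cli ping
PONG
$ redis-cli save
OK
$ ls /var/redis/6379/
dump.rdb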

Note: in the above instructions we skipped many Redis configuration parameters that you may want to change, for instance in order to use AOF persistence instead of RDB persistence, or to set up replication, and so forth. Make sure to read the redis.conf file (which is heavily commented) and the other documentation you can find on this web site for more information.



source - http://redis.io/topics/quickstart





Installation

Download, extract and compile Redis with:

$ wget http://download.redis.io/releases/redis-2.6.16.tar.gz
$ tar xzf redis-2.6.16.tar.gz
$ cd redis-2.6.16
$ make

The binaries that are now compiled are available in the src directory. Run Redis with:

$ src/redis-server

You can interact with Redis using the built-in client:

$ src/redis-cli
redis> set foo bar
OK
redis> get foo
"bar"

Are you new to Redis? Try our online, interactive tutorial.

Where's Redis Cluster?

Redis Cluster, the distributed version of Redis, is making a lot of progress and will be released as a beta at the start of Q3 2013, with a stable release before the end of 2013. You can watch a video about what Redis Cluster can currently do. The source code of Redis Cluster is publicly available in the unstable branch; check the cluster.c source code.


source - http://redis.io/download






Let's install redis, the key-value store so popular it needs no introduction.


Download, extract, make, make install. Simple.



$ cd /usr/local/src/
$ wget http://download.redis.io/releases/redis-2.8.5.tar.gz
$ tar xzf redis-2.8.5.tar.gz
$ cd redis-2.8.5
$ make
$ make install



Running Redis

$ src/redis-server




Redis started successfully, but wait, what is this warning?


WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.


This warning tells you to configure what the kernel should do when physical memory is fully used.


For details, see here:


http://charsyam.wordpress.com/2013/01/24/%EC%9E%85-%EA%B0%9C%EB%B0%9C-redis-vm-overcommit_memory-%EC%9D%84-%EC%95%84%EC%8B%9C%EB%82%98%EC%9A%94/
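To apply the fix the warning suggests (a quick sketch; needs root privileges):

$ echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl vm.overcommit_memory=1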


You can later change the maximum memory redis may use by editing the conf file.

If you operate redis for a use case where response time matters, you must set this value smaller than physical memory, so that only memory is used and you can expect the fast response times memory provides.

If you set it larger than physical memory, swap kicks in automatically and you can no longer expect the fast response times of a memory-only setup.
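For instance, maxmemory can be set in the conf file or changed at runtime with CONFIG SET (the 2gb value below is only an illustration):

$ redis-cli config set maxmemory 2gb
OK
$ redis-cli config get maxmemory
1) "maxmemory"
2) "2147483648"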





If you are only going to use redis for simple testing, you can stop after make install. But to generate the various configuration files in their proper locations for production use, you need to run a script that redis already ships with: the install_server.sh file in the utils folder.

$ cd utils
$ ./install_server.sh


It asks a few questions; just pressing Enter accepts the defaults.

For the port question, the default 6379 is fine: just press Enter.
For the config file location, /etc/redis/6379.conf: just press Enter.
For the log file, /var/log/redis_6379.log: just press Enter.
For the data directory, /var/lib/redis/6379: just press Enter.
For the question about the directory where redis is installed: also just press Enter.

Now all the setup needed to run redis properly as a background daemon is done.


However, if the following error occurs while running the script (it seems to happen on CentOS):

./install_server.sh: line 178: update-rc.d: command not found

open install_server.sh and, on script lines 162 and 176, which read

if [ !`which chkconfig` ] ; then

add a space next to the exclamation mark, like this:

if [ ! `which chkconfig` ] ; then

Save the file and run the script again, and the problem is solved.


From now on, the commands to start and stop the redis daemon are:

$ /etc/init.d/redis_6379 start
$ /etc/init.d/redis_6379 stop

If you did not install on port 6379, put your port number in place of 6379.




To use it from the console:

$ src/redis-cli
redis> set foo bar
OK
redis> get foo
"bar"

You can test it simply like this.



If your redis instance sits in an openly reachable environment, open the configuration file:

$ vi /etc/redis/6379.conf

then uncomment requirepass and set a password; redis can then only be used after authenticating with that password.
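A client must then authenticate before issuing commands (mypassword below is a placeholder; the error text is what Redis 2.8 prints):

$ redis-cli
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth mypassword
OK
127.0.0.1:6379> ping
PONG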



Redis installation done~



source - http://trend21c.tistory.com/1645






* redis install

# wget http://download.redis.io/redis-stable.tar.gz

# tar xvzf redis-stable.tar.gz

# cd redis-stable

# make


cd src && make all

make[1]: Entering directory `/usr/local/src/redis-stable/src'

rm -rf redis-server redis-sentinel redis-cli redis-benchmark redis-check-dump redis-check-aof *.o *.gcda *.gcno *.gcov redis.info lcov-html

(cd ../deps && make distclean)

make[2]: Entering directory `/usr/local/src/redis-stable/deps'

(cd hiredis && make clean) > /dev/null || true

(cd linenoise && make clean) > /dev/null || true

(cd lua && make clean) > /dev/null || true

(cd jemalloc && [ -f Makefile ] && make distclean) > /dev/null || true

(rm -f .make-*)

make[2]: Leaving directory `/usr/local/src/redis-stable/deps'

(rm -f .make-*)

echo STD=-std=c99 -pedantic >> .make-settings

echo WARN=-Wall >> .make-settings

echo OPT=-O2 >> .make-settings

echo MALLOC=jemalloc >> .make-settings

echo CFLAGS= >> .make-settings

echo LDFLAGS= >> .make-settings

echo REDIS_CFLAGS= >> .make-settings

echo REDIS_LDFLAGS= >> .make-settings

echo PREV_FINAL_CFLAGS=-std=c99 -pedantic -Wall -O2 -g -rdynamic -ggdb   -I../deps/hiredis -I../deps/linenoise -I../deps/lua/src -DUSE_JEMALLOC -I../deps/jemalloc/include >> .make-settings

echo PREV_FINAL_LDFLAGS=  -g -rdynamic -ggdb >> .make-settings

(cd ../deps && make hiredis linenoise lua jemalloc)

make[2]: Entering directory `/usr/local/src/redis-stable/deps'

(cd hiredis && make clean) > /dev/null || true

(cd linenoise && make clean) > /dev/null || true

(cd lua && make clean) > /dev/null || true

(cd jemalloc && [ -f Makefile ] && make distclean) > /dev/null || true

(rm -f .make-*)

(echo "" > .make-cflags)

(echo "" > .make-ldflags)

MAKE hiredis

cd hiredis && make static

make[3]: Entering directory `/usr/local/src/redis-stable/deps/hiredis'

cc -std=c99 -pedantic -c -O3 -fPIC  -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb  net.c

cc -std=c99 -pedantic -c -O3 -fPIC  -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb  hiredis.c

cc -std=c99 -pedantic -c -O3 -fPIC  -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb  sds.c

cc -std=c99 -pedantic -c -O3 -fPIC  -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb  async.c

ar rcs libhiredis.a net.o hiredis.o sds.o async.o

make[3]: Leaving directory `/usr/local/src/redis-stable/deps/hiredis'

MAKE linenoise

cd linenoise && make

make[3]: Entering directory `/usr/local/src/redis-stable/deps/linenoise'

cc  -Wall -Os -g  -c linenoise.c

make[3]: Leaving directory `/usr/local/src/redis-stable/deps/linenoise'

MAKE lua

cd lua/src && make all CFLAGS="-O2 -Wall -DLUA_ANSI " MYLDFLAGS=""

make[3]: Entering directory `/usr/local/src/redis-stable/deps/lua/src'

gcc -O2 -Wall -DLUA_ANSI    -c -o lapi.o lapi.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lcode.o lcode.c

gcc -O2 -Wall -DLUA_ANSI    -c -o ldebug.o ldebug.c

gcc -O2 -Wall -DLUA_ANSI    -c -o ldo.o ldo.c

gcc -O2 -Wall -DLUA_ANSI    -c -o ldump.o ldump.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lfunc.o lfunc.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lgc.o lgc.c

gcc -O2 -Wall -DLUA_ANSI    -c -o llex.o llex.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lmem.o lmem.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lobject.o lobject.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lopcodes.o lopcodes.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lparser.o lparser.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lstate.o lstate.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lstring.o lstring.c

gcc -O2 -Wall -DLUA_ANSI    -c -o ltable.o ltable.c

gcc -O2 -Wall -DLUA_ANSI    -c -o ltm.o ltm.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lundump.o lundump.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lvm.o lvm.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lzio.o lzio.c

gcc -O2 -Wall -DLUA_ANSI    -c -o strbuf.o strbuf.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lauxlib.o lauxlib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lbaselib.o lbaselib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o ldblib.o ldblib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o liolib.o liolib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lmathlib.o lmathlib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o loslib.o loslib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o ltablib.o ltablib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lstrlib.o lstrlib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o loadlib.o loadlib.c

gcc -O2 -Wall -DLUA_ANSI    -c -o linit.o linit.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lua_cjson.o lua_cjson.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lua_struct.o lua_struct.c

gcc -O2 -Wall -DLUA_ANSI    -c -o lua_cmsgpack.o lua_cmsgpack.c

lua_cmsgpack.c: In function ‘table_is_an_array’:

lua_cmsgpack.c:370:21: warning: variable ‘max’ set but not used [-Wunused-but-set-variable]

ar rcu liblua.a lapi.o lcode.o ldebug.o ldo.o ldump.o lfunc.o lgc.o llex.o lmem.o lobject.o lopcodes.o lparser.o lstate.o lstring.o ltable.o ltm.o lundump.o lvm.o lzio.o strbuf.o lauxlib.o lbaselib.o ldblib.o liolib.o lmathlib.o loslib.o ltablib.o lstrlib.o loadlib.o linit.o lua_cjson.o lua_struct.o lua_cmsgpack.o # DLL needs all object files

ranlib liblua.a

gcc -O2 -Wall -DLUA_ANSI    -c -o lua.o lua.c

gcc -o lua  lua.o liblua.a -lm 

gcc -O2 -Wall -DLUA_ANSI    -c -o luac.o luac.c

gcc -O2 -Wall -DLUA_ANSI    -c -o print.o print.c

gcc -o luac  luac.o print.o liblua.a -lm 

make[3]: Leaving directory `/usr/local/src/redis-stable/deps/lua/src'

MAKE jemalloc

cd jemalloc && ./configure --with-jemalloc-prefix=je_ --enable-cc-silence CFLAGS="-std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops " LDFLAGS=""

checking for xsltproc... /bin/xsltproc

checking for gcc... gcc

checking whether the C compiler works... yes

checking for C compiler default output file name... a.out

checking for suffix of executables... 

checking whether we are cross compiling... no

checking for suffix of object files... o

checking whether we are using the GNU C compiler... yes

checking whether gcc accepts -g... yes

checking for gcc option to accept ISO C89... none needed

checking how to run the C preprocessor... gcc -E

checking for grep that handles long lines and -e... /bin/grep

checking for egrep... /bin/grep -E

checking for ANSI C header files... yes

checking for sys/types.h... yes

checking for sys/stat.h... yes

checking for stdlib.h... yes

checking for string.h... yes

checking for memory.h... yes

checking for strings.h... yes

checking for inttypes.h... yes

checking for stdint.h... yes

checking for unistd.h... yes

checking size of void *... 4

checking size of int... 4

checking size of long... 4

checking size of intmax_t... 8

checking build system type... i686-pc-linux-gnu

checking host system type... i686-pc-linux-gnu

checking whether __asm__ is compilable... yes

checking whether __attribute__ syntax is compilable... yes

checking whether compiler supports -fvisibility=hidden... yes

checking whether compiler supports -Werror... yes

checking whether tls_model attribute is compilable... no

checking for a BSD-compatible install... /bin/install -c

checking for ranlib... ranlib

checking for ar... /bin/ar

checking for ld... /bin/ld

checking for autoconf... no

checking for memalign... yes

checking for valloc... yes

checking configured backtracing method... N/A

checking for sbrk... yes

checking whether utrace(2) is compilable... no

checking whether valgrind is compilable... no

checking STATIC_PAGE_SHIFT... 12

checking pthread.h usability... yes

checking pthread.h presence... yes

checking for pthread.h... yes

checking for pthread_create in -lpthread... yes

checking for _malloc_thread_cleanup... no

checking for _pthread_mutex_init_calloc_cb... no

checking for TLS... yes

checking whether a program using ffsl is compilable... yes

checking whether atomic(9) is compilable... no

checking whether Darwin OSAtomic*() is compilable... no

checking whether to force 32-bit __sync_{add,sub}_and_fetch()... no

checking whether to force 64-bit __sync_{add,sub}_and_fetch()... no

checking whether Darwin OSSpin*() is compilable... no

checking for stdbool.h that conforms to C99... yes

checking for _Bool... yes

configure: creating ./config.status

config.status: creating Makefile

config.status: creating doc/html.xsl

config.status: creating doc/manpages.xsl

config.status: creating doc/jemalloc.xml

config.status: creating include/jemalloc/jemalloc.h

config.status: creating include/jemalloc/internal/jemalloc_internal.h

config.status: creating test/jemalloc_test.h

config.status: creating config.stamp

config.status: creating bin/jemalloc.sh

config.status: creating include/jemalloc/jemalloc_defs.h

config.status: executing include/jemalloc/internal/size_classes.h commands

===============================================================================

jemalloc version   : 3.2.0-0-g87499f6748ebe4817571e817e9f680ccb5bf54a9

library revision   : 1


CC                 : gcc

CPPFLAGS           :  -D_GNU_SOURCE -D_REENTRANT

CFLAGS             : -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -fvisibility=hidden

LDFLAGS            : 

LIBS               :  -lm -lpthread

RPATH_EXTRA        : 


XSLTPROC           : /bin/xsltproc

XSLROOT            : /usr/share/sgml/docbook/xsl-stylesheets


PREFIX             : /usr/local

BINDIR             : /usr/local/bin

INCLUDEDIR         : /usr/local/include

LIBDIR             : /usr/local/lib

DATADIR            : /usr/local/share

MANDIR             : /usr/local/share/man


srcroot            : 

abs_srcroot        : /usr/local/src/redis-stable/deps/jemalloc/

objroot            : 

abs_objroot        : /usr/local/src/redis-stable/deps/jemalloc/


JEMALLOC_PREFIX    : je_

JEMALLOC_PRIVATE_NAMESPACE

                   : 

install_suffix     : 

autogen            : 0

experimental       : 1

cc-silence         : 1

debug              : 0

stats              : 1

prof               : 0

prof-libunwind     : 0

prof-libgcc        : 0

prof-gcc           : 0

tcache             : 1

fill               : 1

utrace             : 0

valgrind           : 0

xmalloc            : 0

mremap             : 0

munmap             : 0

dss                : 0

lazy_lock          : 0

tls                : 1

===============================================================================

cd jemalloc && make CFLAGS="-std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops " LDFLAGS="" lib/libjemalloc.a

make[3]: Entering directory `/usr/local/src/redis-stable/deps/jemalloc'

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/jemalloc.o src/jemalloc.c

src/jemalloc.c: In function ‘je_realloc’:

src/jemalloc.c:1082:9: warning: variable ‘old_rzsize’ set but not used [-Wunused-but-set-variable]

src/jemalloc.c: In function ‘je_free’:

src/jemalloc.c:1230:10: warning: variable ‘rzsize’ set but not used [-Wunused-but-set-variable]

src/jemalloc.c: In function ‘je_rallocm’:

src/jemalloc.c:1477:9: warning: variable ‘old_rzsize’ set but not used [-Wunused-but-set-variable]

src/jemalloc.c: In function ‘je_dallocm’:

src/jemalloc.c:1622:9: warning: variable ‘rzsize’ set but not used [-Wunused-but-set-variable]

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/arena.o src/arena.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/atomic.o src/atomic.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/base.o src/base.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/bitmap.o src/bitmap.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/chunk.o src/chunk.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/chunk_dss.o src/chunk_dss.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/chunk_mmap.o src/chunk_mmap.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/ckh.o src/ckh.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/ctl.o src/ctl.c

src/ctl.c: In function ‘epoch_ctl’:

src/ctl.c:1112:11: warning: variable ‘newval’ set but not used [-Wunused-but-set-variable]

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/extent.o src/extent.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/hash.o src/hash.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/huge.o src/huge.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/mb.o src/mb.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/mutex.o src/mutex.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/prof.o src/prof.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/quarantine.o src/quarantine.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/rtree.o src/rtree.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/stats.o src/stats.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/tcache.o src/tcache.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/util.o src/util.c

gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops  -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/tsd.o src/tsd.c

ar crus lib/libjemalloc.a src/jemalloc.o src/arena.o src/atomic.o src/base.o src/bitmap.o src/chunk.o src/chunk_dss.o src/chunk_mmap.o src/ckh.o src/ctl.o src/extent.o src/hash.o src/huge.o src/mb.o src/mutex.o src/prof.o src/quarantine.o src/rtree.o src/stats.o src/tcache.o src/util.o src/tsd.o

make[3]: Leaving directory `/usr/local/src/redis-stable/deps/jemalloc'

make[2]: Leaving directory `/usr/local/src/redis-stable/deps'

    CC adlist.o

    CC ae.o

    CC anet.o

    CC dict.o

    CC redis.o

    CC sds.o

    CC zmalloc.o

    CC lzf_c.o

    CC lzf_d.o

    CC pqsort.o

    CC zipmap.o

    CC sha1.o

    CC ziplist.o

    CC release.o

    CC networking.o

    CC util.o

    CC object.o

    CC db.o

    CC replication.o

    CC rdb.o

    CC t_string.o

    CC t_list.o

    CC t_set.o

    CC t_zset.o

    CC t_hash.o

    CC config.o

    CC aof.o

    CC pubsub.o

    CC multi.o

    CC debug.o

    CC sort.o

    CC intset.o

    CC syncio.o

    CC migrate.o

    CC endianconv.o

    CC slowlog.o

    CC scripting.o

    CC bio.o

    CC rio.o

    CC rand.o

    CC memtest.o

    CC crc64.o

    CC bitops.o

    CC sentinel.o

    LINK redis-server

    INSTALL redis-sentinel

    CC redis-cli.o

    LINK redis-cli

    CC redis-benchmark.o

    LINK redis-benchmark

    CC redis-check-dump.o

    LINK redis-check-dump

    CC redis-check-aof.o

    LINK redis-check-aof


Hint: To run 'make test' is a good idea ;)


make[1]: Leaving directory `/usr/local/src/redis-stable/src'


*  redis start

# cd src

# ./redis-server --port 6379







* rpm install

download rpm

http://rpmfind.net/linux/rpm2html/search.php?query=redis


# rpm -Uvh redis-2.6.16-1.fc17.remi.i686.rpm 

warning: redis-2.6.16-1.fc17.remi.i686.rpm: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY

error: Failed dependencies:

libtcmalloc.so.4 is needed by redis-2.6.16-1.fc17.remi.i686

# yum install libtcmalloc.so.4

Loaded plugins: langpacks, presto, refresh-packagekit

Resolving Dependencies

--> Running transaction check

---> Package gperftools-libs.i686 0:2.0-9.fc17 will be installed

--> Processing Dependency: libunwind.so.8 for package: gperftools-libs-2.0-9.fc17.i686

--> Running transaction check

---> Package libunwind.i686 0:1.0.1-3.fc17 will be installed

--> Finished Dependency Resolution


Dependencies Resolved


# rpm -qi redis

Name        : redis

Version     : 2.6.16

Release     : 1.fc17.remi

Architecture: i686

Install Date: Fri 04 Oct 2013 03:58:37 PM KST

Group       : Applications/Databases

Size        : 891796

License     : BSD

Signature   : DSA/SHA1, Mon 09 Sep 2013 12:34:49 AM KST, Key ID 004e6f4700f97f56

Source RPM  : redis-2.6.16-1.fc17.remi.src.rpm

Build Date  : Mon 09 Sep 2013 12:23:49 AM KST

Build Host  : schrodingerscat.famillecollet.com

Relocations : (not relocatable)

Packager    : http://blog.famillecollet.com/

Vendor      : Remi Collet

URL         : http://redis.io

Summary     : A persistent key-value database

Description :

Redis is an advanced key-value store. It is similar to memcached but the data

set is not volatile, and values can be strings, exactly like in memcached, but

also lists, sets, and ordered sets. All this data types can be manipulated with

atomic operations to push/pop elements, add/remove elements, perform server side

union, intersection, difference between sets, and so forth. Redis supports

different kind of sorting abilities.


# rpm -ql redis

/etc/logrotate.d/redis

/etc/redis.conf

/usr/bin/redis-benchmark

/usr/bin/redis-check-aof

/usr/bin/redis-check-dump

/usr/bin/redis-cli

/usr/lib/systemd/system/redis.service

/usr/sbin/redis-server

/usr/share/doc/redis-2.6.16

/usr/share/doc/redis-2.6.16/00-RELEASENOTES

/usr/share/doc/redis-2.6.16/BUGS

/usr/share/doc/redis-2.6.16/CONTRIBUTING

/usr/share/doc/redis-2.6.16/COPYING

/usr/share/doc/redis-2.6.16/README

/var/lib/redis

/var/log/redis

/var/run/redis


# service redis start

Redirecting to /bin/systemctl start  redis.service

# ps -ef | grep redis

redis    10575     1  0 16:08 ?        00:00:00 /usr/sbin/redis-server /etc/redis.conf

root     10579  2556  0 16:08 pts/0    00:00:00 grep --color=auto redis

# netstat -antp | grep redis

tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      10575/redis-server  

# service redis stop

Redirecting to /bin/systemctl stop  redis.service

# ps -ef | grep redis

root     10594  2556  0 16:09 pts/0    00:00:00 grep --color=auto redis

# netstat -antp | grep redis






@@ Installation

# wget http://download.redis.io/releases/redis-2.8.13.tar.gz

# tar xzf redis-2.8.13.tar.gz

# cd redis-2.8.13

# make


# cd ..

# mv redis-2.8.13 /usr/local/

# cd /usr/local

# ln -s redis-2.8.13 redis


# vi /etc/profile

...

# for redis

export REDIS_HOME=/usr/local/redis

export PATH=$PATH:$REDIS_HOME/src


# . /etc/profile



@@ Kernel parameter setup

# vi /etc/sysctl.conf

...

vm.overcommit_memory = 1


# sysctl vm.overcommit_memory=1



@@ Service setup

# /usr/local/redis/utils/install_server.sh

Welcome to the redis service installer

This script will help you easily set up a running redis server


Please select the redis port for this instance: [6379] 

Selecting default: 6379

Please select the redis config file name [/etc/redis/6379.conf] 

Selected default - /etc/redis/6379.conf

Please select the redis log file name [/var/log/redis_6379.log] 

Selected default - /var/log/redis_6379.log

Please select the data directory for this instance [/var/lib/redis/6379] 

Selected default - /var/lib/redis/6379

Please select the redis executable path [/usr/local/redis/src/redis-server] 

Selected config:

Port           : 6379

Config file    : /etc/redis/6379.conf

Log file       : /var/log/redis_6379.log

Data dir       : /var/lib/redis/6379

Executable     : /usr/local/redis/src/redis-server

Cli Executable : /usr/local/redis/src/redis-cli

Is this ok? Then press ENTER to go on or Ctrl-C to abort.

Copied /tmp/6379.conf => /etc/init.d/redis_6379

Installing service...

Successfully added to chkconfig!

Successfully added to runlevels 345!

Starting Redis server...

Installation successful!


@@ Password setup

# vi /etc/redis/6379.conf

...

requirepass foobared


@@ Service start/stop

# service redis_6379 start/stop


@@ Service check

# redis-cli

127.0.0.1:6379> auth foobared

OK

127.0.0.1:6379> ping

PONG






Apache Tomcat 6.0

Clustering/Session Replication HOW-TO

Important Note

You can also check the configuration reference documentation.

For the impatient

Simply add

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
to your <Engine> or your <Host> element to enable clustering.

Using the above configuration will enable all-to-all session replication using the DeltaManager to replicate session deltas. By all-to-all we mean that the session gets replicated to all the other nodes in the cluster. This works great for smaller clusters, but we don't recommend it for larger clusters (a lot of Tomcat nodes). Also, when using the delta manager it will replicate to all nodes, even nodes that don't have the application deployed.
To get around this problem, you'll want to use the BackupManager. This manager only replicates the session data to one backup node, and only to nodes that have the application deployed. Downside of the BackupManager: not quite as battle tested as the delta manager. 
Here are some of the important default values:
1. Multicast address is 228.0.0.4
2. Multicast port is 45564 (the port and the address together determine cluster membership).
3. The IP broadcasted is java.net.InetAddress.getLocalHost().getHostAddress() (make sure you don't broadcast 127.0.0.1, this is a common error)
4. The TCP port listening for replication messages is the first available server socket in range 4000-4100
5. Two listeners are configured ClusterSessionListener and JvmRouteSessionIDBinderListener
6. Two interceptors are configured TcpFailureDetector and MessageDispatch15Interceptor
The following is the default cluster configuration:

        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="8">

          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>

          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>

            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
          </Channel>

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=""/>
          <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>

          <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>    
    

We will cover this section in more detail later in this document.

Cluster Basics

To run session replication in your Tomcat 6.0 container, the following steps should be completed:

  • All your session attributes must implement java.io.Serializable
  • Uncomment the Cluster element in server.xml
  • If you have defined custom cluster valves, make sure you have the ReplicationValve defined as well under the Cluster element in server.xml
  • If your Tomcat instances are running on the same machine, make sure the tcpListenPort attribute is unique for each instance; in most cases Tomcat is smart enough to resolve this on its own by autodetecting available ports in the range 4000-4100
  • Make sure your web.xml has the <distributable/> element
  • If you are using mod_jk, make sure that the jvmRoute attribute is set at your Engine <Engine name="Catalina" jvmRoute="node01" > and that the jvmRoute attribute value matches your worker name in workers.properties (a quick stickiness check is sketched after this list)
  • Make sure that all nodes have the same time and sync with NTP service!
  • Make sure that your loadbalancer is configured for sticky session mode.
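As a rough stickiness check (not from the original document), you can watch the JSESSIONID cookie a balanced request returns: with mod_jk the jvmRoute worker name appears as the suffix of the session id. The URL below is hypothetical:

$ curl -s -c cookies.txt -o /dev/null http://balancer.example.com/myapp/
$ grep JSESSIONID cookies.txt
# ... JSESSIONID  0123456789ABCDEF.node01   <- suffix should match jvmRoute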

Load balancing can be achieved through many techniques, as seen in the Load Balancing chapter.

Note: Remember that your session state is tracked by a cookie, so your URL must look the same from the outside; otherwise, a new session will be created.

Note: Clustering support currently requires the JDK version 1.5 or later.

The Cluster module uses the Tomcat JULI logging framework, so you can configure logging through the regular logging.properties file. To track messages, you can enable logging on the key: org.apache.catalina.tribes.MESSAGES

Overview

To enable session replication in Tomcat, three different paths can be followed to achieve the exact same thing:

  1. Using session persistence, and saving the session to a shared file system (PersistenceManager + FileStore)
  2. Using session persistence, and saving the session to a shared database (PersistenceManager + JDBCStore)
  3. Using in-memory-replication, using the SimpleTcpCluster that ships with Tomcat 6 (lib/catalina-tribes.jar + lib/catalina-ha.jar)

In this release of session replication, Tomcat can perform an all-to-all replication of session state using the DeltaManager, or perform backup replication to only one node using the BackupManager. The all-to-all replication is an algorithm that is only efficient when the clusters are small. For larger clusters, to use primary-secondary session replication where the session will only be stored at one backup server, simply set up the BackupManager.
Currently you can use the domain worker attribute (mod_jk > 1.2.8) to build cluster partitions with the potential of a more scalable cluster solution with the DeltaManager (you'll need to configure the domain interceptor for this). In order to keep the network traffic down in an all-to-all environment, you can split your cluster into smaller groups. This can be easily achieved by using different multicast addresses for the different groups. A very simple setup would look like this:

        DNS Round Robin
               |
         Load Balancer
          /           \
      Cluster1      Cluster2
      /     \        /     \
  Tomcat1 Tomcat2  Tomcat3 Tomcat4

What is important to mention here is that session replication is only the beginning of clustering. Another popular concept used to implement clusters is farming, i.e., you deploy your app to only one server, and the cluster will distribute the deployment across the entire cluster. These are all capabilities of the FarmWarDeployer (see the cluster example in server.xml).

In the next section we will go deeper into how session replication works and how to configure it.

Cluster Information

Membership is established using multicast heartbeats. Hence, if you wish to subdivide your clusters, you can do this by changing the multicast IP address or port in the <Membership> element.

The heartbeat contains the IP address of the Tomcat node and the TCP port that Tomcat listens to for replication traffic. All data communication happens over TCP.

The ReplicationValve is used to find out when the request has been completed and initiate the replication, if any. Data is only replicated if the session has changed (by calling setAttribute or removeAttribute on the session).

One of the most important performance considerations is synchronous versus asynchronous replication. In synchronous replication mode the request doesn't return until the replicated session has been sent over the wire and reinstantiated on all the other cluster nodes. Synchronous vs. asynchronous is configured using the channelSendOptions flag, which is an integer value. The default value for the SimpleTcpCluster/DeltaManager combo is 8, which is asynchronous. You can read more on the send flag (overview) or the send flag (javadoc). During async replication, the request is returned before the data has been replicated. Async replication yields shorter request times; synchronous replication guarantees the session is replicated before the request returns.

Bind session after crash to failover node

If you are using mod_jk and not using sticky sessions, or for some reason sticky sessions don't work, or you are simply failing over, the session id will need to be modified, as it previously contained the worker id of the previous Tomcat (as defined by jvmRoute in the Engine element). To solve this, we will use the JvmRouteBinderValve.

The JvmRouteBinderValve rewrites the session id to ensure that the next request will remain sticky (and not fall back to random nodes since the worker is no longer available) after a failover. The valve rewrites the JSESSIONID value in the cookie with the same name. Without this valve in place, it will be harder to ensure stickiness in case of a failure for the mod_jk module.

By default, if no valves are configured, the JvmRouteBinderValve is added. The cluster message listener called JvmRouteSessionIDBinderListener is also defined by default and is used to actually rewrite the session id on the other nodes in the cluster once a failover has occurred. Remember, if you are adding your own valves or cluster listeners in server.xml, the defaults are no longer valid; make sure that you add in all the appropriate valves and listeners as defined by the default.

Hint:
With the attribute sessionIdAttribute you can change the request attribute name that contains the old session id. The default attribute name is org.apache.catalina.cluster.session.JvmRouteOrignalSessionID.

Trick:
You can enable this mod_jk turnover mode via JMX before you drop a node, to move all sessions to backup nodes! Set enabled to true on all JvmRouteBinderValve backups, disable the worker at mod_jk, then drop the node and restart it! Then enable the mod_jk worker and disable the JvmRouteBinderValves again. This use case means that only requested sessions are migrated.

Configuration Example
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="6">

          <Manager className="org.apache.catalina.ha.session.BackupManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"
                   mapSendOptions="6"/>
          <!--
          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>
          -->        
          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="5000"
                      selectorTimeout="100"
                      maxThreads="6"/>

            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
          </Channel>

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>

          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>
    

Break it down!!

        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="6">
    

The main element; inside this element all cluster details can be configured. The channelSendOptions is the flag that is attached to each message sent by the SimpleTcpCluster class or any objects that are invoking the SimpleTcpCluster.send method. The description of the send flags is available at our javadoc site. The DeltaManager sends information using the SimpleTcpCluster.send method, while the backup manager sends it directly through the channel. 
For more info, Please visit the reference documentation

          <Manager className="org.apache.catalina.ha.session.BackupManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"
                   mapSendOptions="6"/>
          <!--
          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>
          -->        
    

This is a template for the manager configuration that will be used if no manager is defined in the <Context> element. In Tomcat 5.x each webapp marked distributable had to use the same manager; this is no longer the case: since Tomcat 6 you can define a manager class for each webapp, so you can mix managers in your cluster. Obviously the manager on one node's application has to correspond with the same manager on the same application on the other node. If no manager has been specified for the webapp, and the webapp is marked <distributable/>, Tomcat will take this manager configuration and create a manager instance by cloning this configuration. 
For more info, Please visit the reference documentation

          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    

The channel element is Tribes, the group communication framework used inside Tomcat. This element encapsulates everything that has to do with communication and membership logic. 
For more info, Please visit the reference documentation

            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
    

Membership is done using multicasting. Please note that Tribes also supports static memberships using the StaticMembershipInterceptor if you want to extend your membership to points beyond multicasting. The address attribute is the multicast address used and the port is the multicast port. These two together create the cluster separation. If you want a QA cluster and a production cluster, the easiest config is to have the QA cluster on a separate multicast address/port combination from the production cluster.
The membership component broadcasts the TCP address/port of itself to the other nodes so that communication between nodes can be done over TCP. Please note that the address being broadcast is the one of the Receiver.address attribute. 
For more info, Please visit the reference documentation

            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="5000"
                      selectorTimeout="100"
                      maxThreads="6"/>
    

In Tribes the logic of sending and receiving data has been broken into two functional components. The Receiver, as the name suggests, is responsible for receiving messages. Since the Tribes stack is threadless (a popular improvement now adopted by other frameworks as well), there is a thread pool in this component that has a maxThreads and minThreads setting.
The address attribute is the host address that will be broadcasted by the membership component to the other nodes. 
For more info, Please visit the reference documentation

            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
    

The sender component, as the name indicates, is responsible for sending messages to other nodes. The sender has a shell component, the ReplicationTransmitter, but the real work is done in the sub-component, Transport. Tribes supports having a pool of senders, so that messages can be sent in parallel, and if the NIO sender is used, messages can also be sent concurrently.
Concurrently means one message to multiple senders at the same time; in parallel means multiple messages to multiple senders at the same time. 
For more info, Please visit the reference documentation

            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
          </Channel>
    

Tribes uses a stack to send messages through. Each element in the stack is called an interceptor, and works much like the valves do in the Tomcat servlet container. Using interceptors, logic can be broken into more manageable pieces of code. The interceptors configured above are:
TcpFailureDetector - verifies crashed members through TCP; if multicast packets get dropped, this interceptor protects against false positives, i.e. a node being marked as crashed even though it is still alive and running.
MessageDispatch15Interceptor - dispatches messages to a thread (thread pool) to send messages asynchronously.
ThroughputInterceptor - prints out simple stats on message traffic.
Please note that the order of interceptors is important. The way they are defined in server.xml is the way they are represented in the channel stack. Think of it as a linked list, with the head being the first interceptor and the tail the last. 
For more info, Please visit the reference documentation

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
    

The cluster uses valves to track requests to web applications; we've mentioned the ReplicationValve and the JvmRouteBinderValve above. The <Cluster> element itself is not part of the pipeline in Tomcat; instead the cluster adds the valve to its parent container. If the <Cluster> element is configured in the <Engine> element, the valves get added to the engine, and so on. 
For more info, Please visit the reference documentation

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>
    

The default Tomcat cluster supports farmed deployment, i.e., the cluster can deploy and undeploy applications on the other nodes. The state of this component is currently in flux but will be addressed soon. There was a change in the deployment algorithm between Tomcat 5.0 and 5.5, and at that point the logic of this component changed so that the deploy dir has to match the webapps directory. 
For more info, Please visit the reference documentation

          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>
    

Since the SimpleTcpCluster itself is a sender and receiver of the Channel object, components can register themselves as listeners to the SimpleTcpCluster. The listener above, ClusterSessionListener, listens for DeltaManager replication messages and applies the deltas to the manager, which in turn applies them to the session. 
For more info, Please visit the reference documentation

Cluster Architecture

Component Levels:

         Server
           |
         Service
           |
         Engine
           |  \ 
           |  --- Cluster --*
           |
         Host
           |
         ------
        /      \
     Cluster    Context(1-N)                 
        |             \
        |             -- Manager
        |                   \
        |                   -- DeltaManager
        |                   -- BackupManager
        |
     ---------------------------
        |                       \
      Channel                    \
    ----------------------------- \
        |                          \
     Interceptor_1 ..               \
        |                            \
     Interceptor_N                    \
    -----------------------------      \
     |          |         |             \
   Receiver    Sender   Membership       \
                                         -- Valve
                                         |      \
                                         |       -- ReplicationValve
                                         |       -- JvmRouteBinderValve 
                                         |
                                         -- LifecycleListener 
                                         |
                                         -- ClusterListener 
                                         |      \
                                         |       -- ClusterSessionListener
                                         |       -- JvmRouteSessionIDBinderListener
                                         |
                                         -- Deployer 
                                                \
                                                 -- FarmWarDeployer
      
      

How it Works

To make it easy to understand how clustering works, we are going to take you through a series of scenarios. In the scenarios we only plan to use two Tomcat instances, TomcatA and TomcatB. We will cover the following sequence of events:

  1. TomcatA starts up
  2. TomcatB starts up (waiting until TomcatA's startup is complete)
  3. TomcatA receives a request, a session S1 is created.
  4. TomcatA crashes
  5. TomcatB receives a request for session S1
  6. TomcatA starts up
  7. TomcatA receives a request, invalidate is called on the session (S1)
  8. TomcatB receives a request, for a new session (S2)
  9. TomcatA The session S2 expires due to inactivity.

Ok, now that we have a good sequence, we will take you through exactly what happens in the session replication code:

  1. TomcatA starts up

    Tomcat starts up using the standard startup sequence. When the Host object is created, a cluster object is associated with it. When the contexts are parsed, if the distributable element is in place in web.xml, Tomcat asks the Cluster class (in this case SimpleTcpCluster) to create a manager for the replicated context. So with clustering enabled and distributable set in web.xml, Tomcat will create a DeltaManager for that context instead of a StandardManager. The cluster class will start up a membership service (multicast) and a replication service (TCP unicast). More on the architecture further down in this document.

  2. TomcatB starts up

    When TomcatB starts up, it follows the same sequence as TomcatA did with one exception. The cluster is started and will establish a membership (TomcatA,TomcatB). TomcatB will now request the session state from a server that already exists in the cluster, in this case TomcatA. TomcatA responds to the request, and before TomcatB starts listening for HTTP requests, the state has been transferred from TomcatA to TomcatB. In case TomcatA doesn't respond, TomcatB will time out after 60 seconds, and issue a log entry. The session state gets transferred for each web application that has distributable in its web.xml. Note: To use session replication efficiently, all your tomcat instances should be configured the same.

  3. TomcatA receives a request, a session S1 is created.

    The request coming in to TomcatA is treated exactly the same way as without session replication. The action happens when the request is completed: the ReplicationValve will intercept the request before the response is returned to the user. At this point it finds that the session has been modified, and it uses TCP to replicate the session to TomcatB. Once the serialized data has been handed off to the operating system's TCP logic, the request returns to the user, back through the valve pipeline. For each request the entire session is replicated; this allows code that modifies attributes in the session without calling setAttribute or removeAttribute to be replicated. A useDirtyFlag configuration parameter can be used to optimize the number of times a session is replicated.

  4. TomcatA crashes

    When TomcatA crashes, TomcatB receives a notification that TomcatA has dropped out of the cluster. TomcatB removes TomcatA from its membership list, and TomcatA will no longer be notified of any changes that occur in TomcatB. The load balancer will redirect the requests from TomcatA to TomcatB, and all the sessions are current.

  5. TomcatB receives a request for session S1

    Nothing exciting, TomcatB will process the request as any other request.

  6. TomcatA starts up

    Upon startup, before TomcatA starts taking new requests and making itself available, it will follow the startup sequence described above in 1) and 2). It will join the cluster and contact TomcatB for the current state of all the sessions. Once it receives the session state, it finishes loading and opens its HTTP/mod_jk ports. So no requests will make it to TomcatA until it has received the session state from TomcatB.

  7. TomcatA receives a request, invalidate is called on the session (S1)

    The invalidate call is intercepted, and the session is queued with the invalidated sessions. When the request is complete, instead of sending out the session that has changed, it sends out an "expire" message to TomcatB, and TomcatB will invalidate the session as well.

  8. TomcatB receives a request, for a new session (S2)

    Same scenario as in step 3)

  9. TomcatA The session S2 expires due to inactivity.

    The invalidate call is intercepted the same way as when a session is invalidated by the user, and the session is queued with the invalidated sessions. At this point, the invalidated session will not be replicated across until another request comes through the system and checks the invalid queue.

Phuuuhh! :)

Membership

Clustering membership is established using very simple multicast pings. Each Tomcat instance will periodically send out a multicast ping; in the ping message the instance will broadcast its IP and TCP listen port for replication. If an instance has not received such a ping within a given timeframe, the member is considered dead. Very simple, and very effective! Of course, you need to enable multicasting on your system.

TCP Replication

Once a multicast ping has been received, the member is added to the cluster. Upon the next replication request, the sending instance will use the host and port info to establish a TCP socket. Using this socket it sends over the serialized data. The reason I chose TCP sockets is that TCP has built-in flow control and guaranteed delivery. So I know that when I send some data, it will make it there :)

Distributed locking and pages using frames

Tomcat does not keep session instances in sync across the cluster. The implementation of such logic would be too much overhead and cause all kinds of problems. If your client accesses the same session simultaneously using multiple requests, then the last request will override the other sessions in the cluster.

FAQ

Please see the clustering section of the FAQ.



source - http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html












The ordinal error

An ordinal is a number that the compiler assigns, in order, to each exported function when a DLL is compiled. A DLL developer can also assign the ordinals explicitly using a *.def file.

For example, in wsock32.dll the ordinal number of the recv() function is 16, and that of send() is 19. When calling a function from a DLL, you can also refer to it by a name such as "recv" or "send".

In the question, did the message read something like:

||||| ordinal [number] could not be located in the DLL [name].dll |||||

? Not many DLLs have ordinals that go up that high... apart from the MFC-related DLLs.

In any case, this error means that the DLL in question has no function at ordinal [number]. In practice it often occurs when a program is installed incorrectly and an old version of [name].DLL ends up in the Windows System directory. For example, the Lib file used at compile time may have been for version 6.0.1234, while the [name].DLL actually installed is 4.0.6787 or some other version that does not match; in that case, a function present in the expected version (6.0.1234) may simply not exist. Programs like InstallShield exist to prevent exactly this, but they sometimes seem to get ignored.

The solution: first check the version of that DLL in the Windows System directory, then overwrite it with the higher-version DLL. Especially with a DLL like MFC42 (assuming the error message you saw names this kind of DLL), if you search your hard disk you will likely find several different copies of MFC42.dll. If it still doesn't work, reinstalling VC will do it. ^^ It is best to keep your computer's important files safely backed up.




source - http://blog.naver.com/PostView.nhn?blogId=ins_soul80&logNo=20030252757



'Development > Windows Programming' 카테고리의 다른 글

windows prog - msi  (0) 2013.10.10