linux - sudo

System/Linux 2014. 2. 23. 22:54


sudo is a program for Unix and Unix-like operating systems that allows users to run programs with the security privileges of another user. The name is short for "substitute user do", i.e. execute a command with another user's privileges. By default, sudo asks for the user's own password, although it can be configured to require the root password instead; once the password has been entered in a terminal, it is not asked for again for subsequent commands.[2] sudo can be used on a per-command basis, and in some setups it completely replaces the superuser login for administrative tasks, most notably on Ubuntu Linux and Apple's Mac OS X.[3][4]

The program was first written by Bob Coggeshall and Cliff Spencer, who wrote it "around 1980" in the Department of Computer Science at the University at Buffalo in New York. The current version is under active development, maintained by OpenBSD developer Todd C. Miller and distributed under a BSD license.[5]

In 2009 it was awkwardly revealed that Microsoft had patented the sudo command, which caused a considerable stir,[6] but the claims were drawn narrowly around a particular GUI rather than the concept of sudo itself.[7]

Examples

Before a command is run with sudo, the user enters their password. Once authenticated, and if the /etc/sudoers configuration file permits that user, the command is executed. There are also several frontends that can be used in GUI environments, such as kdesu, kdesudo, gksu and gksudo.[8][9] The following is an example of access being denied:

 snorri@rimu:~$ sudo emacs /etc/resolv.conf 
 We assume you have received the usual lecture from the local System
 Administrator. It usually boils down to these three things:
 
 #1) Respect the privacy of others.
 #2) Think before you type.
 #3) With great power comes great responsibility.
 
 Password:
 snorri is not in the sudoers file. This incident will be reported.

The log below shows the failed attempt, followed by successful attempts after snorri was added to /etc/sudoers:

 snorri@rimu:~$ sudo tail /var/log/auth.log
 Aug 5 06:00:28 localhost sudo: snorri : user NOT in sudoers ; TTY=pts/1 ; PWD=/home/snorri ; USER=root ; COMMAND=/usr/bin/emacs /etc/resolv.conf
 Aug 5 06:01:15 localhost su[15573](pam_unix) session opened for user root by snorri(uid=1000)
 Aug 5 06:02:09 localhost sudo: snorri : TTY=pts/1 ; PWD=/home/snorri ; USER=root ; COMMAND=/usr/bin/emacs /etc/resolv.conf
 Aug 5 06:02:49 localhost sudo: snorri : TTY=pts/1 ; PWD=/home/snorri ; USER=root ; COMMAND=/usr/bin/tail /var/log/auth.log

runas, su, and sudo

Windows has a command called runas. Its functionality is similar, but neither runas nor UAC (User Account Control) is sudo: they impersonate another user rather than adding privileges.

runas and su:

  • do not allow an authorized user to run a process with elevated privileges using their own credentials.
  • do not preserve the user's profile and ownership of objects.

The runas command is more nearly the equivalent of Unix su than of sudo. What makes sudo superior to su is that it bases access to elevated privileges on the user's own identity and, most importantly, that it requires no password sharing. Using runas or su to access a privileged account requires distributing the password of an administrator-capable account, a security weakness that sudo does not have.

Notes

  1. Sudo License
  2. Manpage for sudo. Retrieved November 4, 2007.
  3. RootSudo - Community Ubuntu Documentation
  4. MacDevCenter.com - Top Ten Mac OS X Tips for Unix Geeks
  5. Miller, Todd C. A Brief History of Sudo. Retrieved March 5, 2007.
  6. Lilly, Paul. Microsoft has Patented "sudo." Yes, the Command. Retrieved November 13, 2009.
  7. http://blog.seattlepi.com/microsoft/2009/11/12/did-microsoft-just-sneakily-patent-an-open-source-tool/
  8. Mac OS X also has Authorization Services.
  9. Introduction to Authorization Services Programming Guide

Further reading

  • visudo - a vi-based program used to edit the /etc/sudoers file.




Source - http://ko.wikipedia.org/wiki/Sudo








Using sudo without a password on Linux



Method

username ALL=NOPASSWD: ALL
→ If NOPASSWD: is removed here, the user must enter their own password when running sudo
username ALL=NOPASSWD: command1, command2
→ Only the specified commands can be run with sudo

Exercise 1: all commands allowed

  • Create a new account, testuser1
[root@localhost ~]# useradd testuser1
[root@localhost ~]# echo 'P@ssw0rd1' | passwd --stdin testuser1
Changing password for user testuser1.
passwd: all authentication tokens updated successfully.
[root@localhost ~]# cat /etc/passwd | grep testuser1
testuser1:x:500:500::/home/testuser1:/bin/bash
  • Add sudo privileges for testuser1 (+NOPASSWD)
[root@localhost ~]# echo 'testuser1 ALL=NOPASSWD: ALL' >> /etc/sudoers
[root@localhost ~]# cat /etc/sudoers | tail -2
#includedir /etc/sudoers.d
testuser1 ALL=NOPASSWD: ALL
  • Run sudo as the testuser1 account
[root@localhost ~]# su - testuser1
[testuser1@localhost ~]$ reboot
reboot: Need to be root
[testuser1@localhost ~]$ sudo reboot
The system is going down for reboot NOW!
→ sudo reboot works without entering a password

Exercise 2: only specified commands allowed

[root@localhost ~]# visudo
Before
... (omitted)
#includedir /etc/sudoers.d
testuser1 ALL=NOPASSWD: ALL
After
... (omitted)
#includedir /etc/sudoers.d
testuser1 ALL=NOPASSWD: /usr/sbin/useradd, /usr/sbin/userdel
[root@localhost ~]# cat /etc/sudoers | tail -2
#includedir /etc/sudoers.d
testuser1 ALL=NOPASSWD: /usr/sbin/useradd, /usr/sbin/userdel
  • Test running sudo as the testuser1 account
[root@localhost ~]# su - testuser1
[testuser1@localhost ~]$ sudo reboot
[sudo] password for testuser1: 
Sorry, user testuser1 is not allowed to execute '/sbin/reboot' as root on localhost.localdomain.
→ A password is requested for sudo reboot.
→ Even entering the user's own password (P@ssw0rd1) correctly does not help; execution is denied for lack of permission
[testuser1@localhost ~]$ sudo useradd mallory
[testuser1@localhost ~]$ cat /etc/passwd | grep mallory
mallory:x:501:501::/home/mallory:/bin/bash
→ sudo useradd runs without asking for a password




Source - http://zetawiki.com/wiki/%EB%A6%AC%EB%88%85%EC%8A%A4_sudo_%ED%8C%A8%EC%8A%A4%EC%9B%8C%EB%93%9C_%EC%97%86%EC%9D%B4_%EC%82%AC%EC%9A%A9




linux - /var

System/Linux 2014. 2. 23. 18:04


1.18. /var

Contains variable data like system logging files, mail and printer spool directories, and transient and temporary files. Some portions of /var are not shareable between different systems, for instance /var/log, /var/lock, and /var/run. Other portions may be shared, notably /var/mail, /var/cache/man, /var/cache/fonts, and /var/spool/news.

Why not put it into /usr? Because there might be circumstances when you may want to mount /usr as read-only, e.g. if it is on a CD or on another computer. '/var' contains variable data, i.e. files and directories the system must be able to write to during operation, whereas /usr should only contain static data.

Some directories can be put onto separate partitions or systems, e.g. for easier backups, due to network topology or security concerns. Other directories have to be on the root partition, because they are vital for the boot process. 'Mountable' directories are: '/home', '/mnt', '/tmp', '/usr' and '/var'. Essential for booting are: '/bin', '/boot', '/dev', '/etc', '/lib', '/proc' and '/sbin'.

If /var cannot be made a separate partition, it is often preferable to move /var out of the root partition and into the /usr partition. (This is sometimes done to reduce the size of the root partition or when space runs low in the root partition.) However, /var must not be linked to /usr because this makes separation of /usr and /var more difficult and is likely to create a naming conflict. Instead, link /var to /usr/var.

Applications must generally not add directories to the top level of /var. Such directories should only be added if they have some system-wide implication, and in consultation with the FHS mailing list.

/var/backups

Directory containing backups of various key system files such as /etc/shadow, /etc/group, /etc/inetd.conf and dpkg.status. They are normally renamed to something like dpkg.status.0, group.bak, gshadow.bak, inetd.conf.bak, passwd.bak, shadow.bak

/var/cache

Is intended for cached data from applications. Such data is locally generated as a result of time-consuming I/O or calculation. This data can generally be regenerated or be restored. Unlike /var/spool, files here can be deleted without data loss. This data remains valid between invocations of the application and rebooting of the system. The existence of a separate directory for cached data allows system administrators to set different disk and backup policies from other directories in /var.

/var/cache/fonts

Locally-generated fonts. In particular, all of the fonts which are automatically generated by mktexpk must be located in appropriately-named subdirectories of /var/cache/fonts.

/var/cache/man

A cache for man pages that are formatted on demand. The source for manual pages is usually stored in /usr/share/man/; some manual pages might come with a pre-formatted version, which is stored in /usr/share/man/cat* (this is fairly rare now). Other manual pages need to be formatted when they are first viewed; the formatted version is then stored in /var/man so that the next person to view the same page won't have to wait for it to be formatted (/var/catman is often cleaned in the same way temporary directories are cleaned).

/var/cache/'package-name'

Package specific cache data.

/var/cache/www

WWW proxy or cache data.

/var/crash

This directory will eventually hold system crash dumps. Currently, system crash dumps are not supported under Linux. However, development is already complete and is awaiting unification into the Linux kernel.

/var/db

Data bank store.

/var/games

Any variable data relating to games in /usr is placed here. It holds variable data that was previously found in /usr. Static data, such as help text, level descriptions, and so on, remains elsewhere though, such as in /usr/share/games. The separation of /var/games and /var/lib as in release FSSTND 1.2 allows local control of backup strategies, permissions, and disk usage, as well as allowing inter-host sharing and reducing clutter in /var/lib. Additionally, /var/games is the path traditionally used by BSD.

/var/lib

Holds dynamic data libraries/files like the rpm/dpkg database and game scores. Furthermore, this hierarchy holds state information pertaining to an application or the system. State information is data that programs modify while they run, and that pertains to one specific host. Users shouldn't ever need to modify files in /var/lib to configure a package's operation. State information is generally used to preserve the condition of an application (or a group of inter-related applications) between invocations and between different instances of the same application. An application (or a group of inter-related applications) use a subdirectory of /var/lib for their data. There is one subdirectory, /var/lib/misc, which is intended for state files that don't need a subdirectory; the other subdirectories should only be present if the application in question is included in the distribution. /var/lib/'name' is the location that must be used for all distribution packaging support. Different distributions may use different names, of course.

/var/local

Variable data for local programs (i.e., programs that have been installed by the system administrator) that are installed in /usr/local (as opposed to a remotely mounted '/var' partition). Note that even locally installed programs should use the other /var directories if they are appropriate, e.g., /var/lock.

/var/lock

Many programs follow a convention to create a lock file in /var/lock to indicate that they are using a particular device or file. This directory holds those lock files (for some devices) and hopefully other programs will notice the lock file and won't attempt to use the device or file.

Lock files should be stored within the /var/lock directory structure. Lock files for devices and other resources shared by multiple applications, such as the serial device lock files that were originally found in either /usr/spool/locks or /usr/spool/uucp, must now be stored in /var/lock. The naming convention which must be used is LCK.. followed by the base name of the device file. For example, to lock /dev/ttyS0 the file LCK..ttyS0 would be created. The format used for the contents of such lock files must be the HDB UUCP lock file format. The HDB format is to store the process identifier (PID) as a ten byte ASCII decimal number, with a trailing newline. For example, if process 1230 holds a lock file, it would contain the eleven characters: space, space, space, space, space, space, one, two, three, zero, and newline.
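As a concrete sketch of the HDB format just described, the following shell commands reuse the device name and PID from the example above to create and verify such a lock file:

# Sketch: write an HDB-style lock file for /dev/ttyS0 holding PID 1230
# printf pads the PID to a ten-character ASCII field; the trailing newline makes eleven bytes in total
printf '%10d\n' 1230 > /var/lock/LCK..ttyS0

# Check the size; it should report 11 bytes
wc -c /var/lock/LCK..ttyS0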

/var/log

Log files from the system and various programs/services, especially login (/var/log/wtmp, which logs all logins and logouts into the system) and syslog (/var/log/messages, where all kernel and system program messages are usually stored). Files in /var/log can often grow indefinitely and may require cleaning at regular intervals; this is now normally managed via log rotation utilities such as 'logrotate'. This utility also allows for the automatic rotation, compression, removal and mailing of log files. Logrotate can be set to handle a log file daily, weekly, monthly or when the log file reaches a certain size. Normally, logrotate runs as a daily cron job. This is a good place to start troubleshooting general technical problems.
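As a rough sketch only (the log name, schedule and retention count are made up here, not defaults of any distribution), a logrotate policy for a file under /var/log could be dropped into /etc/logrotate.d like this:

# Hypothetical example: rotate weekly, keep four compressed copies
cat > /etc/logrotate.d/example <<'EOF'
/var/log/example.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF

# Dry-run the new configuration to confirm it parses
logrotate -d /etc/logrotate.d/example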

/var/log/auth.log

Record of all logins and logouts by normal users and system processes.

/var/log/btmp

Log of all attempted bad logins to the system. Accessed via the lastb command.

/var/log/debug

Debugging output from various packages.

/var/log/dmesg

Kernel ring buffer. The content of this file is referred to by the dmesg command.

/var/log/gdm/

GDM log files. Normally a subset of the last X log file. See /var/log/xdm.log for more details.

/var/log/kdm.log

KDM log file. Normally a subset of the last X log file. See /var/log/xdm.log for more details.

/var/log/messages

System logs.

/var/log/pacct

Process accounting is the bookkeeping of process activity. The raw data of process activity is maintained here. Three commands can be used to access the contents of this file: dump-acct, sa (summary of process accounting) and lastcomm (list the commands executed on the system).

/var/log/utmp

Active user sessions. This is a data file and as such it can not be viewed normally. A human-readable form can be created via the dump-utmp command or through the w, who or users commands.

/var/log/wtmp

Log of all users who have logged into and out of the system. The last command can be used to access a human readable form of this file. It also lists every connection and run-level change.

/var/log/xdm.log

XDM log file. Normally a subset of the last X startup log, and pretty much useless in light of the details the X logs are able to provide. Only consult this file if you have XDM-specific issues; otherwise just use the X log file.

/var/log/XFree86.0.log, /var/log/XFree86.?.log

X startup logfile. An excellent resource for uncovering problems with X configuration. Log files are numbered according to when they were last used. For example, the last log file would be stored in /var/log/XFree86.0.log, the next in /var/log/XFree86.1.log, and so on and so forth.

/var/log/syslog

The 'system' log file. The contents of this file are managed via the syslogd daemon, which more often than not takes care of all log manipulation on most systems.

/var/mail

Contains user mailbox files. The mail files take the form /var/mail/'username' (Note that /var/mail may be a symbolic link to another directory). User mailbox files in this location are stored in the standard UNIX mailbox format. The reason for the location of this directory was to bring the FHS inline with nearly every UNIX implementation (it was previously located in /var/spool/mail). This change is important for inter-operability since a single /var/mail is often shared between multiple hosts and multiple UNIX implementations (despite NFS locking issues).

/var/opt

Variable data of the program packages in /opt must be installed in /var/opt/'package-name', where 'package-name' is the name of the subtree in /opt where the static data from an add-on software package is stored, except where superseded by another file in /etc. No structure is imposed on the internal arrangement of /var/opt/'package-name'.

/var/run

Contains the process identification files (PIDs) of system services and other information about the system that is valid until the system is next booted. For example, /var/run/utmp contains information about users currently logged in.

/var/spool

Holds spool files, for instance for mail, news, and printing (lpd) and other queued work. Spool files store data to be processed after the job currently occupying a device is finished or the appropriate cron job is started. Each different spool has its own subdirectory below /var/spool, e.g., the cron tables are stored in /var/spool/cron/crontabs.

/var/tmp

Temporary files that are large or that need to exist for a longer time than what is allowed for /tmp. (Although the system administrator might not allow very old files in /var/tmp either.)

/var/named

Database for BIND. The Berkeley Internet Name Domain (BIND) implements an Internet domain name server. BIND is the most widely used name server software on the Internet, and is supported by the Internet Software Consortium, www.isc.org.

/var/yp

Database for NIS (Network Information Services). NIS is mostly used to let several machines in a network share the same account information (eg. /etc/passwd). NIS was formerly called Yellow Pages (YP).

The following directories, or symbolic links to directories, are required in /var for FSSTND compliance:

  /var/cache	Application cache data
  /var/lib	Variable state information
  /var/local	Variable data for /usr/local
  /var/lock	Lock files
  /var/log	Log files and directories
  /var/opt	Variable data for /opt
  /var/run	Data relevant to running processes
  /var/spool	Application spool data
  /var/tmp	Temporary files preserved between system reboots
  

Several directories are 'reserved' in the sense that they must not be used arbitrarily by some new application, since they would conflict with historical and/or local practice. They are:

  /var/backups
  /var/cron
  /var/msgs
  /var/preserve
  

The following directories, or symbolic links to directories, must be in /var, if the corresponding subsystem is installed:

  account   Process accounting logs (optional)
  crash     System crash dumps (optional)
  games     Variable game data (optional)
  mail      User mailbox files (optional)
  yp        Network Information Service (NIS) database files (optional)



Source - http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/var.html






mongodb - backup

DB/MongoDB 2014. 2. 23. 16:03


MongoDB Manual 2.4

MongoDB Backup Methods

When deploying MongoDB in production, you should have a strategy for capturing and restoring backups in the case of data loss events. MongoDB provides backup methods to support different requirements and configurations:

Backup Methods

Backups with the MongoDB Management Service (MMS)

The MongoDB Management Service supports backup and restore for MongoDB deployments.

MMS continually backs up MongoDB replica sets and sharded systems by reading the oplog data from your MongoDB cluster.

MMS Backup offers point in time recovery of MongoDB replica sets and a consistent snapshot of sharded systems.

MMS achieves point in time recovery by storing oplog data so that it can create a restore for any moment in time in the last 24 hours for a particular replica set.

For sharded systems, MMS does not provide restores for arbitrary moments in time. MMS does provide periodic consistent snapshots of the entire sharded cluster. Sharded cluster snapshots are difficult to achieve with other MongoDB backup methods.

To restore a MongoDB cluster from an MMS Backup snapshot, you download a compressed archive of your MongoDB data files and distribute those files before restarting the mongod processes.

To get started with MMS Backup, sign up for MMS; for the complete documentation of MMS, see the MMS Manual.

Backup by Copying Underlying Data Files

You can create a backup by copying MongoDB’s underlying data files.

If the volume where MongoDB stores data files supports point in time snapshots, you can use these snapshots to create backups of a MongoDB system at an exact moment in time.

File system snapshots are an operating system volume manager feature, and are not specific to MongoDB. The mechanics of snapshots depend on the underlying storage system. For example, Amazon's EBS storage system for EC2 supports snapshots. On Linux, the LVM manager can create a snapshot.
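As a hedged sketch of the LVM case (the volume group vg0, the logical volume mongodb, and the snapshot name and size are assumptions, not defaults), a point-in-time snapshot of the volume holding the data files could be created like this:

# Sketch: snapshot the logical volume that holds the MongoDB data files
lvcreate --size 100M --snapshot --name mdb-snap01 /dev/vg0/mongodb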

To get a correct snapshot of a running mongod process, you must have journaling enabled and the journal must reside on the same logical volume as the other MongoDB data files. Without journaling enabled, there is no guarantee that the snapshot will be consistent or valid.

To get a consistent snapshot of a sharded system, you must disable the balancer and capture a snapshot from every shard and a config server at approximately the same moment in time.

If your storage system does not support snapshots, you can copy the files directly using cp, rsync, or a similar tool. Since copying multiple files is not an atomic operation, you must stop all writes to the mongod before copying the files. Otherwise, you will copy the files in an invalid state.
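One way to stop writes on a running mongod before copying is db.fsyncLock(), which is mentioned later on this page; the following is only a sketch, and the data path /data/db and the backup destination are assumptions for illustration:

# Sketch: flush and block writes, copy the data files, then unlock
mongo --eval "printjson(db.fsyncLock())"
rsync -a /data/db/ /backup/mongodb-$(date +%F)/
mongo --eval "printjson(db.fsyncUnlock())"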

Backups produced by copying the underlying data do not support point in time recovery for replica sets and are difficult to manage for larger sharded clusters. Additionally, these backups are larger because they include the indexes and duplicate underlying storage padding and fragmentation. mongodump, by contrast, creates smaller backups.

For more information, see Backup and Restore with Filesystem Snapshots and Backup a Sharded Cluster with Filesystem Snapshots documents for complete instructions on using LVM to create snapshots. Also see Back up and Restore Processes for MongoDB on Amazon EC2.

Backup with mongodump

The mongodump tool reads data from a MongoDB database and creates high fidelity BSON files. The mongorestore tool can populate a MongoDB database with the data from these BSON files. These tools are simple and efficient for backing up small MongoDB deployments, but are not ideal for capturing backups of larger systems.

mongodump and mongorestore can operate against a running mongod process, and can manipulate the underlying data files directly. By default, mongodump does not capture the contents of the local database.

mongodump only captures the documents in the database. The resulting backup is space efficient, but mongorestore or mongod must rebuild the indexes after restoring data.

When connected to a MongoDB instance, mongodump can adversely affect mongod performance. If your data is larger than system memory, the queries will push the working set out of memory.

To mitigate the impact of mongodump on the performance of the replica set, use mongodump to capture backups from a secondary member of a replica set. Alternatively, you can shut down a secondary and use mongodump with the data files directly. If you shut down a secondary to capture data with mongodump, ensure that the operation can complete before its oplog becomes too stale to continue replicating.

For replica sets, mongodump also supports a point in time feature with the --oplog option. Applications may continue modifying data while mongodump captures the output. To restore a point in time backup created with --oplog, use mongorestore with the --oplogReplay option.

If applications modify data while mongodump is creating a backup, mongodump will compete for resources with those applications.

See Back Up and Restore with MongoDB Tools, Backup a Small Sharded Cluster with mongodump, and Backup a Sharded Cluster with Database Dumps for more information.

Further Reading

Backup and Restore with Filesystem Snapshots
An outline of procedures for creating MongoDB data set backups using system-level file snapshot tool, such as LVM or native storage appliance tools.
Restore a Replica Set from MongoDB Backups
Describes procedure for restoring a replica set from an archived backup such as a mongodump or MMS Backup file.
Back Up and Restore with MongoDB Tools
The procedure for writing the contents of a database to a BSON (i.e. binary) dump file for backing up MongoDB databases.
Backup and Restore Sharded Clusters
Detailed procedures and considerations for backing up sharded clusters and single shards.
Recover Data after an Unexpected Shutdown
Recover data from MongoDB data files that were not properly closed or have an invalid state.


Source - http://docs.mongodb.org/manual/core/backups/








Back Up and Restore with MongoDB Tools

This document describes the process for writing and restoring backups to files in binary format with the mongodump and mongorestore tools.

Use these tools for backups if other backup methods, such as the MMS Backup Service or file system snapshots, are unavailable.

Backup a Database with mongodump

Important

 

mongodump does not dump the content of the local database.

Basic mongodump Operations

The mongodump utility can back up data by either:

  • connecting to a running mongod or mongos instance, or
  • accessing data files without an active instance.

The utility can create a backup for an entire server, database or collection, or can use a query to backup just part of a collection.

When you run mongodump without any arguments, the command connects to the MongoDB instance on the local system (e.g. 127.0.0.1 or localhost) on port 27017 and creates a database backup named dump/ in the current directory.

To backup data from a mongod or mongos instance running on the same machine and on the default port of 27017, use the following command:

mongodump

Warning

 

The data format used by mongodump from version 2.2 or later is incompatible with earlier versions of mongod. Do not use recent versions of mongodump to back up older data stores.

To limit the amount of data included in the database dump, you can specify --db and --collection as options to the mongodump command. For example:

mongodump --dbpath /data/db/ --out /data/backup/
mongodump --host mongodb.example.net --port 27017

mongodump will write BSON files that hold a copy of data accessible via the mongod listening on port 27017 of the mongodb.example.net host.

mongodump --collection collection --db test

This command creates a dump of the collection named collection from the database test in a dump/ subdirectory of the current working directory.

Point in Time Operation Using Oplogs

Use the --oplog option with mongodump to collect the oplog entries to build a point-in-time snapshot of a database within a replica set. With --oplog, mongodump copies all the data from the source database as well as all of the oplog entries from the beginning of the backup procedure until the backup procedure completes. This backup procedure, in conjunction with mongorestore --oplogReplay, allows you to restore a backup that reflects the specific moment in time that corresponds to when mongodump completed creating the dump file.
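Put together, a point-in-time dump and restore of a replica set member might look like the following sketch (the output directory is arbitrary):

# Dump with the oplog captured, then replay it on restore
mongodump --oplog --out /data/backup/
mongorestore --oplogReplay /data/backup/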

Create Backups Without a Running mongod Instance

If your MongoDB instance is not running, you can use the --dbpath option to specify the location to your MongoDB instance’s database files. mongodump reads from the data files directly with this operation. This locks the data directory to prevent conflicting writes. The mongod process must not be running or attached to these data files when you run mongodump in this configuration. Consider the following example:

Example

 

Backup a MongoDB Instance Without a Running mongod

Given a MongoDB instance that contains the customers, products, and suppliers databases, the following mongodump operation backs up the databases using the --dbpath option, which specifies the location of the database files on the host:

mongodump --dbpath /data -o dataout

The --out option allows you to specify the directory where mongodump will save the backup. mongodump creates a separate backup directory for each of the backed up databases: dataout/customers, dataout/products, and dataout/suppliers.

Create Backups from Non-Local mongod Instances

The --host and --port options for mongodump allow you to connect to and backup from a remote host. Consider the following example:

mongodump --host mongodb1.example.net --port 3017 --username user --password pass --out /opt/backup/mongodump-2012-10-24

On any mongodump command you may, as above, specify username and password credentials for database authentication.

Restore a Database with mongorestore

The mongorestore utility restores a binary backup created by mongodump. By default, mongorestore looks for a database backup in the dump/ directory.

The mongorestore utility can restore data either by:

  • connecting to a running mongod or mongos directly, or
  • writing to a set of MongoDB data files without use of a running mongod.

mongorestore can restore either an entire database backup or a subset of the backup.

To use mongorestore to connect to an active mongod or mongos, use a command with the following prototype form:

mongorestore --port <port number> <path to the backup>

To use mongorestore to write to data files without using a running mongod, use a command with the following prototype form:

mongorestore --dbpath <database path> <path to the backup>

Consider the following example:

mongorestore dump-2012-10-25/

Here, mongorestore imports the database backup in the dump-2012-10-25 directory to the mongod instance running on the localhost interface.

Restore Point in Time Oplog Backup

If you created your database dump using the --oplog option to ensure a point-in-time snapshot, call mongorestore with the --oplogReplay option, as in the following example:

mongorestore --oplogReplay

You may also consider using the mongorestore --objcheck option to check the integrity of objects while inserting them into the database, or you may consider the mongorestore --drop option to drop each collection from the database before restoring from backups.

Restore a Subset of data from a Binary Database Dump

mongorestore also includes the ability to apply a filter to all input before inserting it into the new database. Consider the following example:

mongorestore --filter '{"field": 1}'

Here, mongorestore only adds documents to the database from the dump located in the dump/ folder if the documents have a field named field that holds a value of 1. Enclose the filter in single quotes (e.g. ') to prevent the filter from interacting with your shell environment.

Restore Without a Running mongod

mongorestore can write data to MongoDB data files without needing to connect to a mongod directly.

Example

 

Restore a Database Without a Running mongod

Given a set of backed up databases in the /data/backup/ directory:

  • /data/backup/customers,
  • /data/backup/products, and
  • /data/backup/suppliers

The following mongorestore command restores the products database. The command uses the --dbpath option to specify the path to the MongoDB data files:

mongorestore --dbpath /data/db --journal /data/backup/products

The mongorestore imports the database backup in the /data/backup/products directory to the mongod instance that runs on the localhost interface. The mongorestore operation imports the backup even if the mongod is not running.

The --journal option ensures that mongorestore records all operations in the durability journal. The journal prevents data file corruption if anything (e.g. power failure, disk failure, etc.) interrupts the restore operation.

See also

 

mongodump and mongorestore.

Restore Backups to Non-Local mongod Instances

By default, mongorestore connects to a MongoDB instance running on the localhost interface (e.g. 127.0.0.1) and on the default port (27017). If you want to restore to a different host or port, use the --host and --port options.

Consider the following example:

mongorestore --host mongodb1.example.net --port 3017 --username user --password pass /opt/backup/mongodump-2012-10-24

As above, you may specify username and password credentials if your mongod requires authentication.



Source - http://docs.mongodb.org/manual/tutorial/back-up-and-restore-with-mongodb-tools/



mongodump

Synopsis

mongodump is a utility for creating a binary export of the contents of a database. Consider using this utility as part of an effective backup strategy. Use mongodump in conjunction with mongorestore to restore databases.

mongodump can read data from either mongod or mongos instances, in addition to reading directly from MongoDB data files without an active mongod.

Important

 

mongodump does not dump the content of the local database.

Warning

 

The data format used by mongodump from version 2.2 or later is incompatible with earlier versions of mongod. Do not use recent versions of mongodump to back up older data stores.

Options

mongodump
--help

Returns a basic help and usage text.

--verbose, -v

Increases the amount of internal reporting returned on the command line. Increase the verbosity with the -v form by including the option multiple times (e.g. -vvvvv).

--version

Returns the version of the mongodump utility and exits.

--host <hostname><:port>

Specifies a resolvable hostname for the mongod that you wish to use to create the database dump. By default mongodump will attempt to connect to a MongoDB process running on the localhost port number 27017.

Optionally, specify a port number to connect a MongoDB instance running on a port other than 27017.

To connect to a replica set, use the --host argument with a setname, followed by a slash and a comma-separated list of host names and port numbers. The mongodump utility will, given the seed of at least one connected set member, connect to the primary member of that set. This option would resemble:

mongodump --host repl0/mongo0.example.net,mongo0.example.net:27018,mongo1.example.net,mongo2.example.net

You can always connect directly to a single MongoDB instance by specifying the host and port number directly.

--port <port>

Specifies the port number, if the MongoDB instance is not running on the standard port (i.e. 27017). You may also specify a port number using the --host option.

--ipv6

Enables IPv6 support that allows mongodump to connect to the MongoDB instance using an IPv6 network. All MongoDB programs and processes, including mongodump, disable IPv6 support by default.

--ssl

New in version 2.4: MongoDB added support for SSL connections to mongod instances in mongodump.

Note

 

SSL support in mongodump is not compiled into the default distribution of MongoDB. See Connect to MongoDB with SSL for more information on SSL and MongoDB.

Additionally, mongodump does not support connections to mongod instances that require client certificate validation.

Allows mongodump to connect to a mongod instance over an SSL connection.

--username <username>, -u <username>

Specifies a username to authenticate to the MongoDB instance, if your database requires authentication. Use in conjunction with the --password option to supply a password.

--password <password>, -p <password>

Specifies a password to authenticate to the MongoDB instance. Use in conjunction with the --username option to supply a username.

If you specify a --username and do not pass an argument to --password, mongodump will prompt for a password interactively. If you do not specify a password on the command line, --password must be the last argument specified.

--authenticationDatabase <dbname>

New in version 2.4.

Specifies the database that holds the user’s (e.g --username) credentials.

By default, mongodump assumes that the database specified to the --db argument holds the user’s credentials, unless you specify --authenticationDatabase.

See userSource, system.users Privilege Documents and User Privilege Roles in MongoDB for more information about delegated authentication in MongoDB.

--authenticationMechanism <name>

New in version 2.4.

Specifies the authentication mechanism. By default, the authentication mechanism is MONGODB-CR, which is the MongoDB challenge/response authentication mechanism. In MongoDB Enterprise, mongodump also includes support for GSSAPI to handle Kerberos authentication.

See Deploy MongoDB with Kerberos Authentication for more information about Kerberos authentication.

--dbpath <path>

Specifies the directory of the MongoDB data files. If used, the --dbpath option enables mongodump to attach directly to local data files and copy the data without the mongod. To run with --dbpath, mongodump needs to restrict access to the data directory: as a result, no mongod can access the same path while the process runs.

--directoryperdb

Use the --directoryperdb in conjunction with the corresponding option to mongod. This option allows mongodump to read data files organized with each database located in a distinct directory. This option is only relevant when specifying the --dbpath option.

--journal

Allows mongodump operations to use the durability journal to ensure that the export is in a valid state. This option is only relevant when specifying the --dbpath option.

--db <db>, -d <db>

Use the --db option to specify a database for mongodump to backup. If you do not specify a DB, mongodump copies all databases in this instance into the dump files. Use this option to backup or copy a smaller subset of your data.

--collection <collection>, -c <collection>

Use the --collection option to specify a collection for mongodump to backup. If you do not specify a collection, this option copies all collections in the specified database or instance to the dump files. Use this option to backup or copy a smaller subset of your data.

--out <path>, -o <path>

Specifies a directory where mongodump saves the output of the database dump. By default, mongodump saves output files in a directory named dump in the current working directory.

To send the database dump to standard output, specify "-" instead of a path. Write to standard output if you want to process the output before saving it, such as to use gzip to compress the dump. When writing to standard output, mongodump does not write the metadata that it otherwise writes to a <dbname>.metadata.json file when writing to files directly.
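For example, the following sketch (database and collection names are hypothetical) streams a single collection to standard output and compresses it with gzip; as noted above, no metadata file is written in this mode:

# Sketch: dump one collection to standard output and compress it
mongodump --db test --collection records --out - | gzip > records.bson.gz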

--query <json>, -q <json>

Provides a query to limit (optionally) the documents included in the output of mongodump.

--oplog

Use this option to ensure that mongodump creates a dump of the database that includes a partial oplog containing operations from the duration of the mongodump operation. This oplog produces an effective point-in-time snapshot of the state of a mongod instance. To restore to a specific point-in-time backup, use the output created with this option in conjunction with mongorestore --oplogReplay.

Without --oplog, if there are write operations during the dump operation, the dump will not reflect a single moment in time. Changes made to the database during the update process can affect the output of the backup.

--oplog has no effect when running mongodump against a mongos instance to dump the entire contents of a sharded cluster. However, you can use --oplog to dump individual shards.

Note

 

--oplog only works against nodes that maintain an oplog. This includes all members of a replica set, as well as master nodes in master/slave replication deployments.

--oplog does not dump the oplog collection.

--repair

Use this option to run a repair option in addition to dumping the database. The repair option attempts to repair a database that may be in an invalid state as a result of an improper shutdown or mongod crash.

Note

 

The --repair option uses aggressive data-recovery algorithms that may produce a large amount of duplication.

--forceTableScan

Forces mongodump to scan the data store directly: typically, mongodump saves entries as they appear in the index of the _id field. Use --forceTableScan to skip the index and scan the data directly. Typically there are two cases where this behavior is preferable to the default:

  1. If you have key sizes over 800 bytes that would not be present in the _id index.
  2. Your database uses a custom _id field.

When you run with --forceTableScan, mongodump does not use $snapshot. As a result, the dump produced by mongodump can reflect the state of the database at many different points in time.

Important

 

Use --forceTableScan with extreme caution and consideration.

Behavior

When running mongodump against a mongos instance where the sharded cluster consists of replica sets, the read preference of the operation will prefer reads from secondary members of the set.

Warning

Changed in version 2.2: When used in combination with fsync or db.fsyncLock(), mongod may block some reads, including those from mongodump, when a queued write operation waits behind the fsync lock.

Required User Privileges

Note

 

User privileges changed in MongoDB 2.4.

The user must have appropriate privileges to read data from database holding collections in order to use mongodump. Consider the following required privileges for the following mongodump operations:

  Task                                                       Required Privileges
  All collections in a database except system.users.         read [1]
  All collections in a database, including system.users.     read [1] and userAdmin
  All databases. [3]                                          readAnyDatabase, userAdminAnyDatabase, and clusterAdmin [2]

See User Privilege Roles in MongoDB and system.users Privilege Documents for more information on user roles.

[1] You may provision readWrite instead of read.
[2] clusterAdmin provides the ability to run the listDatabases command, to list all existing databases.
[3] If any database runs with profiling enabled, mongodump may need the dbAdminAnyDatabase privilege to dump the system.profile collection.

Usage

See the Back Up and Restore with MongoDB Tools for a larger overview of mongodump usage. Also see the mongorestore document for an overview of mongorestore, which provides the related inverse functionality.

The following command creates a dump file that contains only the collection named collection in the database named test. In this case the database is running on the local interface on port 27017:

mongodump --collection collection --db test

In the next example, mongodump creates a backup of the database instance stored in the /srv/mongodb directory on the local machine. This requires that no mongod instance is using the /srv/mongodb directory.

mongodump --dbpath /srv/mongodb

In the final example, mongodump creates a database dump located at /opt/backup/mongodump-2011-10-24, from a database running on port 37017 on the host mongodb1.example.net and authenticating using the username user and the password pass, as follows:

mongodump --host mongodb1.example.net --port 37017 --username user --password pass --out /opt/backup/mongodump-2011-10-24


mongorestore

Synopsis

The mongorestore program writes data from a binary database dump created by mongodump to a MongoDB instance. mongorestore can create a new database or add data to an existing database.

mongorestore can write data to either mongod or mongos instances, in addition to writing directly to MongoDB data files without an active mongod.

If you restore to an existing database, mongorestore will only insert into the existing database, and does not perform updates of any kind. If existing documents have the same value in the _id field in the target database and collection, mongorestore will not overwrite those documents.

Remember the following properties of mongorestore behavior:

  • mongorestore recreates indexes recorded by mongodump.

  • all operations are inserts, not updates.

  • mongorestore does not wait for a response from a mongod to ensure that the MongoDB process has received or recorded the operation.

    The mongod will record any errors to its log that occur during a restore operation, but mongorestore will not receive errors.

Warning

 

The data format used by mongodump from version 2.2 or later is incompatible with earlier versions of mongod. Do not use recent versions of mongodump to back up older data stores.

Options

mongorestore
--help

Returns a basic help and usage text.

--verbose, -v

Increases the amount of internal reporting returned on the command line. Increase the verbosity with the -v form by including the option multiple times (e.g. -vvvvv).

--version

Returns the version of the mongorestore tool.

--host <hostname><:port>

Specifies a resolvable hostname for the mongod to which you want to restore the database. By default mongorestore will attempt to connect to a MongoDB process running on the localhost port number 27017. For an example of --host, see Restore a Database with mongorestore.

Optionally, specify a port number to connect a MongoDB instance running on a port other than 27017.

To connect to a replica set, you can specify the replica set seed name, and a seed list of set members, in the following format:

<replica_set_name>/<hostname1><:port>,<hostname2><:port>,...

--port <port>

Specifies the port number, if the MongoDB instance is not running on the standard port (i.e. 27017). You may also specify a port number using the --host command. For an example of --port, see Restore a Database with mongorestore.

--ipv6

Enables IPv6 support that allows mongorestore to connect to the MongoDB instance using an IPv6 network. All MongoDB programs and processes, including mongorestore, disable IPv6 support by default.

--ssl

New in version 2.4: MongoDB added support for SSL connections to mongod instances in mongorestore.

Note

 

SSL support in mongorestore is not compiled into the default distribution of MongoDB. See Connect to MongoDB with SSL for more information on SSL and MongoDB.

Additionally, mongorestore does not support connections to mongod instances that require client certificate validation.

Allows mongorestore to connect to a mongod instance over an SSL connection.

--username <username>, -u <username>

Specifies a username to authenticate to the MongoDB instance, if your database requires authentication. Use in conjunction with the --password option to supply a password. For an example of --username, see Restore a Database with mongorestore.

--password <password>, -p <password>

Specifies a password to authenticate to the MongoDB instance. Use in conjunction with the --username option to supply a username. For an example of --password, see Restore a Database with mongorestore.

If you specify a --username and do not pass an argument to --password, mongorestore will prompt for a password interactively. If you do not specify a password on the command line, --password must be the last argument specified.

--authenticationDatabase <dbname>

New in version 2.4.

Specifies the database that holds the user’s (e.g --username) credentials.

By default, mongorestore assumes that the database specified to the --db argument holds the user’s credentials, unless you specify --authenticationDatabase.

See userSource, system.users Privilege Documents and User Privilege Roles in MongoDB for more information about delegated authentication in MongoDB.

--authenticationMechanism <name>

New in version 2.4.

Specifies the authentication mechanism. By default, the authentication mechanism is MONGODB-CR, which is the MongoDB challenge/response authentication mechanism. In MongoDB Enterprise, mongorestore also includes support for GSSAPI to handle Kerberos authentication.

See Deploy MongoDB with Kerberos Authentication for more information about Kerberos authentication.

--dbpath <path>

Specifies the directory of the MongoDB data files. If used, the --dbpath option enables mongorestore to attach directly to local data files and insert the data without the mongod. To run with --dbpath, mongorestore needs to lock access to the data directory: as a result, no mongod can access the same path while the process runs. For an example of --dbpath, see Restore Without a Running mongod.

--directoryperdb

Use the --directoryperdb in conjunction with the corresponding option to mongod, which allows mongorestore to import data into MongoDB instances that have every database’s files saved in discrete directories on the disk. This option is only relevant when specifying the --dbpath option.

--journal

Allows mongorestore to write to the durability journal to ensure that the data files will remain valid during the write process. This option is only relevant when specifying the --dbpath option. For an example of --journal, see Restore Without a Running mongod.

--db <db>, -d <db>

Use the --db option to specify a database for mongorestore to restore data into. If the database doesn't exist, mongorestore will create the specified database. If you do not specify a <db>, mongorestore creates new databases that correspond to the databases where data originated and data may be overwritten. Use this option to restore data into a MongoDB instance that already has data.

--db does not control which BSON files mongorestore restores. You must use the mongorestore path option to limit that restored data.

--collection <collection>, -c <collection>

Use the --collection option to specify a collection for mongorestore to restore. If you do not specify a <collection>, mongorestore imports all collections created. Existing data may be overwritten. Use this option to restore data into a MongoDB instance that already has data, or to restore only some data in the specified imported data set.

--objcheck

Forces the mongorestore to validate all requests from clients upon receipt to ensure that clients never insert invalid documents into the database. For objects with a high degree of sub-document nesting, --objcheck can have a small impact on performance. You can set --noobjcheck to disable object checking at run-time.

Changed in version 2.4: MongoDB enables --objcheck by default, to prevent any client from inserting malformed or invalid BSON into a MongoDB database.

--noobjcheck

New in version 2.4.

Disables the default document validation that MongoDB performs on all incoming BSON documents.

--filter '<JSON>'

Limits the documents that mongorestore imports to only those documents that match the JSON document specified as '<JSON>'. Be sure to include the document in single quotes to avoid interaction with your system's shell environment. For an example of --filter, see Restore a Subset of data from a Binary Database Dump.

--drop

Modifies the restoration procedure to drop every collection from the target database before restoring the collection from the dumped backup.

--oplogReplay

Replays the oplog after restoring the dump to ensure that the current state of the database reflects the point-in-time backup captured with the “mongodump --oplog” command. For an example of --oplogReplay, see Restore Point in Time Oplog Backup.

--keepIndexVersion

Prevents mongorestore from upgrading the index to the latest version during the restoration process.

--w <number of replicas per write>

New in version 2.2.

Specifies the write concern for each write operation that mongorestore writes to the target database. By default, mongorestore does not wait for a response for write acknowledgment.

--noOptionsRestore

New in version 2.2.

Prevents mongorestore from setting the collection options, such as those specified by the collMod database command, on restored collections.

--noIndexRestore

New in version 2.2.

Prevents mongorestore from restoring and building indexes as specified in the corresponding mongodump output.

--oplogLimit <timestamp>

New in version 2.2.

Prevents mongorestore from applying oplog entries newer than the <timestamp>. Specify <timestamp> values in the form of <time_t>:<ordinal>, where <time_t> is the seconds since the UNIX epoch, and <ordinal> represents a counter of operations in the oplog that occurred in the specified second.

You must use --oplogLimit in conjunction with the --oplogReplay option.
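As a sketch (the timestamp value is invented for illustration), limiting the replay to operations no newer than a given moment would look like:

# Sketch: replay the oplog, ignoring entries newer than the given <time_t>:<ordinal>
mongorestore --oplogReplay --oplogLimit 1352415839:1 dump/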

<path>

The final argument of the mongorestore command is a directory path. This argument specifies the location of the database dump from which to restore.

Usage

See Back Up and Restore with MongoDB Tools for a larger overview of mongorestore usage. Also see the mongodump document for an overview of mongodump, which provides the related inverse functionality.

Consider the following example:

mongorestore --collection people --db accounts dump/accounts/people.bson

Here, mongorestore reads the database dump in the dump/ sub-directory of the current directory, and restores only the documents in the collection named people from the database named accounts. mongorestore restores data to the instance running on the localhost interface on port 27017.

In the next example, mongorestore restores a backup of the database instance located in dump to a database instance stored in /srv/mongodb on the local machine. This requires that there are no active mongod instances attached to the /srv/mongodb data directory.

mongorestore --dbpath /srv/mongodb

In the final example, mongorestore restores a database dump located at /opt/backup/mongodump-2011-10-24, to a database running on port 37017 on the host mongodb1.example.net. The mongorestore command authenticates to the MongoDB instance using the username user and the password pass, as follows:

mongorestore --host mongodb1.example.net --port 37017 --username user --password pass /opt/backup/mongodump-2011-10-24


Source - http://docs.mongodb.org/manual/reference/program/mongorestore/







