HOWTO: Enable and control the gnome VNC vino-server from an SSH connection
NOTE: At long last here's the updated method. This was tested between two Ubuntu 10.10 Maverick hosts. Thanks to all the contributors to this thread, especially the posts by frafu and InkyDinky
user@localbox:~$ ssh -Y user@remotebox
user@remotebox:~$ vino-preferences             # check settings and hit the Close button
user@remotebox:~$ sudo -s
root@remotebox:~# export DISPLAY=:0.0
root@remotebox:~# xhost +
root@remotebox:~# /usr/lib/vino/vino-server &  # start the vino server
root@remotebox:~# netstat -nl | grep 5900      # check that the vino server is listening on port 5900
exit (or CTRL-D twice) to close the SSH session to remotebox
user@localbox:~$ ssh -L 5900:localhost:5900 user@remotebox   # establish a new SSH connection to remotebox with a forwarded VNC port
# launch Remote Desktop Viewer (vinagre) under Applications => Internet and connect to localhost
Last month I finally found some time to play around with a NoSQL database. Getting hands on experience with a NoSQL database has been on my list for quite some time, but due to busy times at work I was unable to find the energy to get things going.
A LITTLE BACKGROUND INFORMATION
Most of you have probably heard the term NoSQL before. The term is used in situations where you do not have a traditional relational database for storing information. There are many different sorts of NoSQL databases; to give a small summary, these are probably the most well-known:
The above types cover most of the differences, but for each type there are a lot of different implementations. For a better overview you might want to take a look at the NOSQL database website.
For my own experiment I chose to use MongoDB, since I had read a lot about it and it seemed quite easy to get started with.
MongoDB is as they describe it on their website:
A scalable, high-performance, open source, document-oriented database.
The document-oriented aspect was one of the reasons why I chose MongoDB to start with. It allows you to store rich content with data structures inside your datastore.
GETTING STARTED WITH MONGODB
To begin with, I looked at the Quick start page for Mac OS X and I recommend you do the same (unless you use a different OS). It will get you going, and within a couple of minutes you'll have MongoDB up and running on your local machine.
MongoDB stores its data in a default location. Of course you can configure that, so I started MongoDB with the --dbpath parameter, which allows you to specify your own storage location. It will look something like this:
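Something along these lines, with a placeholder data directory (substitute whatever folder you created for your own setup):

mongod --dbpath /path/to/your/mongodb-data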
If you do that you eventually will get a message saying:
Mon Jul 18 22:19:58 [initandlisten] waiting for connections on port 27017
Mon Jul 18 22:19:58 [websvr] web admin interface listening on port 28017
At this point MongoDB is running and we can proceed to the next step: using Spring Data to interact with MongoDB.
GETTING STARTED WITH SPRING DATA
The primary goal of the Spring Data project is to make it easier for developers to work with (No)SQL databases. The Spring Data project already has support for a number of the NoSQL databases mentioned above. Since we're using MongoDB, there is a specific sub-project that handles MongoDB interaction. To be able to use this in our project we first need to add a Maven dependency to our pom.xml.
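The dependency the text refers to looks roughly like the following snippet; the coordinates are the standard spring-data-mongodb artifact, with the 1.0.2.RELEASE version mentioned below:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-mongodb</artifactId>
    <version>1.0.2.RELEASE</version>
</dependency>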
Looks easy right? Just one single Maven dependency. Of course in the end the spring-data-mongodb artifact depends on other artifacts which it will bring into your project. In this post I used version 1.0.2.RELEASE. Now on to some Java code!
For my first experiment I used a simple Person domain object that I'm going to query and persist inside the database. The Person class is quite simple and looks as follows.
return"Person [id="+ personId + ", name="+ name + ", age="+ age + ", home town="+ homeTown + "]";
}
}
Now if you look at the class more closely you will see some Spring Data specific annotations like @Id and @Document. The @Document annotation identifies a domain object that is going to be persisted to MongoDB. Now that we have a persistable domain object we can move on to the real interaction.
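A minimal sketch of what such a Person class could look like, with the fields inferred from the toString() fragment above (personId, name, age, homeTown); this is illustrative rather than the article's exact listing:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class Person {

    @Id
    private String personId;   // maps to the MongoDB _id field

    private String name;
    private int age;
    private String homeTown;

    public Person() {
    }

    public Person(String name, int age, String homeTown) {
        this.name = name;
        this.age = age;
        this.homeTown = homeTown;
    }

    @Override
    public String toString() {
        return "Person [id=" + personId + ", name=" + name + ", age=" + age
                + ", home town=" + homeTown + "]";
    }
}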
For easy connectivity with MongoDB we can make use of Spring Data's MongoTemplate class. Here is a simple PersonRepository object that handles all 'Person' related interaction with MongoDB by means of the MongoTemplate.
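A sketch of what such a PersonRepository could look like, assuming the Person class above; the method names here are illustrative:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class PersonRepository {

    // the MongoTemplate is autowired from the Spring configuration
    @Autowired
    private MongoTemplate mongoTemplate;

    public void insertPerson(Person person) {
        mongoTemplate.insert(person);
    }

    public List<Person> findAll() {
        return mongoTemplate.findAll(Person.class);
    }
}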
If you look at the above code you will see the MongoTemplate in action. There is quite a long list of method calls which you can use for inserting, querying and so on. The MongoTemplate in this case is @Autowired from the Spring configuration, so let's have a look at the configuration.
The MongoTemplate is configured with a reference to a MongoDBFactoryBean (which handles the actual database connectivity) and is setup with a database name used for this example.
Now that we have all components in place, let's get something in and out of MongoDB.
All this application does for now is set up a connection with MongoDB, insert 20 persons (documents), fetch them all and write the information to the log. As a first experiment this was quite fun to do.
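A minimal sketch of such a bootstrap class, assuming the PersonRepository sketch above and a Spring XML configuration named applicationContext.xml (the file name is an assumption):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MongoDbDemo {

    public static void main(String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
        PersonRepository repository = ctx.getBean(PersonRepository.class);

        // insert 20 persons (documents)
        for (int i = 0; i < 20; i++) {
            repository.insertPerson(new Person("Person " + i, 20 + i, "Amsterdam"));
        }

        // fetch them all and write the information to the log (stdout here)
        for (Person person : repository.findAll()) {
            System.out.println(person);
        }
    }
}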
CONCLUSION
As you can see with Spring Data it's quite easy to get some basic functionality within only a couple of minutes. All the sources mentioned above and a working project can be found on GitHub. It was a fun first experiment and I already started working on a bit more advanced project, which combines Spring Data, MongoDB, HTML5 and CSS3. It will be on GitHub shortly together with another blog post here so be sure to come back.
I am creating the MongoTemplate in the application context and annotating it so that it is auto-injected into the ContactRepository. I am creating a ContactRepository to save Contact objects into a MongoDB collection.
Before inserting into the database, I check whether a collection exists for the entity; if not, I create it.
It is not necessary for each entity type to have its own collection; one collection can hold multiple entity types.
/*
 * To change this template, choose Tools | Templates and open the template in the editor.
 */
// Fragment of the repository: create the collection for Contact if it does not exist yet
public void createPersonCollection() {
    if (!mongoTemplate.collectionExists(Contact.class)) {
        mongoTemplate.createCollection(Contact.class);
    }
}
There are two ways I can insert: save or insert. With save we can pass the collection name as a parameter; insert uses the entity class name. Following is my POJO for Contact. @Document is the base annotation and @Id marks the id field.
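A minimal sketch of such a Contact POJO, with the fields inferred from the form used later in this post (name, email, phone, department, designation); treat it as illustrative rather than the original listing:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class Contact {

    @Id
    private String id;          // maps to the MongoDB _id field

    private String name;
    private String email;
    private String phone;
    private String department;
    private String designation;

    // getters and setters omitted for brevity
}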
<!-- Resolves view names to protected .jsp resources within the /WEB-INF/views directory -->
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix" value="/WEB-INF/"/>
<property name="suffix" value=".jsp"/>
</bean>
</beans>
I am using the resources mapping to keep my common CSS and JS files. All other settings are common for Spring MVC 3 setups, nothing fancy. Next is the controller code.
I am using /basic as the URL to display and update contact details.
When the front end posts to the /basic controller, the Contact POJO is already populated with the submitted values.
Calling the insertContact method then inserts the data into MongoDB.
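A sketch of what such a controller could look like, assuming a ContactRepository bean exposing the insertContact method mentioned above; the class, view and attribute names are assumptions:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class ContactController {

    @Autowired
    private ContactRepository contactRepository;

    // display the contact form backed by an empty Contact object
    @RequestMapping(value = "/basic", method = RequestMethod.GET)
    public String showForm(Model model) {
        model.addAttribute("contact", new Contact());
        return "contact"; // resolved to /WEB-INF/contact.jsp by the view resolver shown above
    }

    // the posted form is bound to the Contact POJO and written to MongoDB
    @RequestMapping(value = "/basic", method = RequestMethod.POST)
    public String saveContact(@ModelAttribute("contact") Contact contact) {
        contactRepository.insertContact(contact);
        return "contact";
    }
}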
<h1>It's for saving Employee details back to MongoDB</h1>
<table>
<tr>
<td> <div> Your name </div></td>
<td>
<form:input path="name"/>
</br></td>
</tr>
<tr>
<td> <div> Your email account </div></td>
<td>
<form:input path="email"/>
</br></td>
</tr>
<tr>
<td>
<div> Your mobile phone </div></td>
<td>
<form:input path="phone"/>
</td>
</tr>
<tr>
<td><div> Your Department </div></td>
<td> <form:input path="department"/>
</td>
</tr>
<tr>
<td><div> Designation </div></td>
<td> <form:input path="designation"/>
</td>
</tr>
</table>
<button>Click here to save</button>
</form:form>
</body>
</html>
I am using the jQuery library, but I keep it in the resources folder; that functionality is new in Spring MVC. Now it is time to deploy the application to Tomcat: build the application in NetBeans and deploy it to Tomcat 7. The front page looks as follows; click the green button to save the data.
Once the data is saved, I check the mongo shell to verify it. As we create the collection based on the entity type, the collection name should be contact.
In this tutorial, we show you how to use “SpringData for MongoDB” framework, to perform CRUD operations in MongoDB, via Spring’s annotation and XML schema.
Updated on 1/04/2013 Article is updated to use latest SpringData v 1.2.0.RELEASE, it was v1.0.0.M2.
Tools and technologies used :
Spring Data MongoDB – 1.2.0.RELEASE
Spring Core – 3.2.2.RELEASE
Java Mongo Driver – 2.11.0
Eclipse – 4.2
JDK – 1.6
Maven – 3.0.3
P.S Spring Data requires JDK 6.0 and above, and Spring Framework 3.0.x and above.
1. Project Structure
A classic Maven’s style Java project directory structure.
2. Dependency
The following libraries are required :
spring-data-mongodb Currently, the "spring-data-mongodb" jar is only available in "http://maven.springframework.org/milestone", so you have to declare this repository as well.
Updated on 13/09/2012 spring-data-mongodb is available at the Maven central repository; the Spring repository is no longer required.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mkyong.core</groupId>
    <artifactId>SpringMongoDBExample</artifactId>
    <packaging>jar</packaging>
    <version>1.0</version>
    <name>SpringMongoExample</name>
    <url>http://maven.apache.org</url>

    <dependencies>

        <!-- Spring framework -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>3.2.2.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>3.2.2.RELEASE</version>
        </dependency>

        <!-- mongodb java driver -->
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>2.11.0</version>
        </dependency>

        <!-- Spring data mongodb -->
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-mongodb</artifactId>
            <version>1.2.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib</artifactId>
            <version>2.2.2</version>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.0</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-eclipse-plugin</artifactId>
                <version>2.9</version>
                <configuration>
                    <downloadSources>true</downloadSources>
                    <downloadJavadocs>true</downloadJavadocs>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>
3. Spring Configuration, Annotation and XML
Here, we show you two ways to configure Spring data and connect to MongoDB, via annotation and XML schema.
3.1 Annotation Extending AbstractMongoConfiguration is the fastest way; it helps to configure everything you need to start, like the mongoTemplate object.
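A sketch of such a configuration class, matching the SpringMongoConfig name used in the demo below; the database name is an assumption:

package com.mkyong.config;

import com.mongodb.Mongo;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.config.AbstractMongoConfiguration;

@Configuration
public class SpringMongoConfig extends AbstractMongoConfiguration {

    @Override
    public String getDatabaseName() {
        // database name is an assumption for this sketch
        return "yourdb";
    }

    @Override
    public Mongo mongo() throws Exception {
        // connect to a local MongoDB instance on the default port
        return new Mongo("127.0.0.1", 27017);
    }
}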
User.java

package com.mkyong.model;

import org.springframework.data.annotation.Id;

public class User {

    @Id
    private String id;   // maps to the MongoDB _id field

    private String username;
    private String password;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }

    public User(String username, String password) {
        super();
        this.username = username;
        this.password = password;
    }

    @Override
    public String toString() {
        return "User [id=" + id + ", username=" + username + ", password=" + password + "]";
    }
}
5. Demo – CRUD Operations
Full example to show you how to use Spring data to perform CRUD operations in MongoDB. The Spring data APIs are quite clean and should be self-explanatory.
App.java
package com.mkyong.core;

import java.util.List;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

import com.mkyong.config.SpringMongoConfig;
import com.mkyong.model.User;

//import org.springframework.context.support.GenericXmlApplicationContext;

public class App {

    public static void main(String[] args) {

        // For XML
        // ApplicationContext ctx = new GenericXmlApplicationContext("SpringConfig.xml");

        // For Annotation
        ApplicationContext ctx = new AnnotationConfigApplicationContext(SpringMongoConfig.class);
        MongoOperations mongoOperation = (MongoOperations) ctx.getBean("mongoTemplate");

        User user = new User("mkyong", "password123");

        // save
        mongoOperation.save(user);

        // now user object got the created id
        System.out.println("1. user : " + user);

        // query to search user
        Query searchUserQuery = new Query(Criteria.where("username").is("mkyong"));

        // find the saved user again
        User savedUser = mongoOperation.findOne(searchUserQuery, User.class);
        System.out.println("2. find - savedUser : " + savedUser);

        // update password
        mongoOperation.updateFirst(searchUserQuery, Update.update("password", "new password"), User.class);

        // find the updated user object
        User updatedUser = mongoOperation.findOne(searchUserQuery, User.class);
        System.out.println("3. updatedUser : " + updatedUser);

        // delete
        mongoOperation.remove(searchUserQuery, User.class);

        // List, it should be empty now
        List<User> listUser = mongoOperation.findAll(User.class);
        System.out.println("4. Number of user = " + listUser.size());
    }
}
Output
1. user : User [id=516627653004953049d9ddf0, username=mkyong, password=password123]
2. find - savedUser : User [id=516627653004953049d9ddf0, username=mkyong, password=password123]
3. updatedUser : User [id=516627653004953049d9ddf0, username=mkyong, password=new password]
4. Number of user = 0
Some time ago I had blogged about using Morphia with MongoDB. Since then I have come across the Spring Data project and wanted to take its API for Mongo for a spin. So this blog duplicates the functionality of what was present in the Morphia one, with the difference that it uses Spring Data and demonstrates Mongo Map-Reduce as well. As in most of my recent blogs that use Spring, I am going to use a pure JavaConfig approach for the example.
1. Setting up Spring Mongo
The Spring API provides an abstract Spring Java Config class, org.springframework.data.mongodb.config.AbstractMongoConfiguration. This class requires the following methods to be implemented, getDatabaseName() and mongo() which returns a Mongo instance. The class also has a method to create a MongoTemplate. Extending the mentioned class, the following is a Mongo Config:
As per my former example, we have four primary objects that comprise our domain. A Product in the system such as an XBOX, WII, PS3 etc. ACustomer who purchases items by creating an Order. An Order has references to LineItem(s) which in turn have a quantity and a reference to a Product for that line.
2.1 The Order model object looks like the following:
// @Document to indicate the orders collection
@Document(collection = "orders")
public class Order {
    // Identifier
    @Id
    private ObjectId id;

    // DB Reference to a Customer. This is a Link to a Customer from the Customer collection
    @DBRef
    private Customer customer;

    // Line items are part of the Order and do not exist independently of the order
    private List<LineItem> lines;
    ...
}
The identifier of a POJO can be ObjectId, String or BigInteger. Note that Orders is its own rightful mongo collection; however, as LineItems do not exist without the context of an order, they are embedded. A Customer however might be associated with multiple orders, and thus the @DBRef annotation is used to link to a Customer.
3. Implementing the DAO pattern
One can use the Mongo Template directly or extend or compose a DAO class that provides standard CRUD operations. I have chosen the extension route for this example. The Spring Mongo API provides an interface org.springframework.data.repository.CrudRepository that defines methods, as indicated by the name, for CRUD operations. An extension to this interface is the org.springframework.data.repository.PagingAndSortingRepository which provides methods for paginated access to the data. One implementation of these interfaces is the SimpleMongoRepository which the DAO implementations in this example extend:
// OrderDao interface exposing only certain operations via the API
One of the quirks that I found is that I was not able to use Criteria.where("lines.product").is(product) but had to instead resort to using the $id. I believe this is a BUG and will be fixed. Another peculiarity I found between Mongo 1.0.2.RELEASE and the milestone of 1.1.0.M1 was in the save() method of SimpleMongoRepository:
// 1.0.2.RELEASE
public <T> T save(T entity) {
}

// 1.1.0.M1
public <S extends T> S save(S entity) {
}
Although the above will not cause a Runtime error upon upgrading due to erasure, it will force a user to have to override the save() or similar methods during compile time. If upgrading from 1.0.2.RELEASE to 1.1.0.M1, you will have to add the following to the OrderDaoImpl in order for it to compile:
The Order object has the following two properties, createDate and lastUpdate, which are updated prior to persisting the object. To listen for life cycle events, an implementation of the org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener can be provided that defines methods for life cycle listening. In the example provided we override the onBeforeConvert() method to set the createDate and lastUpdate properties.
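A sketch of such a listener, assuming the 1.0.x-style onBeforeConvert(T) callback and createDate/lastUpdate accessors on Order (the accessor names are assumptions):

import java.util.Date;

import org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener;

public class OrderEventListener extends AbstractMongoEventListener<Order> {

    @Override
    public void onBeforeConvert(Order order) {
        // stamp the audit dates just before the Order is converted and persisted
        if (order.getCreateDate() == null) {
            order.setCreateDate(new Date());
        }
        order.setLastUpdate(new Date());
    }
}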
The Spring Data API for Mongo has support for Indexing and ensuring the presence of indices as well. An index can be created using the MongoTemplate via:
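For example, a sketch using the 1.0.x Index/Order API, assuming a MongoTemplate bean and the orders collection declared above; createDate is just an example property:

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.index.Index;
import org.springframework.data.mongodb.core.query.Order;

public class OrderIndexes {

    // ensure a descending index on createDate in the "orders" collection
    public static void ensureOrderIndexes(MongoTemplate mongoTemplate) {
        mongoTemplate.indexOps("orders").ensureIndex(new Index().on("createDate", Order.DESCENDING));
    }
}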
The MongoTemplate supports common map reduce operations. I am leaning on the basic example from the Spring Data site and enhancing it to work with the comments example I have used in all my M/R examples in the past. A collection is created for Comments and it contains data like:
{ "_id" : ObjectId("4e5ff893c0277826074ec533"), "commenterId" : "jamesbond", "comment":"James Bond lives in a cave", "country" : "INDIA"] }
{ "_id" : ObjectId("4e5ff893c0277826074ec535"), "commenterId" : "nemesis", "comment":"Bond uses Walther PPK", "country" : "RUSSIA"] }
{ "_id" : ObjectId("4e2ff893c0277826074ec534"), "commenterId" : "ninja", "comment":"Roger Rabit wanted to be on Geico", "country" : "RUSSIA"] }
The map-reduce works off JavaScript files for the mapping and reducing functions. For the mapping function we have mapComments.js which only maps certain words:
function () {
var searchingFor = new Array("james", "2012", "cave", "walther", "bond");
var commentSplit = this.comment.split(" ");
for (var i = 0; i < commentSplit.length; i++) {
for (var j = 0; j < searchingFor.length; j++) {
if (commentSplit[i].toLowerCase() == searchingFor[j]) {
emit(commentSplit[i], 1);
}
}
}
}
For the reduce operation, another javascript file reduce.js:
function (key, values) {
var sum = 0;
for (var i = 0; i < values.length; i++) {
sum += values[i];
}
return sum;
}
The mapComments.js and the reduce.js are made available on the classpath and the M/R operation is invoked as shown below:
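A sketch of the invocation, assuming the MongoTemplate mapReduce API with classpath-resolved script resources; the ValueObject result holder and the comments collection name are assumptions:

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.mapreduce.MapReduceResults;

public class CommentWordCount {

    public static void countWords(MongoTemplate mongoTemplate) {
        MapReduceResults<ValueObject> results = mongoTemplate.mapReduce(
                "comments",                  // input collection holding the comment documents
                "classpath:mapComments.js",  // the mapping function shown above
                "classpath:reduce.js",       // the reducing function shown above
                ValueObject.class);

        for (ValueObject vo : results) {
            System.out.println(vo.getId() + " -> " + vo.getValue());
        }
    }

    // simple holder for a map-reduce result row (_id/value)
    public static class ValueObject {
        private String id;
        private float value;

        public String getId() { return id; }
        public float getValue() { return value; }
    }
}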
As always, the Spring folks keep impressing me with their API. Even with the change to their API, they preserved binary backward compatibility, thus making an upgrade easy. The MongoTemplate supports common M/R operations, sweet! I have not customized the M/R code to my liking but it's only a demo after all.
I quite liked the API, it is intuitive and easy to learn. I clearly have not explored all the options but then I am not really using Mongo at work to do the same ;-)
How to create a Spring MVC project with SpringData and MongoDB database from scratch.
Prerequisites:
- Eclipse and the Maven plugin for Eclipse
- Maven (command line, optional)
- MongoDB
- Java JDK 1.6
In Eclipse create a new Maven project: File - New - Other - Maven - Maven Project - check "Create a simple project" - select .war packaging and fill in the blanks.
At the end of this process you should have a Java project and a minimal pom.xml. Add the Spring, Spring Data and MongoDB dependencies to pom.xml.
Create controller, service, model and repository packages.
The model package will include simple POJO classes. These classes are modeled over the MongoDB collections.
User.java Class:
package com.flgor.model;
import org.springframework.data.annotation.Id;
public class User {
@Id
String id;
String name;
String password;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}
Repository package will contain only interfaces. These interfaces will extend MongoRepository interface. Into these interfaces we can declare Query methods. ( check springdata documentation for details )
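A sketch of what such an interface could look like for the User model above; the findByName method matches the call used in the service below:

package com.flgor.repository;

import java.util.List;

// the MongoRepository import below follows the old org.springframework.data.document package
// layout used elsewhere in this post; in current releases it lives in
// org.springframework.data.mongodb.repository
import org.springframework.data.document.mongodb.repository.MongoRepository;

import com.flgor.model.User;

public interface IUserRepository extends MongoRepository<User, String> {

    // query method derived from its name by Spring Data: finds users by the "name" field
    List<User> findByName(String name);
}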
Using Spring Data (MongoTemplate and MongoRepository) it is very easy and natural to work with the Mongo database.
Next is a simple service example (service package). It implements mongo collection cleanup, insert, findAll and find-by-name inside the initialise method.
package com.flgor.service;
import java.util.List;
import javax.annotation.PostConstruct;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.document.mongodb.MongoTemplate;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.flgor.model.User;
import com.flgor.repository.IProductRepository;
import com.flgor.repository.IUserRepository;
import com.flgor.model.Product;
@Repository
@Transactional
public class ApplicationService {
@Autowired
private IProductRepository productRepository;
@Autowired
private IUserRepository userRepository;
@Autowired
private MongoTemplate mongoTemplate;
private final static Logger logger = Logger
.getLogger(ApplicationService.class);
@PostConstruct
public void initialise() {
// Clean User and Product Database
mongoTemplate.dropCollection("user");
productRepository.deleteAll();
// Add an user using MongoRepository
User user = new User();
user.setName("admin");
user.setPassword("admin");
userRepository.save(user);
// Add products using MongoTemplate
for (int i = 0; i < 10; i++) {
Product product = new Product();
product.setName("product " + i);
product.setPrice((float) (i * 100));
mongoTemplate.save("product", product);
}
List < Product > productList = productRepository.findAll();
for (Product prod : productList) {
logger.info(prod.getName());
}
// findByName is only declared in IUserRepository
// the magic is done automatically by Spring
List < User > userList = userRepository.findByName("admin");
logger.info("First user is: " + userList.get(0).getName());
}
}
For the moment the controller is not implemented; there is also no view resolver. Start MongoDB (./bin/mongod), then start the tutorial with mvn tomcat:run (a prior mvn clean install may help).
The applicationService output at startup:
1423 [main] INFO com.flgor.service.ApplicationService - product 0
1423 [main] INFO com.flgor.service.ApplicationService - product 1
1423 [main] INFO com.flgor.service.ApplicationService - product 2
1423 [main] INFO com.flgor.service.ApplicationService - product 3
1423 [main] INFO com.flgor.service.ApplicationService - product 4
1423 [main] INFO com.flgor.service.ApplicationService - product 5
1423 [main] INFO com.flgor.service.ApplicationService - product 6
1423 [main] INFO com.flgor.service.ApplicationService - product 7
1423 [main] INFO com.flgor.service.ApplicationService - product 8
1423 [main] INFO com.flgor.service.ApplicationService - product 9
1440 [main] INFO com.flgor.service.ApplicationService - First user is: admin
In the mongo database, a user collection (with 1 entry) and a product collection (with 10 entries) will be created.
A typical servletName-servlet.xml for a Spring MVC project:
Use Import -> Maven -> Existing Maven Project to import the project.
Used links: http://static.springsource.org/spring-data/data-document/docs/1.0.0.M3/reference/html/ http://krams915.blogspot.com/2011/04/spring-data-mongodb-revision-for-100m2.html
In this part of my blog series I’m going to show how easy it is to access a MongoDB datastore with Spring Data MongoDB.
MongoDB
MongoDB is a so called NoSQL datastore for document-oriented storage. A good place to start with MongoDB is the Developer Zone on the project’s homepage. After downloading and installing MongoDB we create a folder for data storage and start the server with
and are welcomed by a web admin interface at http://localhost:28017/. To play around with MongoDB, use the interactive mongo shell:
C:\dev\bin\mongo\bin>mongo
MongoDB shell version: 2.0.2
connecting to: test
> show dbs
admin   (empty)
local   (empty)
test    0.078125GB
> show collections
foo
system.indexes
> db.foo.save({a:1, b:"bar"})
> db.foo.save({a:1, b:"bar"})
> db.foo.save({c:2, d:"doo"})
> db.foo.find()
{ "_id" : ObjectId("4f1e575efc25822cd8ff8cf2"), "a" : 1, "b" : "bar" }
{ "_id" : ObjectId("4f1e5766fc25822cd8ff8cf3"), "a" : 1, "b" : "bar" }
{ "_id" : ObjectId("4f1e5771fc25822cd8ff8cf4"), "c" : 2, "d" : "doo" }
We display the names of the databases, then the collections (a collection is a logical namespace) inside the default database test. After that, we persist three documents in JSON notation. Doing so we observe:
each document has a unique id
there may be more than one document holding the same attribute set in the same collection
documents with different structures can be stored in the same collection
So a collection is really not the same thing as a table of a relational database. We also have no support for ACID transaction handling. Welcome to the cloud!
Spring Data MongoDB
Spring Data MongoDB works basically the same way as Spring Data JPA: you define your custom repository finders by writing only interface methods and Spring provides an implementation at runtime. The basic CRUD operation are supported without the need to write a single line of code.
Configuration
First of all we let Maven download the latest release version of Spring Data MongoDB:
Using the mongo namespace, your Spring application context can be configured quite easily:
<!-- Connection to MongoDB server -->
<mongo:db-factory host="localhost" port="27017" dbname="test"/>

<!-- MongoDB Template -->
<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg name="mongoDbFactory" ref="mongoDbFactory"/>
</bean>

<!-- Package w/ automagic repositories -->
<mongo:repositories base-package="mongodb"/>
The connection to our MongoDB server and the database to use are configured with the <mongo:db-factory .../> tag. For fine tuning of the connection (connection pooling, clustering etc.) use the elements <mongo:mongo> and <mongo:options/> instead. Then we define a template that refers to our DB factory. Finally we have to configure the package holding our repository interfaces (same as with Spring Data JPA). By default the only MongoDBTemplate inside the application context is used. If there is more than one template, you can specify which one to use with <mongo:repositories mongo-template-ref="...">.
Example
Similar to the blog post on Spring Data JPA, we'd like to persist some simple User objects:
You may have noticed that a collection named user was created on the fly. If you want a non-default collection name (the lowercase name of the Java class), use the document annotation: @Document(collection="..."). The fully qualified class name is persisted with the _class attribute. There are two indexes now: the default index for the id attribute and the index generated from the class attribute fullName with the @Indexed annotation.
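A minimal sketch of such a document class, assuming the attributes mentioned above (id and fullName):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class User {

    @Id
    private String id;

    // @Indexed generates an additional index on this attribute
    @Indexed
    private String fullName;

    // getters and setters omitted for brevity
}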
With the @Query annotation you can define arbitrary queries in MongoDB syntax. The second query shows a finder that provides a search with regular expressions. When writing your first queries, the comparison between MongoDB and SQL can be very helpful.
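A sketch of a repository with a derived finder and an annotated regex finder, assuming the User document above; the method names and the query string are illustrative:

import java.util.List;

import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

public interface UserRepository extends MongoRepository<User, String> {

    // derived query: Spring Data builds the MongoDB query from the method name
    List<User> findByFullName(String fullName);

    // @Query lets you write the query in MongoDB syntax; ?0 refers to the first parameter
    @Query("{ 'fullName' : { '$regex' : ?0, '$options' : 'i' } }")
    List<User> findByFullNameRegex(String regex);
}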
The complete source code of the example can be downloaded from Github.
MongoDBTemplate
Not all MongoDB features are exposed with the interface based repository approach. If you want to manage collections or use map/reduce, you have to use the API of the MongoDBTemplate.
Summary
After a short introduction to MongoDB we were able to persist the first object very fast using Spring Data MongoDB. After that, we wrote custom finders with just a few lines of code.
A Spring application using Spring Data MongoDB as a persistence layer can be deployed to a cloud platform like CloudFoundry. This blog post shows how easily that can be done.
While both methods (command-line options and a configuration file) are functionally equivalent and all settings are similar, the configuration file method is preferable. If you installed from a package and have started MongoDB using your system's control script, you're already using a configuration file.
To start mongod or mongos using a config file, use one of the following forms:
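For example (the config file path here is only a placeholder):

mongod --config /etc/mongodb.conf
mongod -f /etc/mongodb.conf
mongos --config /etc/mongos.conf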
Declare all settings in this file using the following form:
<setting> = <value>
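For example, a minimal configuration file might contain (paths and values here are illustrative; each setting is described below):

dbpath = /srv/mongodb
logpath = /var/log/mongodb/mongod.log
logappend = true
fork = true
port = 27017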
New in version 2.0: Before version 2.0, Boolean (i.e. true|false) or "flag" parameters registered as true if they appeared in the configuration file, regardless of their value.
Settings
verbose
Default: false
Increases the amount of internal reporting returned on standard output or in the log file generated by logpath. To enable verbose or to enable increased verbosity with vvvv, set these options as in the following example:
verbose = true
vvvv = true
MongoDB has the following levels of verbosity:
v
Default: false
Alternate form of verbose.
vv
Default: false
Additional increase in verbosity of output and logging.
vvv
Default: false
Additional increase in verbosity of output and logging.
vvvv
Default: false
Additional increase in verbosity of output and logging.
vvvvv
Default: false
Additional increase in verbosity of output and logging.
port
Default: 27017
Specifies a TCP port for the mongod or mongos instance to listen for client connections. UNIX-like systems require root access for ports with numbers lower than 1024.
bind_ip
Default: All interfaces.
Set this option to configure the mongod or mongos process to bind to and listen for connections from applications on this address. You may attach mongod or mongos instances to any interface; however, if you attach the process to a publicly accessible interface, implement proper authentication or firewall restrictions to protect the integrity of your database.
You may concatenate a list of comma separated values to bind mongod to multiple IP addresses.
maxConns
Default: depends on system (i.e. ulimit and file descriptor) limits. Unless set, MongoDB will not limit its own connections.
Specifies a value to set the maximum number of simultaneous connections that mongod or mongos will accept. This setting has no effect if it is higher than your operating system’s configured maximum connection tracking threshold.
This is particularly useful for mongos if you have a client that creates a number of connections but allows them to timeout rather than close the connections. When you set maxConns, ensure the value is slightly higher than the size of the connection pool or the total number of connections to prevent erroneous connection spikes from propagating to the members of a shard cluster.
Note
You cannot set maxConns to a value higher than 20000.
objcheck
Default: true
Changed in version 2.4: The default setting for objcheck became true in 2.4. In earlier versions objcheck was false by default.
Forces the mongod to validate all requests from clients upon receipt to ensure that clients never insert invalid documents into the database. For objects with a high degree of sub-document nesting, objcheck can have a small impact on performance. You can set noobjcheck to disable object checking at run-time.
noobjcheck
New in version 2.4.
Default: false
Disables the default object validation that MongoDB performs on all incoming BSON documents.
logpath
Default: None. (i.e. /dev/stdout)
Specify the path to a file name for the log file that will hold all diagnostic logging information.
Unless specified, mongod will output all log information to the standard output. Unless logappend is true, the logfile will be overwritten when the process restarts.
Note
Currently, MongoDB will overwrite the contents of the log file if the logappend is not used. This behavior may change in the future depending on the outcome of SERVER-4499.
logappend
Default: false
Set to true to add new entries to the end of the logfile rather than overwriting the content of the log when the process restarts.
If this setting is not specified, then MongoDB will overwrite the existing logfile upon start up.
Note
The behavior of the logging system may change in the near future in response to the SERVER-4499 case.
syslog
New in version 2.1.0.
Sends all logging output to the host’s syslog system rather than to standard output or a log file as with logpath.
Warning
You cannot use syslog with logpath.
pidfilepath
Default: None.
Specify a file location to hold the “PID” or process ID of the mongod process. Useful for tracking the mongod process in combination with the fork setting.
Without a specified pidfilepath, mongos creates no PID file.
nounixsocket
Default: false
Set to true to disable listening on the UNIX socket. mongod and mongos always listen on the UNIX socket, unless nounixsocket is set, bind_ip is not set, or bind_ip specifies 127.0.0.1.
unixSocketPrefix
Default: /tmp
Specifies a path for the UNIX socket. Unless this option has a value mongod creates a socket with /tmp as a prefix.
MongoDB will always create and listen on a UNIX socket, unless nounixsocket is set, bind_ip is not set, or bind_ip specifies 127.0.0.1.
fork
Default: false
Set to true to enable a daemon mode for mongod that runs the process in the background.
auth
Default: false
Set to true to enable database authentication for users connecting from remote hosts. Configure users via the mongo shell. If no users exist, the localhost interface will continue to have access to the database until you create the first user.
cpu
Default: false
Set to true to force mongod to report every four seconds CPU utilization and the amount of time that the processor waits for I/O operations to complete (i.e. I/O wait.) MongoDB writes this data to standard output, or the logfile if using the logpath option.
dbpath
Default: /data/db/
Set this value to designate a directory for the mongod instance to store its data. Typical locations include: /srv/mongodb, /var/lib/mongodb or /opt/mongodb
Unless specified, mongod will look for data files in the default /data/db directory. (Windows systems use the \data\db directory.) If you installed using a package management system, check the /etc/mongodb.conf file provided by your packages to see the configuration of the dbpath.
diaglog
Default: 0
Creates a very verbose, diagnostic log for troubleshooting and recording various errors. MongoDB writes these log files in the dbpath directory in a series of files that begin with the string diaglog with the time logging was initiated appended as a hex string.
The value of this setting configures the level of verbosity. Possible values, and their impact are as follows.
Value   Setting
0       Off. No logging.
1       Log write operations.
2       Log read operations.
3       Log both read and write operations.
7       Log write and some read operations.
You can use the mongosniff tool to replay this output for investigation. Given a typical diaglog file, located at /data/db/diaglog.4f76a58c, you might use a command in the following form to read these files:
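Something along these lines (a sketch; the file name is the example path mentioned above):

mongosniff --source DIAGLOG /data/db/diaglog.4f76a58c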
diaglog is for internal use and not intended for most users.
Warning
Setting the diagnostic level to 0 will cause mongod to stop writing data to the diagnostic log file. However, the mongod instance will continue to keep the file open, even if it is no longer writing data to the file. If you want to rename, move, or delete the diagnostic log you must cleanly shut down the mongod instance before doing so.
directoryperdb
Default: false
Set to true to modify the storage pattern of the data directory to store each database's files in a distinct folder. This option will create directories within the dbpath named for each database.
Use this option in conjunction with your file system and device configuration so that MongoDB will store data on a number of distinct disk devices to increase write throughput or disk capacity.
Warning
If you have an existing mongod instance and dbpath, and you want to enable directoryperdb, you must migrate your existing databases to directories before setting directoryperdb to access those databases.
Example
Given a dbpath directory with the following items:
journal
Default: (on 64-bit systems) true
Default: (on 32-bit systems) false
Set to true to enable operation journaling to ensure write durability and data consistency.
Set to false to prevent the overhead of journaling in situations where durability is not required. To reduce the impact of the journaling on disk usage, you can leave journal enabled, and set smallfiles to true to reduce the size of the data and journal files.
Note
You must use nojournal to disable journaling on 64-bit systems.
journalCommitInterval
Default: 100 or 30
Set this value to specify the maximum amount of time for mongod to allow between journal operations. Lower values increase the durability of the journal, at the possible expense of disk performance.
The default journal commit interval is 100 milliseconds if a single block device (e.g. physical volume, RAID device, or LVM volume) contains both the journal and the data files.
If different block devices provide the journal and data files the default journal commit interval is 30 milliseconds.
This option accepts values between 2 and 300 milliseconds.
To force mongod to commit to the journal more frequently, you can specify j:true. When a write operation with j:true is pending, mongod will reduce journalCommitInterval to a third of the set value.
ipv6
Default: false
Set to true to enable IPv6 support and allow clients to connect to mongod using IPv6 networks. mongod disables IPv6 support by default in mongod and all utilities.
jsonp
Default: false
Set to true to permit JSONP access via an HTTP interface. Consider the security implications of allowing this activity before setting this option.
noauth
Default: true
Disable authentication. Currently the default. Exists for future compatibility and clarity.
For consistency use the auth option.
nohttpinterface
Default: false
Set to true to disable the HTTP interface. This command will override the rest and disable the HTTP interface if you specify both.
Changed in version 2.1.2: The nohttpinterface option is not available for mongos instances before 2.1.2
nojournal
Default: (on 64-bit systems) false
Default: (on 32-bit systems) true
Set nojournal=true to disable durability journaling. By default, mongod enables journaling in 64-bit versions after v2.0.
Note
You must use journal to enable journaling on 32-bit systems.
noprealloc
Default: false
Set noprealloc=true to disable the preallocation of data files. This will shorten the start up time in some cases, but can cause significant performance penalties during normal operations.
noscripting
Default: false
Set noscripting=true to disable the scripting engine.
notablescan
Default: false
Set notablescan=true to forbid operations that require a table scan.
nssize
Default: 16
Specify this value in megabytes. The maximum size is 2047 megabytes.
Use this setting to control the default size for all newly created namespace files (i.e .ns). This option has no impact on the size of existing namespace files.
profile
Default: 0
Modify this value to change the level of database profiling, which inserts information about operation performance into the output of mongod or the log file if specified by logpath. The following levels are available:
Level   Setting
0       Off. No profiling.
1       On. Only includes slow operations.
2       On. Includes all operations.
By default, mongod disables profiling. Database profiling can impact database performance because the profiler must record and process all database operations. Enable this option only after careful consideration.
quota
Default: false
Set to true to enable a maximum limit for the number of data files each database can have. The default quota is 8 data files when quota is true. Adjust the quota size with the quotaFiles setting.
quotaFiles
Default: 8
Modify limit on the number of data files per database. This option requires the quota setting.
repair
Default: false
Set to true to run a repair routine on all databases following start up. In general you should set this option on the command line and not in the configuration file or in a control script.
Use the mongod --repair option to access this functionality.
Note
Because mongod rewrites all of the database files during the repair routine, if you do not run repair under the same user account as mongod usually runs, you will need to run chown on your database files to correct the permissions before starting mongod again.
repairpath
Default: A _tmp directory in the dbpath.
Specify the path to the directory containing MongoDB data files, to use in conjunction with the repair setting or mongod --repair operation. Defaults to a _tmp directory within the dbpath.
slowms
Default: 100
Specify values in milliseconds.
Sets the threshold for mongod to consider a query "slow" for the database profiler. The database logs all slow queries to the log, even when the profiler is not turned on. When the database profiler is on, mongod writes the profiling data to the system.profile collection.
See also
“profile“
smallfiles
Default: false
Set to true to modify MongoDB to use a smaller default data file size. Specifically, smallfiles reduces the initial size for data files and limits them to 512 megabytes. The smallfiles setting also reduces the size of each journal file from 1 gigabyte to 128 megabytes.
Use the smallfiles setting if you have a large number of databases that each hold a small quantity of data. The smallfiles setting can lead mongod to create many files, which may affect performance for larger databases.
syncdelay
Default: 60
mongod writes data very quickly to the journal, and lazily to the data files. syncdelay controls how much time can pass before MongoDB flushes data to the database files via an fsync operation. The default setting is 60 seconds. In almost every situation you should not set this value and use the default setting.
syncdelay has no effect on the journal files or journaling.
Warning
If you set syncdelay to 0, MongoDB will not sync the memory mapped files to disk. Do not set this value on production systems.
sysinfo
Default: false
When set to true, mongod returns diagnostic system information regarding the page size, the number of physical pages, and the number of available physical pages to standard output.
More typically, run this operation by way of the mongod --sysinfo command. When running with sysinfo, mongod only outputs the page information and no database process will start.
upgrade
Default: false
When set to true this option upgrades the on-disk data format of the files specified by the dbpath to the latest version, if needed.
This option only affects the operation of mongod if the data files are in an old format.
When specified for a mongos instance, this option updates the meta data format used by the config database.
Note
In most cases you should not set this value, so you can exercise the most control over your upgrade process. See the MongoDB release notes (on the download page) for more information about the upgrade process.
traceExceptions
Default: false
For internal diagnostic use only.
quiet
Default: false
Runs the mongod or mongos instance in a quiet mode that attempts to limit the amount of output. This option suppresses:
For production systems this option is not recommended as it may make tracking problems during particular connections much more difficult.
setParameter
New in version 2.4.
Specifies an option to configure on startup. Specify multiple options with multiple setParameter options. See mongod Parameters for full documentation of these parameters. The setParameter database command provides access to many of these parameters.
Declare all setParameter settings in this file using the following form:
setParameter= <parameter>=<value>
For mongod the following options are available using setParameter:
replSet
Use this setting to configure replication with replica sets. Specify a replica set name as an argument to this setting. All hosts must have the same set name.
oplogSize
Specifies a maximum size in megabytes for the replication operation log (i.e. the oplog). mongod creates an oplog based on the maximum amount of space available. For 64-bit systems, the oplog is typically 5% of available disk space.
Once the mongod has created the oplog for the first time, changing oplogSize will not affect the size of the oplog.
fastsync
Default: false
In the context of replica set replication, set this option to true if you have seeded this member with a snapshot of the dbpath of another member of the set. Otherwise the mongod will attempt to perform an initial sync, as though the member were a new member.
Warning
If the data is not perfectly synchronized and mongod starts with fastsync, then the secondary or slave will be permanently out of sync with the primary, which may cause significant consistency problems.
replIndexPrefetch
New in version 2.2.
Default: all
Values: all, none, and _id_only
You can only use replIndexPrefetch in conjunction with replSet.
By default secondary members of a replica set will load all indexes related to an operation into memory before applying operations from the oplog. You can modify this behavior so that the secondaries will only load the _id index. Specify _id_only or none to prevent the mongod from loading any index into memory.
Master/Slave Replication
master
Default: false
Set to true to configure the current instance to act as master instance in a replication configuration.
slave
Default: false
Set to true to configure the current instance to act as slave instance in a replication configuration.
source
Default: <>
Form: <host><:port>
Used with the slave setting to specify the master instance from which this slave instance will replicate
only
Default: <>
Used with the slave option, only specifies a single database to replicate.
slaveDelay
Default: 0
Used with the slave setting, slaveDelay configures a “delay” in seconds, for this slave to wait to apply operations from the master instance.
autoresync
Default: false
Used with the slave setting, set autoresync to true to force the slave to automatically resync if it is more than 10 seconds behind the master. This setting may be problematic if the oplogSize of the oplog is too small. If the oplog is not large enough to store the difference in changes between the master's current state and the state of the slave, this instance will forcibly resync itself unnecessarily. When you set the autoresync option to false, the slave will not attempt an automatic resync more than once in a ten minute period.
Sharded Cluster Options
configsvr
Default: false
Set this value to true to configure this mongod instance to operate as the config database of a shard cluster. When running with this option, clients will not be able to write data to any database other than config and admin. The default port for a mongod with this option is 27019 and the default dbpath directory is /data/configdb, unless specified.
Changed in version 2.2: configsvr also sets smallfiles.
Changed in version 2.4: configsvr creates a local oplog.
Do not use configsvr with replSet or shardsvr. Config servers cannot be a shard server or part of a replica set.
shardsvr
Default: false
Set this value to true to configure this mongod instance as a shard in a partitioned cluster. The default port for these instances is 27018. The only effect of shardsvr is to change the port number.
configdb
Default: None.
Format: <config1>,<config2><:port>,<config3>
Set this option to specify a configuration database (i.e. config database) for the sharded cluster. You must specify either 1 configuration server or 3 configuration servers, in a comma separated list.
mongos instances read from the first config server in the list provided. All mongos instances must specify the hosts to the configdb setting in the same order.
If your configuration databases reside in more than one data center, order the hosts in the configdb setting so that the config database that is closest to the majority of your mongos instances is first in the list.
Warning
Never remove a config server from the configdb parameter, even if the config server or servers are not available, or offline.
test
Default: false
Only runs unit tests and does not start a mongos instance.
This setting only affects mongos processes and is for internal testing use only.
chunkSize
Default: 64
The value of this option determines the size of each chunk of data distributed around the sharded cluster. The default value is 64 megabytes. Larger chunks may lead to an uneven distribution of data, while smaller chunks may lead to frequent and unnecessary migrations. However, in some circumstances it may be necessary to set a different chunk size.
This setting only affects mongos processes. Furthermore, chunkSize only sets the chunk size when initializing the cluster for the first time. If you modify the run-time option later, the new value will have no effect. See the "Modify Chunk Size" procedure if you need to change the chunk size on an existing sharded cluster.
localThreshold
New in version 2.2.
localThreshold affects the logic that mongos uses when selecting replica set members to pass read operations to from clients. Specify a value for localThreshold in milliseconds. The default value is 15, which corresponds to the default value in all of the client drivers.
When mongos receives a request that permits reads to secondary members, the mongos will:
find the member of the set with the lowest ping time.
construct a list of replica set members that is within a ping time of 15 milliseconds of the nearest suitable member of the set.
If you specify a value for localThreshold, mongos will construct the list of replica members that are within the latency allowed by this value.
The mongos will select a member to read from at random from this list.
The ping time used for a set member compared by the localThreshold setting is a moving average of recent ping times, calculated, at most, every 10 seconds. As a result, some queries may reach members above the threshold until the mongos recalculates the average.
noAutoSplit is for internal use and is only available on mongos instances.
New in version 2.0.7.
noAutoSplit prevents mongos from automatically inserting metadata splits in a sharded collection. If set on all mongos, this will prevent MongoDB from creating new chunks as the data in a collection grows.
Because any mongos in a cluster can create a split, to totally disable splitting in a cluster you must set noAutoSplit on all mongos.
Warning
With noAutoSplit enabled, the data in your sharded cluster may become imbalanced over time. Enable with caution.
sslOnNormalPorts
Enables SSL for mongod or mongos. With sslOnNormalPorts, a mongod or mongos requires SSL encryption for all connections on the default MongoDB port, or the port specified by port. By default, sslOnNormalPorts is disabled.
sslPEMKeyPassword
Specifies the password to de-crypt the certificate-key file (i.e. sslPEMKeyFile). Only use sslPEMKeyPassword if the certificate-key file is encrypted. In all cases, mongod or mongos will redact the password from all logging and reporting output.
Changed in version 2.4: sslPEMKeyPassword is only needed when the private key is encrypted. In earlier versions mongod or mongos would require sslPEMKeyPassword whenever using sslOnNormalPorts, even when the private key was not encrypted.
sslCAFile
Specifies the .pem file that contains the root certificate chain from the Certificate Authority. Specify the file name of the .pem file using relative or absolute paths.
sslWeakCertificateValidation
Disables the requirement for SSL certificate validation that sslCAFile enables. With sslWeakCertificateValidation, mongod or mongos will accept connections if the client does not present a certificate when establishing the connection.
If the client presents a certificate and mongod or mongos has sslWeakCertificateValidation enabled, mongod or mongos will validate the certificate using the root certificate chain specified by sslCAFile, and reject clients with invalid certificates.
Use sslWeakCertificateValidation if you have a mixed deployment that includes clients that do not or cannot present certificates to mongodor mongos.
sslFIPSMode
When specified, mongod or mongos will use the FIPS mode of the installed OpenSSL library. Your system must have a FIPS compliant OpenSSL library to use sslFIPSMode.