As we have just released the first milestone of the Spring Data JPA project, I'd like to give you a quick introduction to its features. As you probably know, the Spring framework provides support for building a JPA-based data access layer. So what does Spring Data JPA add to this base support? To answer that question, I'd like to start with the data access components for a sample domain implemented using plain JPA + Spring and point out areas that leave room for improvement. After we've done that, I will refactor the implementations to use the Spring Data JPA features to address these problem areas. The sample project as well as a step-by-step guide of the refactoring steps can be found on GitHub.
The domain
To keep things simple we start with a tiny, well-known domain: we have Customers that have Accounts.
@Entity
public class Customer {

  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  private Long id;

  private String firstname;
  private String lastname;

  // … methods omitted
}
@Entity
public class Account {

  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  private Long id;

  @ManyToOne
  private Customer customer;

  @Temporal(TemporalType.DATE)
  private Date expiryDate;

  // … methods omitted
}
The Account has an expiry date that we will use at a later stage. Beyond that there's nothing really special about the classes or the mapping - it uses plain JPA annotations. Now let's take a look at the component managing Account objects:
@Repository
@Transactional(readOnly = true)
class AccountServiceImpl implements AccountService {

  @PersistenceContext
  private EntityManager em;

  @Override
  @Transactional
  public Account save(Account account) {

    if (account.getId() == null) {
      em.persist(account);
      return account;
    } else {
      return em.merge(account);
    }
  }

  @Override
  public List<Account> findByCustomer(Customer customer) {

    TypedQuery<Account> query = em.createQuery("select a from Account a where a.customer = ?1", Account.class);
    query.setParameter(1, customer);
    return query.getResultList();
  }
}
I deliberately named the class *Service to avoid name clashes, as we will introduce a repository layer when we start refactoring. But conceptually the class here is a repository rather than a service. So what do we have here, actually?
The class is annotated with @Repository to enable exception translation from JPA exceptions to Spring's DataAccessException hierarchy. Beyond that we use @Transactional to make sure the save(…) operation runs in a transaction and to allow setting the readOnly flag (at the class level) for findByCustomer(…). This triggers some performance optimizations inside the persistence provider as well as on the database level.
As we want to free clients from the decision whether to call merge(…) or persist(…) on the EntityManager, we use the id field of the Account to decide whether we consider an Account object new or not. This logic could of course be extracted into a common superclass, as we probably don't want to repeat it for every domain-object-specific repository implementation. The query method is quite straightforward as well: we create a query, bind a parameter and execute the query to get a result. It's almost so straightforward that one could regard the implementation code as boilerplate: with a little bit of imagination it's derivable from the method signature - we expect a List of Accounts, the query is quite close to the method name and we simply bind the method parameter to it. So as you can see, there's room for improvement.
Spring Data repository support
Before we start refactoring the implementation, note that the sample project contains test cases that can be run in the course of the refactoring to verify the code still works. Let's now see how we can improve the implementation.
Spring Data JPA provides a repository programming model that starts with an interface per managed domain object:
public interface AccountRepository extends JpaRepository<Account, Long> { … }
Defining this interface serves two purposes: First, by extending JpaRepository we get a bunch of generic CRUD methods in our type that allow saving Accounts, deleting them and so on. Second, this allows the Spring Data JPA repository infrastructure to scan the classpath for this interface and create a Spring bean for it.
To have Spring create a bean that implements this interface, all you need to do is use the Spring JPA namespace and activate the repository support using the appropriate element:
<jpa:repositories base-package="com.acme.repositories" />
This scans all packages below com.acme.repositories for interfaces extending JpaRepository and creates a Spring bean for each of them, backed by an implementation of SimpleJpaRepository. Let's take a first step and refactor our AccountService implementation a little bit to use our newly introduced repository interface:
@Repository
@Transactional(readOnly = true)
class AccountServiceImpl implements AccountService {

  @PersistenceContext
  private EntityManager em;

  @Autowired
  private AccountRepository repository;

  @Override
  @Transactional
  public Account save(Account account) {
    return repository.save(account);
  }

  @Override
  public List<Account> findByCustomer(Customer customer) {

    TypedQuery<Account> query = em.createQuery("select a from Account a where a.customer = ?1", Account.class);
    query.setParameter(1, customer);
    return query.getResultList();
  }
}
After this refactoring, we simply delegate the call to save(…) to the repository. By default the repository implementation will consider an entity new if its id property is null, just like in the previous example (note that you can gain more detailed control over that decision if necessary). Additionally, we can get rid of the @Transactional annotation on the method, as the CRUD methods of the Spring Data JPA repository implementation are already annotated with @Transactional.
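The "more detailed control" mentioned above is the idea behind Spring Data's Persistable interface, which lets the entity itself answer the "is new?" question. Below is a framework-free sketch: the two-method interface mirrors Spring Data's org.springframework.data.domain.Persistable, and the persisted flag plus markPersisted() helper are illustrative assumptions (in a real entity such a flag would typically be flipped by a @PostPersist/@PostLoad callback):

```java
// Minimal mirror of Spring Data's Persistable<ID> so the idea compiles
// without the framework on the classpath.
interface Persistable<ID> {
    ID getId();
    boolean isNew();
}

class Account implements Persistable<Long> {

    private Long id;
    private boolean persisted; // hypothetical flag instead of a null check on id

    public Long getId() { return id; }

    // Custom rule: the entity is new until it has been stored once.
    public boolean isNew() { return !persisted; }

    void markPersisted() { persisted = true; }
}

public class PersistableSketch {
    public static void main(String[] args) {
        Account account = new Account();
        System.out.println(account.isNew());  // true before the first save
        account.markPersisted();
        System.out.println(account.isNew());  // false afterwards
    }
}
```

With such a hook in place, the repository infrastructure would call isNew() instead of inspecting the id property.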
Next we will refactor the query method. Let’s follow the same delegating strategy for the query method as with the save method. We introduce a query method on the repository interface and have our original method delegate to that newly introduced method:
@Transactional(readOnly = true)
public interface AccountRepository extends JpaRepository<Account, Long> {

  List<Account> findByCustomer(Customer customer);
}

@Repository
@Transactional(readOnly = true)
class AccountServiceImpl implements AccountService {

  @Autowired
  private AccountRepository repository;

  @Override
  @Transactional
  public Account save(Account account) {
    return repository.save(account);
  }

  @Override
  public List<Account> findByCustomer(Customer customer) {
    return repository.findByCustomer(customer);
  }
}
Let me add a quick note on the transaction handling here. In this very simple case we could remove the @Transactional annotations from the AccountServiceImpl class entirely, as the repository's CRUD methods are transactional and the query method is already marked with @Transactional(readOnly = true) at the repository interface. Still, keeping the service-level methods marked as transactional (even if not strictly needed here) makes it explicitly clear at the service level that operations run inside a transaction. Beyond that, if a service-layer method were modified to make multiple calls to repository methods, all the code would still execute inside a single transaction, as the repository's inner transactions would simply join the outer one started at the service layer. The transactional behavior of the repositories and the possibilities to tweak it are documented in detail in the reference documentation.
Run the test cases again and see that they still pass. But wait - we didn't provide any implementation for findByCustomer(…), right? So how does this work?
Query methods
When Spring Data JPA creates the Spring bean instance for the AccountRepository interface, it inspects all query methods defined in it and derives a query for each of them. By default, Spring Data JPA automatically parses the method name and creates a query from it. The query is implemented using the JPA criteria API. In this case the findByCustomer(…) method is logically equivalent to the JPQL query select a from Account a where a.customer = ?1. The parser that analyzes the method name supports quite a large set of keywords such as And, Or, GreaterThan, LessThan, Like, IsNull, Not and so on. You can also add OrderBy clauses if you like. For a detailed overview please check out the reference documentation. This mechanism gives us a query method programming model like the one you may know from Grails or Spring Roo.
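To make the keyword rules concrete, here is a sketch of a few derived query methods for our Account domain. The method names are illustrative assumptions built from the documented keywords (GreaterThan, IsNull, OrderBy); JpaRepository and the entities are stubbed out so the snippet compiles standalone, and each comment shows the JPQL the method name would roughly translate to:

```java
import java.util.Date;
import java.util.List;

// Stub standing in for org.springframework.data.jpa.repository.JpaRepository
interface JpaRepository<T, ID> {}

class Customer {}
class Account {}

interface AccountRepository extends JpaRepository<Account, Long> {

    // select a from Account a where a.customer = ?1 and a.expiryDate > ?2
    List<Account> findByCustomerAndExpiryDateGreaterThan(Customer customer, Date date);

    // select a from Account a where a.expiryDate is null
    List<Account> findByExpiryDateIsNull();

    // select a from Account a where a.customer = ?1 order by a.expiryDate desc
    List<Account> findByCustomerOrderByExpiryDateDesc(Customer customer);
}
```

No implementation is written for any of these; the infrastructure derives the queries from the names alone.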
Now let's suppose you want to be explicit about the query to be used. To do so, you can either declare a JPA named query that follows a naming convention (in this case Account.findByCustomer) in an annotation on the entity or in your orm.xml. Alternatively, you can annotate your repository method with @Query:
@Transactional(readOnly = true)
public interface AccountRepository extends JpaRepository<Account, Long> {

  @Query("<JPQ statement here>")
  List<Account> findByCustomer(Customer customer);
}
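As a hypothetical example of filling in that placeholder, the annotation could carry the very JPQL the method name would otherwise have derived. The @Query annotation below is a local stub mirroring Spring Data's org.springframework.data.jpa.repository.Query, so the snippet compiles on its own:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.List;

// Local stub mirroring org.springframework.data.jpa.repository.Query
@Retention(RetentionPolicy.RUNTIME)
@interface Query { String value(); }

// Stub for the Spring Data base interface and the entities
interface JpaRepository<T, ID> {}
class Customer {}
class Account {}

interface AccountRepository extends JpaRepository<Account, Long> {

    // Explicit JPQL instead of deriving the query from the method name
    @Query("select a from Account a where a.customer = ?1")
    List<Account> findByCustomer(Customer customer);
}
```

The declared query always takes precedence over the name-based derivation for that method.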
Now let's do a before/after comparison of the CustomerServiceImpl, applying the features that we've seen so far:
@Repository
@Transactional(readOnly = true)
public class CustomerServiceImpl implements CustomerService {

  @PersistenceContext
  private EntityManager em;

  @Override
  public Customer findById(Long id) {
    return em.find(Customer.class, id);
  }

  @Override
  public List<Customer> findAll() {
    return em.createQuery("select c from Customer c", Customer.class).getResultList();
  }

  @Override
  public List<Customer> findAll(int page, int pageSize) {

    TypedQuery<Customer> query = em.createQuery("select c from Customer c", Customer.class);
    query.setFirstResult(page * pageSize);
    query.setMaxResults(pageSize);
    return query.getResultList();
  }

  @Override
  @Transactional
  public Customer save(Customer customer) {

    // Is new?
    if (customer.getId() == null) {
      em.persist(customer);
      return customer;
    } else {
      return em.merge(customer);
    }
  }

  @Override
  public List<Customer> findByLastname(String lastname, int page, int pageSize) {

    TypedQuery<Customer> query = em.createQuery("select c from Customer c where c.lastname = ?1", Customer.class);
    query.setParameter(1, lastname);
    query.setFirstResult(page * pageSize);
    query.setMaxResults(pageSize);
    return query.getResultList();
  }
}
Okay, let's create the CustomerRepository and eliminate the CRUD methods first:
@Transactional(readOnly = true)
public interface CustomerRepository extends JpaRepository<Customer, Long> { … }

@Repository
@Transactional(readOnly = true)
public class CustomerServiceImpl implements CustomerService {

  @PersistenceContext
  private EntityManager em;

  @Autowired
  private CustomerRepository repository;

  @Override
  public Customer findById(Long id) {
    return repository.findById(id).orElse(null);
  }

  @Override
  public List<Customer> findAll() {
    return repository.findAll();
  }

  @Override
  public List<Customer> findAll(int page, int pageSize) {

    TypedQuery<Customer> query = em.createQuery("select c from Customer c", Customer.class);
    query.setFirstResult(page * pageSize);
    query.setMaxResults(pageSize);
    return query.getResultList();
  }

  @Override
  @Transactional
  public Customer save(Customer customer) {
    return repository.save(customer);
  }

  @Override
  public List<Customer> findByLastname(String lastname, int page, int pageSize) {

    TypedQuery<Customer> query = em.createQuery("select c from Customer c where c.lastname = ?1", Customer.class);
    query.setParameter(1, lastname);
    query.setFirstResult(page * pageSize);
    query.setMaxResults(pageSize);
    return query.getResultList();
  }
}
So far so good. What is left now are two methods that deal with a common scenario: you don't want to access all entities of a given query but rather only a page of them (e.g. page 1 with a page size of 10). Right now this is addressed with two integers that limit the query appropriately. There are two issues with this. Together, the two integers actually represent a concept, which is not made explicit here. Beyond that, we return a simple List, so we lose metadata about the actual page of data: is it the first page? Is it the last one? How many pages are there in total? Spring Data provides an abstraction consisting of two interfaces: Pageable (to capture pagination request information) and Page (to capture the result as well as meta-information). So let's add findByLastname(…) to the repository interface and rewrite findAll(…) and findByLastname(…) as follows:
@Transactional(readOnly = true)
public interface CustomerRepository extends JpaRepository<Customer, Long> {

  Page<Customer> findByLastname(String lastname, Pageable pageable);
}

@Override
public Page<Customer> findAll(Pageable pageable) {
  return repository.findAll(pageable);
}

@Override
public Page<Customer> findByLastname(String lastname, Pageable pageable) {
  return repository.findByLastname(lastname, pageable);
}
Make sure you adapt the test cases to the signature changes; then they should run fine. This boils down to two things: we have CRUD methods supporting pagination, and the query execution mechanism is aware of Pageable parameters as well. At this stage our wrapping classes actually become obsolete, as a client could use the repository interfaces directly - we got rid of the entire implementation code.
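To make the page metadata tangible, here is a toy, framework-free model of what a Page carries beyond the raw content. Spring Data's real Pageable/Page types offer a richer API, so treat this purely as an illustration of the concept:

```java
import java.util.List;

// Toy model of the pagination metadata Spring Data's Page interface exposes.
class Page<T> {

    final List<T> content;
    final int pageNumber;   // zero-based, as in Spring Data
    final int pageSize;
    final long totalElements;

    Page(List<T> content, int pageNumber, int pageSize, long totalElements) {
        this.content = content;
        this.pageNumber = pageNumber;
        this.pageSize = pageSize;
        this.totalElements = totalElements;
    }

    int getTotalPages() { return (int) Math.ceil((double) totalElements / pageSize); }
    boolean isFirst()   { return pageNumber == 0; }
    boolean hasNext()   { return pageNumber + 1 < getTotalPages(); }
}

public class PageSketch {

    public static void main(String[] args) {
        List<String> all = List.of("a", "b", "c", "d", "e");
        int page = 1, size = 2;

        // The slice a repository would fetch via setFirstResult/setMaxResults
        List<String> slice = all.subList(page * size, Math.min((page + 1) * size, all.size()));

        Page<String> result = new Page<>(slice, page, size, all.size());
        System.out.println(result.getTotalPages()); // 3
        System.out.println(result.isFirst());       // false
        System.out.println(result.hasNext());       // true
    }
}
```

This is exactly the metadata the plain List return type threw away.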
Summary
In the course of this blog post we have reduced the amount of code to be written for repositories to two interfaces declaring two query methods and a single line of XML:
@Transactional(readOnly = true)
public interface CustomerRepository extends JpaRepository<Customer, Long> {

  Page<Customer> findByLastname(String lastname, Pageable pageable);
}

@Transactional(readOnly = true)
public interface AccountRepository extends JpaRepository<Account, Long> {

  List<Account> findByCustomer(Customer customer);
}

<jpa:repositories base-package="com.acme.repositories" />
We have type-safe CRUD methods, query execution and pagination built right in. The cool thing is that this works not only for JPA-based repositories but also for non-relational databases. The first non-relational store to support this approach will be MongoDB, as part of the Spring Data Document release in a few days. You will get the exact same features for MongoDB, and we're working on support for other databases as well. There are also additional features to be explored (e.g. entity auditing, integration of custom data access code), which we will walk through in upcoming blog posts.
Actually, why wouldn't such a non-standard feature violate the JSON specification? Such a construct does not conform to JSON syntax, so by definition it seems to explicitly violate it. And the main problem is that accepting such invalid input reduces interoperability - other parsers will not accept such structures, and it may make users less likely to use json.simple, because its behavior differs from standard behavior.
RFC 4627 states:
It does not cause interoperability issues because the encoding is always right, and JSON.simple always accepts inputs that conform to the JSON grammar.
Actually, JSON.org's RI goes further with extensions. It accepts single-quoted strings and hex numbers: ['total' : 0x20].
Yes, the RI is non-compliant as well. It is true that the specification allows for extensions; unfortunately there are no commonly defined "common" extensions, and thus each parser seems to adopt its own favorite hacks.
But it is a bit naive to think that accepting invalid input would not lead to interoperability problems - this is EXACTLY how browsers degenerated into accepting all kinds of broken HTML. So personally I think it is wrong to add shortcuts for what are essentially broken documents (missing values, or extra commas).
But it is your parser of course, and you can add any extensions that you like. :-)
Please read RFC4627 carefully, then you'll find the lines as below:
That is, the encoder should always be right, so the interoperability problem will never happen. The extension of the parser is to increase the robustness of the application. Both the RI and JSON.simple are fully compliant with RFC 4627. If you insist that there will be interoperability problems, that's a problem of the specification, not the library. But the fact is, it will never happen.
I agree with tsaloranta.
Accepting non-compliant input creates the risk of letting non-compliant content spread (e.g. content generated by hand, or by another system that has a flaw). As long as the only "client" reading this content is a tolerant parser, you have no problem.
The day another system, using a more strict parser, comes into play, you have your interoperability problem, and it might be too late to come back.
arnauldvm, tsaloranta: Please take the time to review Postel's Law: "Be conservative in what you do; be liberal in what you accept from others."
http://en.wikipedia.org/wiki/Postel%27s_law
The responsibility for enforcing valid JSON lies with the encoder, not the decoder. The decoder's sole responsibility is to parse JSON. By gracefully accepting invalid input, the decoder becomes more robust and usable.
"Be conservative in what you do; be liberal in what you accept from others."
The complication lies in the fact that "accepting" means making assumptions about the intention of the author. Does "5,,,2" mean "5,2", "5,null,null,2", "5,0,0,2", or something else?
I suspect that if I were parsing, say, important medical data, I would need a strict mode, that would fail in this case, or at least warn somehow.
Hi keithdwinkler, if you are exchanging important data, say medical information or financial data, between applications, I think you need to make sure the following things in such scenarios:
In both cases, a liberal parser will do nothing harmful to your application.
The reason for accepting something like [5,,2] is that:
I know a liberal parser may cause FUD among some users, but as I mentioned above, it's harmless and is allowed by RFC 4627 (actually the author of RFC 4627 adopts this feature in the reference implementation).
Please feel free to leave a comment or post in the discussion group if you have further concerns. Thanks.
Any particular reason why JSONValue.parse(s) returns an Object and not a more proper JSONObject?
Because we also have a JSONArray and other primitives.
I have a simple, irrefutable need for a completely strict parser: I have a piece of code that produces JSON, so I need a strict parser to validate it during unit tests. I tried to use JSONObject(), but of course JSONObject accepts invalid JSON, so my tests pass even though my program is incorrect.
Can someone recommend an easy-to-use, open-source Java library that throws an exception when it encounters invalid JSON?
To unit test the JSON output, compare it to a static string that is the expected output. If you attempt to do anything else you are doing one or more of:
Either of those is bad. The best solution is to compare the output to the expected value, which would be a string, rather than parsing it.
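A minimal sketch of that string-comparison approach; produceJson() is a hypothetical stand-in for the code under test. Note that this check is deliberately strict - it will also fail on harmless differences such as key ordering or whitespace, which is exactly the point when you want to pin down the exact output:

```java
public class JsonOutputTest {

    // Hypothetical producer standing in for the code under test
    static String produceJson() {
        return "{\"uid\":2,\"name\":\"john\"}";
    }

    public static void main(String[] args) {
        String expected = "{\"uid\":2,\"name\":\"john\"}";
        String actual = produceJson();

        // Plain string comparison: no parser, so no tolerance for invalid JSON
        if (!expected.equals(actual)) {
            throw new AssertionError("JSON output mismatch: " + actual);
        }
        System.out.println("ok");
    }
}
```

In a real test suite the comparison would live in a JUnit assertEquals rather than a main method.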
please PLEASE add a strict mode. The reason is that I want to know early on if system A is sending input like 1,,,,5 - because if system A later starts sending to other systems (not just json-simple), we won't know. It is easier to start strict and move to more flexible than to go from flexible to strict... heck, try adding Checkstyle to a 3.0 project vs. adding it to a 0.1 release project - big difference. Making things stricter is much harder than loosening them.
Maybe this isn't exactly the place to be asking questions, but I'm new to Java and I'm having an issue with the container class example. I literally copied and pasted the code into my class to attempt to decode the following, which was passed back from the server side: {"uid":2,"name":"john"}{"uid":3,"name":"mary"} - so essentially (int, string). I believe the error is from the line return new LinkedHashMap(); and the error is Unexpected token LEFT BRACE({) at position 23.
Any help would be appreciated. Regards
@finbar, your JSON represents two objects. The code works with just one, ie {"uid":2,"name":"john"}. The error message is the parser complaining about the left brace at the start of the second object (ie 23 characters in).
Since the server sent separate JSON strings (it didn't return an array?), then you could call the example piece of code once for each individual string.
How can we disable the display of JSON value in "View Source" of a browser?
I noticed a critical error with the JSON Maps in the code below:
Map json = (Map) parser.parse(jsonText, containerFactory);
Object obj = JSONValue.parse(json2.get("items").toString());
JSONArray array = (JSONArray) obj;
One of the keys ("items") holds an array that contains some strings, but it fails to parse that array, because toString() removes all the quotes (") from the values, so we cannot process it further as an array. I think it's a major flaw.
I tried to reproduce your major flaw but I'm not sure if you provided all the information needed.
Dear all,
I have the following JSON response. I am not able to iterate through each Map. Please help me:
{"status":"OK","result":{"1":{"Id":"3","Conferencce":"test3","Description":"test3","Admin":"919818559890","Moderator":null,"Keywords":"test3","StartDate":"2011-11-19 12:22:33","EndDate":"2011-11-19 14:22:33","Type":"both","MaxAtendee":"0","MinAtendee":"0","RegAtendee":"0","DescVoiceVideo":null,"Rating":null,"Status":"active","ApproveBy":null,"ApprovedOn":"2011-11-15 14:22:33","ApprovedReason":null,"AdminPin":null,"UserPin":null,"PricePerMin":null,"PricePerConf":null,"ReminderStart":null,"AdminJoin":null,"CreatedOn":"2011-11-17 13:31:27","CreatedBy":"1"},"2":{"Id":"2","Conferencce":"test2","Description":"test","Admin":"919818559899","Moderator":null,"Keywords":"test2","StartDate":"2011-11-18 12:22:33","EndDate":"2011-11-18 14:22:33","Type":"both","MaxAtendee":"0","MinAtendee":"0","RegAtendee":"0","DescVoiceVideo":null,"Rating":null,"Status":"active","ApproveBy":null,"ApprovedOn":"2011-11-15 12:22:33","ApprovedReason":null,"AdminPin":null,"UserPin":null,"PricePerMin":null,"PricePerConf":null,"ReminderStart":null,"AdminJoin":null,"CreatedOn":"2011-11-17 13:31:20","CreatedBy":"1"},"3":{"Id":"1","Conferencce":"test","Description":"tes","Admin":"919818559898","Moderator":null,"Keywords":"test","StartDate":"2011-11-17 12:22:33","EndDate":"2011-11-17 14:22:33","Type":"both","MaxAtendee":"0","MinAtendee":"0","RegAtendee":"0","DescVoiceVideo":null,"Rating":null,"Status":"active","ApproveBy":"1","ApprovedOn":"2011-11-15 12:22:33","ApprovedReason":null,"AdminPin":null,"UserPin":null,"PricePerMin":null,"PricePerConf":null,"ReminderStart":null,"AdminJoin":null,"CreatedOn":"2011-11-17 13:31:15","CreatedBy":"1"}}}
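Assuming json.simple has turned that response into nested java.util.Map instances (which is what its default containers are), iterating the inner maps boils down to walking the entry set of the "result" map. The collectIds helper and the hand-built maps below are illustrative stand-ins for the parsed response:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class IterateResult {

    // Pulls the "Id" of every conference out of a parsed {"status":...,"result":{...}} response.
    static List<Object> collectIds(Map<?, ?> response) {
        List<Object> ids = new ArrayList<>();
        Map<?, ?> result = (Map<?, ?>) response.get("result");
        for (Map.Entry<?, ?> entry : result.entrySet()) {
            Map<?, ?> conference = (Map<?, ?>) entry.getValue();
            ids.add(conference.get("Id"));
        }
        return ids;
    }

    public static void main(String[] args) {
        // Hand-built stand-in for what the parser would produce from the response above
        Map<String, Object> conf = new LinkedHashMap<>();
        conf.put("Id", "3");
        conf.put("Conferencce", "test3");
        Map<String, Object> result = new LinkedHashMap<>();
        result.put("1", conf);
        Map<String, Object> response = new LinkedHashMap<>();
        response.put("status", "OK");
        response.put("result", result);

        System.out.println(collectIds(response)); // [3]
    }
}
```

The same loop works for any field, not just "Id" - each entry value is itself a Map of the conference's attributes.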
How can I marshal this "device" object using ContainerFactory?
{
}
I have an input stream (bound to a socket) that continuously delivers multiple JSON docs, and I want to parse them one by one. If I use json-simple, it will stop at the beginning of the second JSON doc ('{') with an exception.
How can I parse multiple JSON docs in one stream?
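One workaround, sketched below without any parser dependency, is to split the concatenated stream into individual JSON documents first (tracking brace depth while respecting string literals and escapes) and then hand each document to the parser separately. This assumes each top-level document is an object:

```java
import java.util.ArrayList;
import java.util.List;

// Splits concatenated top-level JSON objects, e.g.
// {"uid":2,"name":"john"}{"uid":3,"name":"mary"}, into individual documents.
public class JsonStreamSplitter {

    public static List<String> split(String input) {
        List<String> docs = new ArrayList<>();
        int depth = 0, start = -1;
        boolean inString = false, escaped = false;

        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (escaped) { escaped = false; continue; }
            if (inString) {
                // Braces inside string literals must not affect the depth count
                if (c == '\\') escaped = true;
                else if (c == '"') inString = false;
                continue;
            }
            if (c == '"') inString = true;
            else if (c == '{') { if (depth++ == 0) start = i; }
            else if (c == '}' && --depth == 0) docs.add(input.substring(start, i + 1));
        }
        return docs;
    }

    public static void main(String[] args) {
        List<String> docs = split("{\"uid\":2,\"name\":\"john\"}{\"uid\":3,\"name\":\"mary\"}");
        System.out.println(docs.size());  // 2
        System.out.println(docs.get(0));  // {"uid":2,"name":"john"}
    }
}
```

For a socket you would accumulate incoming characters in a buffer and emit a document every time the depth returns to zero, then call the parser once per emitted document.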
[{"name":"1350458288_11294.jpg","size":775702,"type":"image\/jpeg","url":"http:\/\/localhost\/oto\/files\/1350458288_11294.jpg","thumbnail_url":"http:\/\/localhost\/oto\/thumbnails\/1350458288_11294.jpg","delete_url":"http:\/\/localhost\/oto\/?file=1350458288_11294.jpg","delete_type":"DELETE"}]
What is JSON?
Hi, I have a question about reading a JSON file. The file has several lines, each line containing a {} object. How can I parse some data from each line in a loop? Thanks a lot!
I just added this class util to my project:
import org.json.simple.JSONObject;
public class JSONObjectUtil {
}
In this way, retrieving a value from a JSONObject is pretty simple:
JSONObjectUtil.retrieveJsonPath(json, "glossary/GlossDiv/GlossList/GlossEntry/SortAs")
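The body of that utility is not shown above, but a hypothetical retrieveJsonPath could look like the sketch below: it walks a slash-separated path through nested Maps (json.simple's JSONObject is itself a Map, so the same code applies to parsed JSON). The names here mirror the comment above rather than any published API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JSONObjectUtil {

    // Walks a slash-separated path ("glossary/GlossDiv/title") through nested Maps.
    // Returns null if any segment is missing or the path runs off the structure.
    public static Object retrieveJsonPath(Map<?, ?> json, String path) {
        Object current = json;
        for (String segment : path.split("/")) {
            if (!(current instanceof Map)) return null;
            current = ((Map<?, ?>) current).get(segment);
        }
        return current;
    }

    public static void main(String[] args) {
        // Hand-built stand-in for a parsed JSONObject
        Map<String, Object> glossDiv = new LinkedHashMap<>();
        glossDiv.put("title", "S");
        Map<String, Object> root = new LinkedHashMap<>();
        root.put("glossary", glossDiv);

        System.out.println(retrieveJsonPath(root, "glossary/title")); // S
    }
}
```

Arrays would need an extra case (index segments), which this sketch deliberately leaves out.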
A lot of people are asking for recursive iteration. This is my solution.
A class with static print methods for printing and navigating through Map and List objects.
A ContainerFactory returning the types I want:
And you can call it like this:
Thanks for the JSON recursive function. I need to manipulate (add, delete and update) my JSON string and then re-form the JSON object
{ }