Thursday, January 31, 2008

SOA Testing - A look at products you can use

SOA has been such a buzzword since late 2006 that it made me sit up and realize its virtue and importance in enterprise application development. Having been involved with SOA for a while now, implementing a couple of applications, I started thinking about how best to test an SOA application put together through enterprise integration. With Gartner expecting that by mid 2009 nearly 80% of enterprise development and integration budgets and effort will be devoted to SOA applications alone, I doubled my efforts to get to know products and tools for effectively testing (both unit and integration) my SOA application suite, be it the front end, services layer, message exchange (ESB layer) or database. That's when I came across "LISA", the product I would like to share with everyone and give a brief overview of.

iTKO's LISA 4 Complete SOA Test Platform is a comprehensive testing solution that reduces the effort and cost of SOA test creation and maintenance, while allowing your team to drastically improve quality levels over time through test reuse. LISA carries developers, QA teams and business analysts from unit testing to regressions, functional testing, end-to-end integration, load testing, and monitoring after deployment. LISA is a complete no-code software testing solution that supports the entire team's development lifecycle for dynamic web apps, web services, ESB messaging layers, enterprise Java, .NET, data and legacy objects, and more.

Why do you need LISA?

To deliver quality SOA applications, enterprises need to achieve Complete, Collaborative, and Continuous™ SOA testing (the "Three C's"):

  1. Complete testing of business workflows across every heterogeneous technology layer of the SOA, at both a system and component level.
  2. Collaborative involvement of the whole team in quality. Enable both developers and non-programmers to define and share test cases that prove how the SOA meets business requirements during the entire application lifecycle.
  3. Continuous validation of a changing SOA deployment as it is consumed at runtime, ensuring that business requirements will continue to be met as the system dynamically changes.
LISA was built to specifically meet the above criteria.
  • Continuous testing at every phase. LISA can be used to automate unit and functional tests early in development, then leverage the same tests in regression builds, load tests and performance monitors that run over time. In addition to typical test data such as pass/fail or response times, LISA gives you real functional validation every time you test. So LISA gives you "black box" testing, "white box" testing, and every color "box" in between.
  • Every technology. LISA tests web applications, but can go way beyond "screen deep" in validating whether or not they are working as planned. In the same test case as your web apps, you automatically gain the protocols to directly instrument web services, messaging layers, databases, J2EE servers and more, without coding. And LISA's extensibility to test your custom apps is second to none.
  • At any scale. LISA supports incredibly complex, multidimensional test workflows with drastically reduced effort and very low system overhead. Using LISA Load/Performance Server, you can schedule your tests to automatically run at any time interval, or trigger them with any system event. And LISA's efficient load engines can simulate hundreds, or hundreds of thousands, of users, with full functional validation and reporting of every transaction.
There are three different versions of LISA; here is an overview of each of them:
1) LISA Enterprise Edition SOA Testing - This edition provides a test client to directly invoke and verify business requirements at the service component level, at the integration layer, and across the entire workflow that makes up a business process. Using it, you can do functional testing of your web user interfaces and the data residing below the UI. It also provides support for ensuring interoperability, predictable project delivery and end-to-end quality for all major integration platforms, including IBM MQ-Series, TIBCO, webMethods, Oracle FUSION, Sun JCAPS, Sonic MQ, Fiorano and other leading providers, along with business process validation.
2) LISA Server - LISA Server automates and schedules LISA test cases and suites, providing sophisticated staging, user simulation and continuous build and performance test orchestration capabilities for a constantly evolving SOA application environment. LISA allows for testing individual components, processes, and workflows during design and development, during integration, and in deployment, with capabilities for both individual functional tests and system-wide business process load testing.
3) LISA Extension Kit (LEK) - Many complex enterprise applications, even those based on open standards, are built within a custom framework or developed on non-supported platforms that LISA does not yet know about. The LEK allows LISA users to test custom systems just as natively - without writing test code - as the technologies that LISA tests out of the box.

As for the benefits LISA provides in SOA testing, here is a brief synopsis.

SOA Platform Support enhancements provided by LISA include:

  • ESB Native Integrations: When you need to test an ESB, you should be able to test every aspect of your integration layer - and LISA natively tests against every access point of ESB systems in ways that no other solution can, whether those are JMS messages, Web Services running on the bus, or connected databases. LISA provides out-of-the-box support for IBM MQ-Series, TIBCO, webMethods, Oracle FUSION, BEA AquaLogic, Sun JCAPS, Sonic MQ, Fiorano and other leading ESB/integration providers.
  • Governance Platform support: LISA tests provide an excellent way to check in SOA tests as enforceable and verifiable SOA policy examples alongside services in the repository. Leading enterprise and public sector customers are using LISA as a quality certification platform to ensure trust across multiple services, and the divisions and teams that build, support, and leverage them. LISA supports both the process of publishing services to a larger community with verifiable service levels, as well as consuming services with well-defined requirements.
  • Virtual Endpoint Testing & Lookup: As SOA Governance practices evolve, the Registry/Repository is becoming a system of reference for flexibly building and managing the services that inhabit a loosely coupled environment. LISA interoperates with Type 2 and 3 UDDI registries and service repositories from leading providers such as CentraSite, Infravio, Systinet, BEA/Flashline, and others, providing a way to leverage them for lookup of the most recent services during test design, and dynamic "hookup" of services during test runs through the registry.
It also provides support for RIA (Rich Internet Application) testing, SOAP/WSDL testing and JMS testing, to name a few more capabilities.

Having gone through the docs and some webinars myself, I became curious to know more about this product, and I am in the process of evaluating it. I feel most SOA developers/architects should have a look at LISA and see how well it reduces the bottlenecks and headaches of an SOA initiative. It has user forums and plenty of white papers and documentation to make you feel at home and help you out with best practices, installation guides, FAQs, etc.

Note: LISA is a Java application and runs on JRE 1.4.2.x or 1.5.x. LISA has production customers on Windows, Linux, HP-UX, AIX, Solaris and OS X.

Suggested Reading

LISA Webinars

Java Boutique's Article


Saturday, January 26, 2008

Importance of "Java Messaging Service" (JMS API) in Server Side Development

To me, server-side programming and developing server-side components is the most challenging area for any Java developer. Gone are the days of simple client/server applications. With emerging technologies like SOA, ESB (Enterprise Service Bus) and BPM (Business Process Management) being adopted by many large companies, in the financial sector and other domains, for applications that are mainly n-tier, the effort that needs to be put into designing and developing middle-tier components has increased; and if you look at most open source frameworks, their main focus is on server-side programming, it being a critical component of any n-tier application. One API which has made my life easy in that regard is the Java Message Service (JMS).

In my initial days with Java (I am talking about nine years back), I used technologies like RMI, CORBA and EJB on different assignments for developing such applications. Having seen the growth and emergence of different frameworks and APIs since then, I would have loved to start my IT career a couple of years back instead; but those struggling days have definitely made me sit up and appreciate how much these new technologies help me in design and development, particularly in integrating enterprise applications in the distributed computing arena: developing highly scalable, reliable, loosely coupled messaging systems and providing asynchronous interactions among J2EE components and messaging-capable legacy systems, compared to the tightly coupled integration of my early days.

OK, let's analyze the strides made by the JMS API and the benefits and advantages it provides to any server-side developer or architect.

What is Messaging?
Messaging is a method of communication between software components or applications. A messaging system is a peer-to-peer facility: a messaging client can send messages to, and receive messages from, any other client. Each client connects to a messaging agent that provides facilities for creating, sending, receiving, and reading messages. Messaging enables distributed communication that is loosely coupled. A component sends a message to a destination, and the recipient can retrieve the message from the destination. However, the sender and the receiver do not have to be available at the same time in order to communicate. In fact, the sender does not need to know anything about the receiver, nor does the receiver need to know anything about the sender. The sender and the receiver need to know only what message format and what destination to use. In this respect, messaging differs from tightly coupled technologies, such as Remote Method Invocation (RMI) or RPC, which require an application to know a remote application's methods.


Overview
Enterprise messaging products (or as they are sometimes called, Message Oriented Middleware products) are becoming an essential component for integrating intra-company operations. They allow separate business components to be combined into a reliable, yet flexible, system.
In addition to the traditional MOM vendors, enterprise messaging products are also provided by several database vendors and a number of internet-related companies. Java language clients and Java language middle-tier services must be capable of using these messaging systems, and JMS provides a common way for Java programs to access them. The JMS API defines a common set of interfaces and associated semantics that allow programs written in the Java programming language to communicate with other messaging implementations.
That is where JMS is useful, and if you add Message-Driven Beans (MDBs), introduced as part of the EJB 2.0 specification (think of an MDB as a stateless session bean combined with a JMS message listener), then you have an effective asynchronous messaging system in your middle tier which covers areas like security, concurrency, transactions, etc.
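As a rough sketch of what that looks like in code, here is a minimal EJB 2.x message-driven bean; the class name is invented for illustration, and the queue it listens to would be bound in the deployment descriptor:

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class OrderMDB implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext context;

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.context = ctx; }
    public void ejbCreate() { }
    public void ejbRemove() { }

    // The container calls this method for every message that arrives on the configured queue
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            // with container-managed transactions, ask the container to roll back and redeliver
            context.setRollbackOnly();
        }
    }
}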

The JMS API enables communication that is not only loosely coupled but also
• Asynchronous. A JMS provider can deliver messages to a client as they arrive; a client does not have to request messages in order to receive them.
• Reliable. The JMS API can ensure that a message is delivered once and only once. Lower levels of reliability are available for applications that can afford to miss messages or to receive duplicate messages.

Why do you need JMS?
There are several reasons why you need JMS in an enterprise middleware application:
i) Decoupling: Different parts of an application can be developed such that they are not closely tied to each other.
ii) Flexible integration: Conversely, loosely coupled systems can be put together, for example by using MDBs to wrap existing systems.
iii) Separation of the elements involved in messaging applications; for example, a logging event can be separated from, say, retrieving customer details in a banking application.
iv) Delivering low-level services of an application in offline mode, meaning that although the service must be provided, the main workflow doesn't have to wait for its completion; a logging service, for example.
v) Delivering the same information to multiple parties.

vi) Most of the top J2EE application servers support messaging. Sun provides a list of JMS vendors at http://java.sun.com/products/jms/vendors.html.

Some myths & drawbacks about using JMS
1) Additional overhead in handling messages and messaging systems.
2) A message-based system can be a single point of failure.

Different types of messaging
There are two main ways to send messages: point-to-point and publish-subscribe.
A point-to-point (queue) messaging application has a single producer (usually) and a single consumer. The producer produces messages while the consumer consumes them. A point-to-point system can actually have multiple producers, but usually only a single consumer. A print server is one example: any machine on the network can send a print job to a particular print server.

The publish-subscribe messaging model (pub-sub) is more of a broadcast-oriented model. Publish-subscribe is based on the idea of topics and typically has many consumers, and potentially many producers as well. A subscription to a forum is one example.

Fundamentals of the JMS API
The basic elements in JMS are administered objects (connection factories and destinations), connections, sessions, message producers, message consumers, queues, topics and messages.

A connection factory is the object a client uses to create a connection with a provider. A connection factory encapsulates a set of connection configuration parameters that has been defined by an administrator. A pair of connection factories come preconfigured with the J2EE SDK and are accessible as soon as you start the service. Each connection factory is an instance of either the QueueConnectionFactory or the TopicConnectionFactory interface.

A message producer is an object created by a session and is used for sending messages to a destination. The point-to-point form of a message producer implements the QueueSender interface. The pub/sub form implements the TopicPublisher interface.

A message consumer is an object created by a session and is used for receiving messages sent to a destination. A message consumer allows a JMS client to register interest in a destination with a JMS provider. The JMS provider manages the delivery of messages from a destination to the registered consumers of the destination. The PTP form of message consumer implements the QueueReceiver interface. The pub/sub form implements the TopicSubscriber interface.
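Putting producer and consumer together, a bare-bones point-to-point round trip might look like the sketch below. The JNDI names jms/QueueConnectionFactory and jms/OrderQueue are placeholders; the actual names depend on how your provider is administered.

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueReceiver;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class PtpRoundTrip {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory =
            (QueueConnectionFactory) ctx.lookup("jms/QueueConnectionFactory"); // administered object
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue"); // administered destination

        QueueConnection connection = factory.createQueueConnection();
        // false = not transacted; acknowledge automatically on successful receive
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        QueueSender sender = session.createSender(queue);
        sender.send(session.createTextMessage("New order #42"));

        QueueReceiver receiver = session.createReceiver(queue);
        connection.start(); // delivery does not begin until the connection is started
        TextMessage msg = (TextMessage) receiver.receive(5000); // wait up to 5 seconds
        System.out.println("Received: " + (msg != null ? msg.getText() : "nothing"));
        connection.close();
    }
}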

Different JMS Message Types
The JMS API defines five message body formats, also called message types, which allow you to send and receive data in many different forms and provide compatibility with existing messaging formats. A sixth type listed below, the base Message, carries no body at all.
i) TextMessage: A java.lang.String object (for example, the contents of an Extensible Markup Language file).
ii) MapMessage: A set of name/value pairs, with names as String objects and values as primitive types in the Java programming language. The entries can be accessed sequentially by enumerator or randomly by name. The order of the entries is undefined.
iii) BytesMessage: A stream of uninterpreted bytes. This message type is for literally encoding a body to match an existing message format.
iv) StreamMessage: A stream of primitive values in the Java programming language, filled and read sequentially.
v) ObjectMessage: A Serializable object in the Java programming language.
vi) Message: Composed of header fields and properties only. This message type is useful when a message body is not required.
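As a small illustration, here is how a few of these types are created and sent; the session and sender are assumed to come from an active connection, as sketched earlier:

import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.ObjectMessage;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.TextMessage;

public class MessageTypeSamples {
    static void sendSamples(QueueSession session, QueueSender sender) throws JMSException {
        TextMessage text = session.createTextMessage("<order id=\"42\"/>"); // e.g. an XML payload
        sender.send(text);

        MapMessage map = session.createMapMessage(); // name/value pairs
        map.setString("customer", "ACME");
        map.setInt("quantity", 3);
        sender.send(map);

        ObjectMessage obj = session.createObjectMessage(new java.util.Date()); // any Serializable
        sender.send(obj);
    }
}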

In order to use JMS, one must have a JMS provider that can manage the sessions and queues. There are free, open source and proprietary providers.


Suggested Reading

Importance of JMS
Benefits of JMS



Wednesday, January 23, 2008

Aspect Oriented Programming - Spring AOP

In continuation of the series of articles on the Spring Framework, today let's explore the possibilities and solutions that can be achieved by using Aspect Oriented Programming (AOP), and in particular Spring AOP. To start with, I would like to delve into the core concepts related to AOP, the myths and realities surrounding it, its benefits over OOP (Object Oriented Programming) and the areas where it can be used effectively.

History of AOP
Object-oriented programming (OOP) introduced the concept of the object, which initiated a new way to structure applications and write programs. The same idea applies to the concept of the aspect. In 1996, Gregor Kiczales and his team at the Palo Alto Research Center (PARC), a Xerox Corporation subsidiary located in California, originally defined the concept of the aspect.
Definition of aspect
An aspect is a common feature that's typically scattered across methods, classes, object hierarchies, or even entire object models. It is behavior that looks and smells like it should have structure, but you can't find a way to express this structure in code with traditional object-oriented techniques.

AOP vs OOP
The aim behind the development of OOP was to organize the data of an application and its associated processing into coherent entities. This was achieved by having objects that encapsulate data along with the methods that manipulate that data and carry out the processing. From a conceptual point of view, an application is broken down according to the real-world objects that it models. In a stock-management application, for example, you might find supplier, article, customer, and other types of objects. By grouping together all the objects that possess the same characteristics, the concept of the class complements the concept of the object.

OOP has undeniably improved software engineering. Developers have built more complex programs in a simpler fashion than would have been possible through procedural programming, and have written large applications in object-oriented languages. For example, the Java 2 Platform, Enterprise Edition (J2EE) application servers were programmed in the Java language. Similarly, developers have implemented complex class hierarchies to construct graphical user interfaces; the Swing API, included in the Java 2 Platform, Standard Edition (J2SE), falls into this category. AOP simply adds new concepts that allow you to improve object-oriented applications by making them more modular. In addition, AOP streamlines the development process by allowing the separation of development tasks, so that highly technical functionalities, such as security, can be developed separately. AOP allows us to dynamically modify our static model to include the code required to fulfill secondary requirements without having to modify the original static model.

Benefits over OOP

Writing clear and elegant programs using only OOP is impossible in at least two cases: when the application contains crosscutting functionalities, and when the application includes code scattering. Let us explore these limitations a bit.

Cross-cutting concerns
While organizing an application into classes, the analysis must be driven by the need to separate and encapsulate the data and its associated processing into coherent entities. Although the classes are programmed independently of one another, they are sometimes behaviorally interdependent. Typically, this is the case when you implement rules of referential integrity. For example, a customer object must not be deleted while an outstanding order remains unpaid; otherwise, the program risks losing the contact details for that customer. To enforce this rule, you could modify the customer-deletion method so that it initially determines whether all the orders have been paid. However, this solution is deficient for several reasons:

  • Determining whether an order has been paid does not belong to customer management but to order management. Therefore, the customer class should not have to manage this functionality, and it should not need to be aware of all the data-integrity rules that other classes in the application impose.
  • Modifying the customer class to take these data-integrity rules into account restricts the possibilities of reusing the class in other situations. In other words, once the customer class implements any functionality that is linked to a different class, customer is no longer independently reusable, in many cases.

Despite the fact that the customer class is not the ideal place to implement this referential-integrity rule, many object-oriented programs work this way for lack of a better solution. You might think about integrating this functionality into an order class instead, but this solution is no better: no reason exists for the order class to allow the deletion of a customer. Strictly speaking, this rule is linked to neither the customers nor the orders but cuts across these two types of entities. One of the aims of dividing the data into classes is making the classes independent from one another. However, crosscutting functionalities, such as the rules of referential integrity, appear superimposed on that division, violating the independence of the classes. In other words, OOP does not allow you to neatly implement crosscutting functionalities.

Code Scattering
In OOP, the principal way that objects interact is by invoking methods. In other words, an object that needs to carry out an action invokes a method that belongs to another object (an object can also invoke one of its own methods). OOP interactions always entail two roles: that of the invoker and that of the invoked. When you write the code to call a method, you do not need to worry about how the service is implemented, because the call interacts only with the interface of the invoked object. You need only ensure that the parameters in the call correspond to the method's signature. Because methods are implemented within classes, you write each method as a block of code that is clearly delimited. To change a method, you obviously modify the file that contains the class where the method is defined. If you alter just the body of the method, the modification is transparent, because the method will still be called in exactly the same way. However, if you change the method's signature (for example, by adding a parameter), further implications arise: you must then modify all the calls to the method, and hence any classes that invoke it. If these calls exist in several places in the program, making the changes can be extremely time-consuming. The main point is this: even though the implementation of a method is located in a single class, the calls to that method can be scattered throughout the application. This phenomenon of code scattering slows down maintenance tasks and makes it difficult for object-oriented applications to adapt and evolve.

Of course, we can use design patterns like Observer and Decorator in OOP to handle such scenarios as cross-cutting concerns and code scattering, and the advantages of implementing them are enhanced modularity of the code and effective testing. In general, aspects are used to implement functionalities (security, persistence, logging, and so on) that span an application. An aspect allows you to integrate crosscutting functionalities and address code scattering in an object-oriented application by using the new concepts of the pointcut, the join point, and the advice.

Pointcut
A pointcut marks the points of execution in the application at which a cross-cutting concern needs to be applied.
In aspect-oriented programming, a pointcut is a set of join points. Whenever the program execution reaches one of the join points described in the pointcut, a piece of code associated with the pointcut (called advice) is executed. This allows a programmer to describe where and when additional code should be executed in addition to an already defined behavior. This permits the addition of aspects to existing software, or the design of software with a clear separation of concerns, wherein the programmer weaves (merges) different aspects into a complete application. In Spring, a pointcut is just a set of methods that, when called, should have advice invoked around them.

Join Points
A join point is a point in the control flow of a program; in aspect-oriented programming, a set of join points is described as a pointcut. A join point is where the main program and the aspect meet (such as a field access, method invocation, constructor invocation, etc.). Spring's built-in AOP currently supports only method invocation.

Advice
In aspect-oriented and functional programming, advice describes a class of functions which modify other functions when the latter are run; it is a certain function, method or procedure that is to be applied at a given join point of a program.

Here is an example. The aspect below parodies the traditional "Hello World" for AspectJ by providing an aspect that captures all calls to the void foo(int, String) method in the MyClass class.

A simple HelloWorld aspect in AspectJ

public aspect HelloWorld
{
    pointcut callPointcut() :
        call(void MyClass.foo(int, String));

    before() : callPointcut()
    {
        System.out.println("Hello World");
        System.out.println("In the advice attached to the call pointcut");
    }
}

OK, having understood the history behind AOP and the terminology associated with it, let's explore the subject for today: Spring AOP.

AOP basics

  • Aspect: A modularized implementation of a software concern that cuts across various objects in a software implementation. Logging is a good example of an aspect. In Spring AOP, aspects are nothing more than regular Spring beans, which themselves are plain-old Java objects (POJO) registered suitably with the Spring Inversion of Control container. The core advantage in using Spring AOP is its ability to realize the aspect as a plain Java class.

  • Join point: A point during program execution, such as a method executing or an exception being handled. In Spring AOP, a join point pertains exclusively to method execution, which could be viewed as a limitation of Spring AOP. However, in reality, it is enough to handle the most common cases of implementing crosscutting concerns.

  • Advice: Information about "when" an aspect is to be executed with respect to the join point. Examples of types of advice are "around," "before," and "after." They specify when the aspect's code will execute with respect to a particular join point.

  • Pointcut: A declarative condition that identifies the join points for consideration during an execution run. A pointcut is specified as an expression; Spring AOP uses the AspectJ pointcut expression syntax. An example pointcut expression is: execution(* com.myorg.springaop.examples.MyService*.*(..)). Asterisks in the expression refer to wildcards, as is conventional. A small sketch follows this list.
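To make these four terms concrete, here is a minimal logging aspect in the Spring 2.x @AspectJ style, reusing the pointcut expression above; the package and class names are made up, and the aspect would still need to be registered as a bean with <aop:aspectj-autoproxy/> enabled:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class LoggingAspect {
    // Advice type: "before"; pointcut: every method on MyService* classes in the examples package
    @Before("execution(* com.myorg.springaop.examples.MyService*.*(..))")
    public void logEntry(JoinPoint jp) {
        // each matched method execution is a join point
        System.out.println("Entering: " + jp.getSignature().toShortString());
    }
}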

Spring AOP

The Spring Framework integrates with more powerful AOP frameworks, such as AspectJ. To use Spring AOP, you need to implement cross-cutting concerns and configure those concerns in your applications. Any advice written for Spring AOP is configurable in the Spring container through a simple, consistent configuration. This configuration is an important aspect of using AOP in Spring because it is the only one you need to remember for creating extension points to existing classes. For further understanding, I recommend reading the following articles instead of me touching upon the Spring AOP concepts here again, as they are self-explanatory.

Spring AOP

Implementing Spring AOP in Enterprise Applications

Implementing Cross Cutting concerns using Spring 2.0 AOP

Implementing Logging as an Aspect using Spring AOP Framework

Suggested video tutorial by Ramnivas Laddad on AOP (author of AspectJ in Action)

Google Tech Talks video by Gregor Kiczales, who introduced aspects



Monday, January 21, 2008

Transaction Management using Spring Framework

In the previous article of this series on the Spring Framework, we discussed how easy it is to unit test POJOs in Spring, with a little introduction to how effective Spring is at implementing transaction management. In continuation of that article, today we will explore configuring transaction management in Spring for different data sources.

Transaction Management in Spring

Sometimes the fundamental error in judgement a Java developer makes when dealing with transactions is using global transactions instead of local transactions. First let us understand these two types of transactions.

What is a transaction ?
A database transaction is a unit of interaction with a database management system or similar system that is treated in a coherent and reliable way independent of other transactions. In general, a database transaction must be atomic, meaning that it must be either entirely completed or aborted. Ideally, a database system will guarantee the properties of Atomicity, Consistency, Isolation and Durability (ACID) for each transaction.

Purpose of Transaction

In database products, the ability to handle transactions allows the user to ensure that the integrity of a database is maintained.

A single transaction might require several queries, each reading and/or writing information in the database. When this happens it is usually important to be sure that the database is not left with only some of the queries carried out. For example, when doing a money transfer, if the money was debited from one account, it is important that it also be credited to the depositing account. Also, transactions should not interfere with each other. For more information about desirable transaction properties, see ACID.

In order to reflect the correct state of reality in the system, a transaction should have the following properties.

  • Atomicity: This is the all-or-nothing property. Either the entire sequence of operations is successful or unsuccessful. A transaction should be treated as a single unit of operation. Only completed transactions are committed; incomplete transactions are rolled back or restored to the state where they started. There is absolutely no possibility of partial work being committed.
  • Consistency: A transaction maps one consistent state of the resources (e.g. database) to another. Consistency is concerned with correctly reflecting the reality of the state of the resources. Some of the concrete examples of consistency are referential integrity of the database, unique primary keys in tables etc.
  • Isolation: A transaction should not reveal its results to other concurrent transactions before it commits. Isolation assures that transactions do not access data that is being concurrently updated. The other name for isolation is serializability.
  • Durability: Results of completed transactions have to be made permanent and cannot be erased from the database due to system failure. Resource managers ensure that the results of a transaction are not altered due to system failures.

A simple transaction is usually issued to the database system in a language like SQL in this form:

  1. Begin the transaction
  2. Execute several queries (although any updates to the database aren't actually visible to the outside world yet)
  3. Commit the transaction (updates become visible if the transaction is successful)

If one of the queries fails, the database system may roll back either the entire transaction or just the failed query. This behaviour depends on the DBMS in use and how it is set up. The transaction can also be rolled back manually at any time before the commit.
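In plain JDBC, that pattern looks like the following sketch (the accounts table and the amounts are hypothetical):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    public static void transfer(Connection con) throws SQLException {
        try {
            con.setAutoCommit(false); // begin the transaction
            PreparedStatement debit = con.prepareStatement(
                "UPDATE accounts SET balance = balance - 100 WHERE id = 1");
            debit.executeUpdate();
            PreparedStatement credit = con.prepareStatement(
                "UPDATE accounts SET balance = balance + 100 WHERE id = 2");
            credit.executeUpdate();
            con.commit(); // both updates become visible together
        } catch (SQLException e) {
            con.rollback(); // neither update survives
            throw e;
        }
    }
}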

Local vs Global Transactions
Local transactions are specific to a single transactional resource (a JDBC connection, for example), whereas global transactions are managed by the container and can span multiple transactional resources.

Unlike a centralized computing environment where application components and resources are located at a single site, and transaction management only involves a local data manager running on a single machine, in a distributed computing environment all the resources are distributed across multiple systems. In such a case transaction management needs to be done both at local and global levels. A local transaction is one which involves activities in a single local resource manager. A distributed or a global transaction is executed across multiple systems, and its execution requires coordination between the global transaction management system and all the local data managers of all the involved systems. The Resource Manager and Transaction Manager (TM), also known as a transaction processing monitor (TP monitor), are the two primary elements of any transactional system. In centralized systems, both the TP monitor and the resource manager are integrated into the DBMS server. To support advanced functionalities required in a distributed component-based system, separation of TP monitor from the resource managers is required.

Local transactions are easy to manage, and because most operations work with just one transactional resource (such as a JDBC transaction), using local transactions is enough. However, if you are not using Spring, you still have a lot of transaction management code to write, and if in the future the scope of the transaction needs to be extended across multiple transactional resources, you have to drop the local transaction management code and rewrite it to use global transactions.

A global or distributed transaction consists of several subtransactions and is treated as a single recoverable atomic unit. The global transaction manager is responsible for managing distributed transactions by coordinating with different resource managers to access data at several different systems. Since multiple application components and resources participate in a transaction, it's necessary for the transaction manager to establish and maintain the state of the transaction as it occurs. Global transactions in non-Spring applications are, in most cases, coded using JTA, which is a complex API that depends on JNDI; this usually means you have to use a J2EE application server.

Two-Phase Commit (2PC) Protocol

The two-phase commit protocol enables atomicity in a distributed transaction scenario. The system module responsible for this protocol is usually called a transaction manager or a coordinator. As the name implies, there are two phases to the protocol. In the first phase, the coordinator asks each participant to vote on a commit or a rollback. This is accomplished by sending a so-called prepare request to each participant. When a participant votes for a commit, it loses its right to roll back independently, meaning that it has to wait for the final outcome received from the coordinator. The first phase ends when the coordinator receives all votes or if a timeout occurs. The second phase starts with the final decision made by the coordinator. In the case of a timeout or at least one "rollback" vote, the decision to roll back is sent to each participant that voted for "commit" in the first phase. As a result, all data modifications at all places involved are rolled back. Should all participants vote to commit, then and only then, the coordinator decides to perform a global commit and sends a commit notification to all participants. Consequently, all the work at all places is committed.

The complexity of the two-phase commit relates not only to the distributed nature of a transaction, but also to a possible non-atomic outcome of a transaction, i.e. heuristics. For example, the first participant may commit changes during phase two, while the second participant encounters a hardware failure when saving changes to the disk. Being able to roll back, or at least report the errors so that the system can be recovered to its original state, is an important part of the process.

By persisting intermediate steps of the 2PC, that is, logging abort, ready to commit, and commit messages, the protocol provides a certain degree of reliability in case the coordinator or participants fail in the midst of transaction processing. The two-phase commit protocol can be implemented in a synchronous or asynchronous manner with variations to its actual execution.

Programmatic vs Declarative Transactions
The Java EE container implements support for transactions and facilitates the ACID properties required by the application logic. The container provides an implementation for the two-phase commit protocol between a transaction manager and underlying resources such as the database or messaging provider. The Java EE container is also responsible for the transaction context propagation and provides support for a distributed two-phase commit. With a distributed two-phase commit, a Java EE application can modify data across multiple application servers as if it is a single transaction.

The decision whether to use programmatic or declarative transaction support depends on the level of transaction control and complexity required by the application design. With declarative transaction support, the boundaries and individual properties of a transaction are specified in a deployment descriptor. With programmatic support, the application logic encapsulates the transactional characteristics in the code. A POJO traditionally had to use programmatic transaction demarcation; because of Spring's Inversion of Control and dependency injection, CMT-style declarative transaction management is now also available for POJO applications.

Container-Managed Transactions(EJB 2.1) vs. Spring’s Declarative Transaction management(Spring 1.2):

(a)
EJB’s Transaction attribute:
Required
RequiresNew
Mandatory
NotSupported
Supports
Never

Spring’s Propagation behavior:
Interface org.springframework.transaction.TransactionDefinition
PROPAGATION_MANDATORY
PROPAGATION_NESTED
PROPAGATION_NEVER
PROPAGATION_NOT_SUPPORTED
PROPAGATION_REQUIRED
PROPAGATION_REQUIRES_NEW
PROPAGATION_SUPPORTS

(b)
EJB’s Isolation level:
Interface java.sql.Connection
TRANSACTION_NONE
TRANSACTION_READ_COMMITTED
TRANSACTION_READ_UNCOMMITTED
TRANSACTION_REPEATABLE_READ
TRANSACTION_SERIALIZABLE

Spring’s Isolation level:
Interface org.springframework.transaction.TransactionDefinition
ISOLATION_DEFAULT
ISOLATION_READ_COMMITTED
ISOLATION_READ_UNCOMMITTED
ISOLATION_REPEATABLE_READ
ISOLATION_SERIALIZABLE

(c)
Rolling Back a Container-Managed Transaction
There are two ways to roll back a container-managed transaction.
First, if a system exception is thrown, the container will automatically roll back the transaction. Second, by invoking the setRollbackOnly method of the EJBContext interface, the bean method instructs the container to roll back the transaction. If the bean throws an application exception, the rollback is not automatic but can be initiated by a call to setRollbackOnly.
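For illustration, here is what the second option can look like; this is a compilable sketch with invented names, not a complete bean, and in a real session bean the EJBContext would be supplied by the container via setSessionContext:

import javax.ejb.EJBContext;

public class OrderLogic {
    interface Inventory { boolean isAvailable(String item); } // hypothetical collaborator

    private final EJBContext ctx;
    private final Inventory inventory;

    OrderLogic(EJBContext ctx, Inventory inventory) {
        this.ctx = ctx;
        this.inventory = inventory;
    }

    public void placeOrder(String item) {
        if (!inventory.isAvailable(item)) {
            ctx.setRollbackOnly(); // instruct the container to roll back the current transaction
            // an application exception: without setRollbackOnly, rollback would NOT be automatic
            throw new IllegalStateException("out of stock");
        }
        // normal processing; the container commits when the method completes
    }
}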

Spring’s Roll back rules
Transactions can be declared to roll back or not based on exceptions that are thrown during the course of the transaction.
By default, transactions are rolled back only on runtime exceptions and not on checked exceptions.

Bean-Managed Transactions(EJB 2.1) vs. Spring’s Programmatic Transaction management(Spring 1.2):
In a bean-managed transaction, the code in the session or message-driven bean explicitly marks the boundaries of the transaction. An entity bean cannot have bean-managed transactions; it must use container-managed transactions instead.
Although beans with container-managed transactions require less coding, they have one limitation: When a method is executing, it can be associated with either a single transaction or no transaction at all. If this limitation will make coding your bean difficult, you should consider using bean-managed transactions.

Spring provides two means of programmatic transaction management:
Using the TransactionTemplate
Using a PlatformTransactionManager implementation directly
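Here is a minimal sketch of the TransactionTemplate approach in the Spring 1.2/2.x style; the JDBC statements and the accounts table are placeholders:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

public class AccountService {
    private final TransactionTemplate txTemplate;
    private final JdbcTemplate jdbc;

    public AccountService(PlatformTransactionManager txManager, JdbcTemplate jdbc) {
        this.txTemplate = new TransactionTemplate(txManager);
        this.jdbc = jdbc;
    }

    public void transfer(final int from, final int to, final int amount) {
        txTemplate.execute(new TransactionCallback() {
            public Object doInTransaction(TransactionStatus status) {
                jdbc.update("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                        new Object[] { new Integer(amount), new Integer(from) });
                jdbc.update("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                        new Object[] { new Integer(amount), new Integer(to) });
                return null; // any RuntimeException thrown in here rolls the transaction back
            }
        });
    }
}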

Configuration of Deployment Descriptor - Spring vs EJB

ejb-jar.xml

<ejb-jar>
  <enterprise-beans>
    <!-- A minimal session EJB deployment -->
    <session>
      <ejb-name>PostingEJB</ejb-name>
      <home>ejbs.PostingHome</home>
      <remote>ejbs.Posting</remote>
      <ejb-class>ejbs.PostingBean</ejb-class>
      <!-- or Stateless -->
      <session-type>Stateful</session-type>
      <transaction-type>Container</transaction-type>
    </session>
  </enterprise-beans>

  <assembly-descriptor>
    <!-- OPTIONAL, can be many. How the container is to manage
         transactions when calling an EJB's business methods -->
    <container-transaction>
      <!-- Can specify many methods at once here -->
      <method>
        <ejb-name>EmployeeRecord</ejb-name>
        <method-name>*</method-name>
      </method>
      <!-- NotSupported|Supports|Required|RequiresNew|Mandatory|Never -->
      <trans-attribute>Required</trans-attribute>
    </container-transaction>

    <container-transaction>
      <method>
        <ejb-name>Payroll</ejb-name>
        <method-name>*</method-name>
      </method>
      <trans-attribute>Required</trans-attribute>
    </container-transaction>
  </assembly-descriptor>
</ejb-jar>

Spring applicationContext.xml

<beans>
  <bean id="petStore"
        class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager" ref="txManager"/>
    <property name="target" ref="petStoreTarget"/>
    <property name="transactionAttributes">
      <props>
        <prop key="insert*">PROPAGATION_REQUIRED,-MyCheckedException</prop>
        <prop key="update*">PROPAGATION_REQUIRED</prop>
        <prop key="*">PROPAGATION_REQUIRED,readOnly</prop>
      </props>
    </property>
  </bean>
</beans>

The following are the key differences from EJB CMT (from Introduce the Spring Framework)
a. Transaction management can be applied to any POJO. We recommend that business objects implement interfaces, but this is a matter of good programming practice, and is not enforced by the framework.

b. Programmatic rollback can be achieved within a transactional POJO through using the Spring transaction API. We provide static methods for this, using ThreadLocal variables, so you don’t need to propagate a context object such as an EJBContext to ensure rollback.

c. You can define rollback rules declaratively. Whereas EJB will not automatically roll back a transaction on an uncaught application exception (only on unchecked exceptions, other types of Throwable and “system” exceptions), application developers often want a transaction to roll back on any exception. Spring transaction management allows you to specify declaratively which exceptions and subclasses should cause automatic rollback. Default behaviour is as with EJB, but you can specify automatic rollback on checked, as well as unchecked exceptions. This has the important benefit of minimizing the need for programmatic rollback, which creates a dependence on the Spring transaction API (as EJB programmatic rollback does on the EJBContext).

d. Because the underlying Spring transaction abstraction supports savepoints if they are supported by the underlying transaction infrastructure, Spring’s declarative transaction management can support nested transactions, in addition to the propagation modes specified by EJB CMT (which Spring supports with identical semantics to EJB). Thus, for example, if you are doing JDBC operations on Oracle, you can use declarative nested transactions with Spring.

You cannot control the atomicity, consistency, and durability of a transaction, but you can control the transaction propagation and timeout, set whether the transaction should be read-only, and specify the isolation level.

Spring encapsulates all these settings in a TransactionDefinition interface. This interface is used in the core interface of the transaction support in Spring, the PlatformTransactionManager, whose implementations perform transaction management on a specific platform, such as JDBC or JTA. The core method, PlatformTransactionManager.getTransaction(), returns a TransactionStatus interface, which is used to control the transaction execution, more specifically to set the transaction result and to check whether the transaction is read-only or whether it is a new transaction.

Exploring the TransactionDefinition Interface

As we mentioned earlier, the TransactionDefinition interface controls the properties of a transaction.

package org.springframework.transaction;

public interface TransactionDefinition {
    int getPropagationBehavior();
    int getIsolationLevel();
    int getTimeout();
    boolean isReadOnly();
}

Using the TransactionStatus Interface

This interface allows a transaction manager to control the transaction execution. The code can check whether the transaction is a new one, or whether it is a read-only transaction, and it can initiate a rollback.

public interface TransactionStatus {
    boolean isNewTransaction();
    void setRollbackOnly();
    boolean isRollbackOnly();
}
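The second programmatic option mentioned earlier, using a PlatformTransactionManager implementation directly, ties these two interfaces together. A rough sketch:

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;

public class ManualTransaction {
    public static void doWork(PlatformTransactionManager txManager) {
        // DefaultTransactionDefinition defaults to PROPAGATION_REQUIRED
        TransactionStatus status = txManager.getTransaction(new DefaultTransactionDefinition());
        try {
            // ... perform the JDBC/Hibernate work bound to the current thread ...
            txManager.commit(status);
        } catch (RuntimeException e) {
            txManager.rollback(status);
            throw e;
        }
    }
}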

Implementations of the PlatformTransactionManager

This is an interface that uses the TransactionDefinition and TransactionStatus interfaces to create and manage transactions. The actual implementations of this interface must have detailed knowledge of the transaction manager. The DataSourceTransactionManager controls transactions performed within a DataSource; HibernateTransactionManager controls transactions performed on a Hibernate session; JdoTransactionManager manages JDO transactions; and JtaTransactionManager delegates transaction management to JTA.

Configuring Spring’s Transaction Manager for JDBC
To set up transaction management for your applications, you need to configure the transaction
manager of your choice. The simplest way to start is to use the DataSourceTransactionManager. It’s suitable when working with JDBC or iBATIS.

Configuring DataSourceTransactionManager

<beans>
  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
        destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
  </bean>

  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:jdbc.properties"/>
  </bean>

  <bean id="transactionManager"
        class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
  </bean>
</beans>

In the most straightforward scenario, DataSourceTransactionManager will obtain a new Connection object from the DataSource object and bind it to the current thread when a transaction starts. It will remove the Connection object from the current thread when the transaction ends, commit or roll back the active transaction as necessary, and close the Connection object.
Configuring Spring’s Transaction Manager for JTA
An alternative transaction-management strategy is to use a JTA transaction manager. All application servers come with such a transaction manager, although some stand-alone implementations exist. You don't automatically need to use JTA when deploying applications in an application server; nothing stops you from using DataSourceTransactionManager, which gives you the advantage of more independence from the deployment environment. However, in a minority of cases, you will want to delegate transaction management to the JTA transaction manager of your application server. The most common reason for this is the need to work with distributed transactions.

Setting Up Transaction Management via JTA in Spring
<beans>
  <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:env/jdbc/myDataSource"/>
  </bean>

  <bean id="transactionManager"
        class="org.springframework.transaction.jta.JtaTransactionManager"/>
</beans>

When working with the JTA transaction manager of an application server, you must use a DataSource object that was obtained via JNDI from the same application server. JtaTransactionManager needs no special configuration, because the JTA transaction manager in the application server will automatically start and end the transactions on Connection objects that were obtained from the JNDI DataSource object.

Bringing Advanced Transaction Capabilities to Spring Applications
This article discusses Spring's transaction management facilities and the common use cases in Spring where an external transaction manager is required. A real-world application is used to illustrate the transactional aspects and features. The focus is on leveraging JTA transaction management in the Spring Framework for enterprise applications. The article shows how Spring's transaction services can seamlessly expose and interact with a Java EE application server's transaction manager such as the Oracle Application Server and the OC4JJtaTransactionManager.

Coming up next in this series on the "Spring Framework": Aspect Oriented Programming (AOP).

Suggested Reading :

http://www.oracle.com/technology/tech/java/spring/jta_spring_article.pdf

http://www.infoq.com/minibooks/JTDS

Pro Spring Book preview

http://dev2dev.bea.com/pub/a/2005/07/spring_transactions.html

http://www.itvcon.com/read/400907_1.htm

Suggested Video Tutorial


Mission Critical Transaction Management using Spring


Friday, January 18, 2008

What makes "Spring Framework" tick,and that too in a big way-Part 1

The most telling area where Java developers, and for that matter architects, are heavily tested is designing and developing middleware and server-side components. I have developed many such applications using EJB in the past. Even though there has been a lot of growth from the EJB 2.1 to the EJB 3.0 specification (particularly annotations), the areas where I personally had a hard time were unit testing the code and implementing effective transaction management. Some might argue that I am blaming EJB because I can't use it effectively; I am not at all questioning its virtue and value, and I still use it, being a standard framework defined by the Java Community Process (JCP) and supported by all major J2EE vendors. My focus is on how lightweight solutions like Spring, Hibernate, or for that matter Google Guice (even though I haven't had an opportunity to use Guice in an enterprise application so far) have made my life easier with respect to TDD and implementing transaction management, security and persistence in distributed computing.

Lightweight vs Heavyweight frameworks

A software framework is a re-usable design for a software system (or subsystem). A software framework may include support programs, code libraries, a scripting language, or other software to help develop and glue together the different components of a software project. Various parts of the framework may be exposed through an API.

Lightweight framework: Applications developed using a lightweight framework do not have to depend on framework interfaces or abstract classes to "hook" components into the application; frameworks like Spring and Google Guice fall into this category. Heavyweight frameworks, by contrast, require the extension of framework classes or the implementation of framework interfaces in order to take advantage of their middleware features. EJB 2 is probably the most popular example of a heavyweight framework (and I do know about the growth and support in EJB 3 for POJOs, discussed later, which makes it lighter weight and increasingly popular thanks to EJB 3 annotations along with JPA). EJB is a heavyweight model for objects that don't need to offer remote access.

Test-first development has become much more popular in the last few years, and usually produces impressive results. Writing effective unit tests for an application isn't just a question of putting in the time and effort; it can be severely constrained by high-level architecture. This is one of the biggest frustrations with EJB: due to its heavy dependence on the EJB container, business logic coded in EJB is very hard to test. Code that is hard to test is usually also hard to re-engineer, reuse in different contexts, and refactor (though one might argue with me, citing the existence of lightweight frameworks like MockEJB), and testability is an essential characteristic of agile projects.

Spring Framework
Spring is an open source project led by SpringSource (formerly Interface21) and the brainchild of Rod Johnson. Spring is a lightweight framework for the development of enterprise-ready applications. Spring can be used to configure declarative transaction management, remote access to your logic using RMI or web services, mailing facilities and various options for persisting your data to a database. The Spring Framework can be used in a modular fashion: it allows you to use some parts and leave out the components your application does not require. The Spring Framework is a layered architecture consisting of a number of well-defined modules. Here is a brief description of the major ones:

Inversion of Control (IoC) Container: Also called the Core Container, it creates and configures application objects and wires them together. This means that resources and collaborating objects are provided to objects, so the objects do not need to look them up. This moves an important responsibility out of your code and makes it easier to write and test code.

Aspect-Oriented Programming (AOP) framework: Works with cross-cutting concerns: one solution to a problem that's used in multiple places. The Spring AOP framework links cross-cutting concerns to the invocation of specific methods on specific objects (not classes) in such a way that your code is unaware of their presence. The Spring Framework uses cross-cutting concerns and AOP to let your application deal with transactions without having a single line of transaction management code in your code base.

Data Access framework: Hides the complexity of using persistence APIs such as JDBC, Hibernate, and many others. Spring solves problems that have been haunting data-access developers for years: how to get hold of a database connection, how to make sure that the connection is closed, how to deal with exceptions, and how to do transaction management. When using the Spring Framework, all these issues are taken care of by the framework.

Transaction Management framework: Provides a very efficient way to add transaction management to your applications without affecting your code base. Adding transaction management is a matter of configuration, and it makes the lives of application developers much easier; the Spring Framework simplifies it dramatically. Along with unit testing, this is the area I will be dwelling upon here.

Resource Abstraction framework: Offers a wonderful feature for conveniently locating files when configuring your applications.

Validation framework: Hides the details of validating objects in web applications or rich client applications. It also deals with internationalization (i18n) and localization (l10n).

Spring Web MVC (another significant framework, and one of my favorites): Provides a Model-View-Controller (MVC) framework that lets you build powerful web applications with ease. It handles the mapping of requests to controllers and of controllers to views. It has excellent form-handling and form-validation capabilities, and integrates with all popular view technologies, including JSP, Velocity, FreeMarker, XSLT, JasperReports, Excel, and PDF.

Spring Web Flow: Makes implementing web-based wizards and complex workflow processes easy and straightforward. Spring Web Flow is a conversation-based MVC framework. It is distributed separately and can be downloaded via the Spring Framework website.

Acegi Security System: Adds authentication and authorization to objects in your application using AOP. Acegi can secure any web application, even those that do not use the Spring Framework. It offers a wide range of authentication and authorization options that will fit your most exotic security needs. Adding security checks to your application is straightforward and a matter of configuration; you don't need to write any code, except in some special use cases. Acegi is distributed separately and can be downloaded from http://acegisecurity.org/downloads.html.

Remote Access framework: Adds client-server capabilities to applications through configuration. Objects on the server can be exported as remotely available services. On the client, you can call these services transparently, also through configuration. Remotely accessing services over the network thus becomes very easy. Spring's Remote Access framework supports HTTP-based protocols and remote method invocation (RMI), and can access Enterprise JavaBeans as a client.

Spring Web Services: Takes the complexity out of web services and separates the concerns into manageable units. Most web service frameworks generate web service endpoints and definitions based on Java classes, which gets you going really fast but becomes very hard to manage as your project evolves. To solve this problem, Spring Web Services takes a layered approach and separates the transport from the actual web service implementation by looking at web services as a messaging mechanism. Handling the XML message, executing business logic, and generating an XML response are all separate concerns that can be conveniently managed. Spring Web Services is distributed separately and can be downloaded via the Spring Framework website.

Spring JMX: Exports objects via Java Management Extensions (JMX) through configuration. Spring JMX is closely related to Spring's Remote Access framework. The exported objects can then be managed via JMX clients to change the values of properties, execute methods, or report statistics. JMX allows you to reconfigure application objects remotely, without needing to restart the application. A small sketch of the idea follows.
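As a hedged sketch of annotation-based export (available in Spring 2.x; the class and object name below are illustrative, not from any particular application), a POJO can be marked for JMX exposure and then exported by an MBeanExporter defined in the application context:

import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;

// Illustrative POJO: once exported by an MBeanExporter configured in the
// application context, a JMX client can read or write the attribute and
// invoke the operation without restarting the application.
@ManagedResource(objectName = "myapp:name=CacheControl")
public class CacheControl {

    private int timeToLiveSeconds = 300;

    @ManagedAttribute(description = "Cache entry TTL in seconds")
    public int getTimeToLiveSeconds() {
        return timeToLiveSeconds;
    }

    @ManagedAttribute
    public void setTimeToLiveSeconds(int timeToLiveSeconds) {
        this.timeToLiveSeconds = timeToLiveSeconds;
    }

    @ManagedOperation(description = "Evict all cached entries")
    public void flush() {
        // ... eviction logic would go here (illustrative)
    }
}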

OK, let's start exploring the ease, robustness, and scalability that Spring brings to unit testing and transaction management in enterprise applications.

Test Driven Development with Spring Framework
To understand its use, we need to know what a POJO (Plain Old Java Object) is. A POJO is a Java object that doesn't implement any special interfaces or call any framework classes; remember the definition of a lightweight framework. The benefit of POJOs comes from decoupling the application code from the infrastructure frameworks. POJOs accelerate development: we can test our business logic outside of the application server (an important criterion in TDD) and without a database. We don't have to package the code and deploy it to the application server, and we don't have to keep the database schema constantly in sync with the object model or spend time waiting for slow-running database tests to finish.
The Spring Framework provides services for POJOs such as transaction management, dependency injection, support for POJO remoting, and security for POJOs (Acegi).

Dependency Injection
Dependency Injection is one of the core features of Spring Framework.
Dependency injection (DI) refers to the process of supplying an external dependency to a software component and is a specific form of inversion of control where the concern being inverted is the process of obtaining the needed dependency.

Conventionally, if an object needs to gain access to a particular service, the object takes responsibility for getting hold of that service: either it holds a direct reference to the location of that service, or it goes to a known 'service locator' (as in EJB 2) and requests to be passed back a reference to an implementation of a specified type of service. By contrast, with dependency injection the object simply provides a property that can hold a reference to that type of service; when the object is created, a reference to an implementation of that type of service is automatically injected into that property by an external mechanism. The dependency injection approach offers more flexibility because it becomes easier to create alternative implementations of a given service type, and then to specify which implementation is to be used via a configuration file, without any change to the objects that use the service. This is especially useful in unit testing, because it is easy to inject a mock implementation of a service into the object being tested. On the other hand, excessive use of dependency injection can make applications more complex and harder to maintain: to understand the application's behaviour, the developer needs to look at the configuration as well as the code, and the configuration is "invisible" to IDE-supported reference analysis and refactoring unless the IDE specifically supports the dependency injection framework. To use dependency injection in Spring, you configure its bean factory with your object definitions.

Benefits of Dependency Injection


It eliminates the need to call lookup APIs: because a component's dependencies are passed to it, we no longer have to write tedious JNDI code, and components depend mainly on interfaces rather than on concrete implementations.

Types of dependency injection
Constructor injection, setter injection, and method injection. A minimal sketch of the first two follows.
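Here is a hedged sketch, with hypothetical names (MailService, SignupService), of constructor and setter injection on a POJO, and of a unit test injecting a mock implementation by hand, with no container involved:

// Hypothetical service interface (each public type in its own source file).
public interface MailService {
    void send(String to, String message);
}

// A POJO taking one dependency via its constructor and one via a setter.
public class SignupService {
    private final MailService mailService; // constructor injection
    private String welcomeText;            // setter injection

    public SignupService(MailService mailService) {
        this.mailService = mailService;
    }

    public void setWelcomeText(String welcomeText) {
        this.welcomeText = welcomeText;
    }

    public void signup(String email) {
        mailService.send(email, welcomeText);
    }
}

// A plain JUnit test hands the POJO a mock: no JNDI, no application server.
public class SignupServiceTest extends junit.framework.TestCase {
    public void testSignupSendsWelcomeMail() {
        final StringBuffer sent = new StringBuffer();
        MailService mock = new MailService() {
            public void send(String to, String message) {
                sent.append(to).append(':').append(message);
            }
        };
        SignupService service = new SignupService(mock);
        service.setWelcomeText("Welcome!");
        service.signup("user@example.com");
        assertEquals("user@example.com:Welcome!", sent.toString());
    }
}

In production, Spring's bean factory performs the same wiring from configuration instead of the test doing it by hand.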

Spring's org.springframework.test package has a number of base classes to simplify testing:
AbstractDependencyInjectionSpringContextTests - supports both setter-based and field-based dependency injection, and caches loaded context files.
AbstractTransactionalDataSourceSpringContextTests - allows you to easily clear data from tables, and rolls back any data entered into the database.

Testing approaches
Generally speaking, testing should be used to ensure the following aspects of an application:
Correctness: You want to ensure the correctness of your application. For example, suppose that you have written a calculate() method on a Calculator class. You want to make sure that certain input for this method results in a correct calculation result (a sketch of such a test follows this list).
Completeness: Testing can be used to ensure that your application is complete by verifying that all required operations have been executed. Suppose you have a signup process that includes creating an invoice for newly signed-up members. You want to test whether a member is actually added to the database, and also whether an invoice has been created for that member.
Quality: Testing can ensure the quality of your application, and this goes beyond software quality metrics. A well-tested piece of software creates confidence with developers. When existing code needs to be changed, it's less likely that developers will be afraid of unintentionally breaking the software or reintroducing bugs.
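As a tiny, hedged illustration of the correctness aspect (the Calculator class here is hypothetical and assumed to add two integers):

import junit.framework.TestCase;

// Hypothetical unit under test: assumed to add two integers.
class Calculator {
    public int calculate(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest extends TestCase {
    public void testCalculateAddsOperands() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.calculate(2, 3));  // certain input...
        assertEquals(0, calculator.calculate(-1, 1)); // ...correct result
    }
}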

Unit Testing
This so-called plain old Java object (POJO) approach to Java coding, combined with defining interfaces for important parts of your application, provides the basis for thorough testing. The Spring Framework promotes exactly this approach (we could also say the Spring Framework makes this approach possible). By separating your code into well-defined interfaces and objects, you have already defined the units that are eligible for unit testing.
One goal of unit testing is to ensure that each unit of an application functions correctly in isolation. Another goal is to define a framework, harness, or contract (all referring to a strict set of rules that must be respected) that must be satisfied by the unit test. As long as the tests can be run successfully, the unit is considered to work properly. (If there are bugs in the test code, the unit will function properly only according to that buggy test code.)

Unit testing offers a number of benefits for developers:
Facilitate change: As previously mentioned, having a set of unit tests for a specific piece of code provides confidence when refactoring existing code: as long as the tests succeed, the module is known to function correctly according to the available tests. Given enough tests for all the code in the application, this promotes and facilitates changing implementation details of units in the application. An important aspect of facilitating change is preventing solved problems and fixed bugs from reentering the code.
Simplify integration: Unit testing provides a bottom-up testing approach, which ensures that low-level units function properly according to their tests. This makes it easier to write integration tests at a later stage, since behavior already covered at the unit level does not need to be verified twice.
Promote well-defined separation: In order for you to be able to completely and efficiently write unit tests, your code needs to be separated into well-defined units. Each unit needs to be tested in isolation and should therefore allow the replacement of dependencies with test specific ones. Thus, writing unit tests promotes the separation of your application into well-defined units.

Test-driven development (TDD) is a way of implementing code by writing a unit test before implementing the actual class.
Using Spring for Testing
When working with Spring for building your applications, you will typically use one or more XML configuration files for defining your application context. These configuration files are not Java files and will therefore not be compiled. Of course, if you include the Spring DTD or use Spring's namespace support, some aspects of your configuration files will be validated. But issues such as defining a nonexistent class as the class for a bean in your application context, or setting a nonexistent property on a bean definition, are discovered only when you actually load the application context at runtime. This is where integration testing comes into the picture.
The goal of integration testing is to test how the individually tested units of your application collaborate with each other. When working with Spring, you wire those dependencies together using Spring's configuration files. In order to test part of your whole application, you typically want to load the Spring application context and test one or more beans configured in that application.

Spring provides three convenient test base classes for integration testing of your Spring applications: AbstractDependencyInjectionSpringContextTests, AbstractTransactionalSpringContextTests, and AbstractTransactionalDataSourceSpringContextTests. These classes are discussed in the following sections.

The org.springframework.test.AbstractDependencyInjectionSpringContextTests base class is the test class you will typically use when testing parts of your application that do not require access to a database or any other transactional support. You extend this class by first implementing the getConfigLocations() method, which should return an array of application context locations to be loaded by the test. When the test is executed, the specified configurations will be loaded as an application context.
The major advantage of using this base class is that the application context is loaded only once and cached across test methods. If you were to load the application context yourself in the setUp() method of a test, it would be reloaded for every test method. The caching is especially useful when loading configurations that require a lot of initialization, such as a Hibernate session factory.
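A minimal, hedged sketch (the context file name and the SignupService bean are assumptions carried over from the earlier example):

import org.springframework.test.AbstractDependencyInjectionSpringContextTests;

public class SignupServiceIntegrationTest
        extends AbstractDependencyInjectionSpringContextTests {

    private SignupService signupService; // populated by setter injection

    // Spring calls this setter with the matching bean from the context.
    public void setSignupService(SignupService signupService) {
        this.signupService = signupService;
    }

    // Tell the base class which context files to load (and cache).
    protected String[] getConfigLocations() {
        return new String[] { "classpath:applicationContext.xml" };
    }

    public void testServiceIsWired() {
        assertNotNull(signupService);
    }
}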

Another convenient test base class is org.springframework.test.AbstractTransactionalSpringContextTests, which builds on the functionality offered by AbstractDependencyInjectionSpringContextTests. Each test method executed by a subclass of this base class automatically participates in a transaction. Because the default is to roll back the transaction after each test method execution, no actual modifications are made to any transactional resources. This makes it the perfect choice for performing integration tests against a transactional data source.
Using this base class allows you to integration test functionality that makes modifications to the database, without having to worry about the changes affecting any other test methods. It also allows you to test code that requires a transactional context, and you can write data to the database without worrying about cleaning it up afterwards. As mentioned, all modifications to the database are rolled back at the end of each test method execution.
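A hedged sketch (MemberRepository, Member, and the context file name are all hypothetical):

import org.springframework.test.AbstractTransactionalSpringContextTests;

public class MemberRepositoryIntegrationTest
        extends AbstractTransactionalSpringContextTests {

    private MemberRepository memberRepository; // hypothetical DAO, injected

    public void setMemberRepository(MemberRepository memberRepository) {
        this.memberRepository = memberRepository;
    }

    protected String[] getConfigLocations() {
        return new String[] { "classpath:applicationContext.xml" };
    }

    // Runs inside a transaction that is rolled back automatically,
    // so the saved row never pollutes the database.
    public void testSaveMember() {
        memberRepository.save(new Member("jane@example.com"));
        assertNotNull(memberRepository.findByEmail("jane@example.com"));
    }
}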

A third convenient test base class is org.springframework.test.AbstractTransactionalDataSourceSpringContextTests, which builds on the functionality provided by AbstractTransactionalSpringContextTests. In order to use this base class, you need to include a DataSource definition in the application context loaded by the test. The data source is automatically injected, as explained earlier.
The main feature offered by this base class is that it provides you with a JdbcTemplate as a protected field, which you can use to query and modify the database within the transactional context. You could, for instance, insert some data that the test needs in order to succeed. Because the statements issued through the JdbcTemplate are also executed within the transactional context, you do not need to worry about cleaning up the database or modifying existing data. Another advantage of using this base class is that you can define fields on the test that are populated automatically by Spring based on your application context.
Spring also provides support for testing your J2EE-specific application code. Because much of your web application code is tightly tied to J2EE classes, it is hard to test. For instance, testing a servlet or a Spring controller implementation requires you to somehow mock the HttpServletRequest and HttpServletResponse classes; Spring ships mock implementations of these in its org.springframework.mock.web package.
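A hedged sketch of the JdbcTemplate convenience (the members table and the context file name are assumptions):

import org.springframework.test.AbstractTransactionalDataSourceSpringContextTests;

public class MemberTableIntegrationTest
        extends AbstractTransactionalDataSourceSpringContextTests {

    protected String[] getConfigLocations() {
        return new String[] { "classpath:applicationContext.xml" };
    }

    public void testInsertIsVisibleInsideTheTransaction() {
        // The protected jdbcTemplate field is created from the DataSource
        // in the context; this insert is rolled back after the test.
        jdbcTemplate.update(
                "insert into members (email) values (?)",
                new Object[] { "jane@example.com" });
        int rows = jdbcTemplate.queryForInt(
                "select count(*) from members where email = ?",
                new Object[] { "jane@example.com" });
        assertEquals(1, rows);
    }
}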

Transaction Management using Spring
Two main issues make database transactions complicated for developers to work with and difficult for database vendors to implement:
Concurrency control: Databases need to protect against data loss or ghost data, yet allow concurrent access to the same data. Generally, developers can choose an isolation level to control the level of protection. Another form of concurrency control is protecting against lost updates.

Synchronization between transactions: Complex applications often need a way to synchronize two or more databases or other resources so that their local transactions are guaranteed to commit or roll back in a group. The technique used for this is called two-phase commit (or 2PC), distributed transactions, or global transactions.
The first step in setting up transaction management with Spring is choosing a transaction-management strategy, which basically means selecting which of the transaction-management APIs in Java you want to use. The central interface of Spring's transaction abstraction is org.springframework.transaction.PlatformTransactionManager.
Spring provides a number of implementations of this interface that support the most popular transaction-management APIs in Java, such as org.springframework.jdbc.datasource.DataSourceTransactionManager, org.springframework.orm.hibernate.HibernateTransactionManager (Hibernate 2.x), and org.springframework.orm.hibernate3.HibernateTransactionManager (Hibernate 3.x).
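Driving the strategy interface directly is straightforward with Spring's TransactionTemplate helper. Here is a hedged sketch (AccountDao and its methods are hypothetical); note how an isolation level, one of the concurrency controls mentioned above, is chosen in code:

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

public class AccountTransfer {
    private final TransactionTemplate transactionTemplate;

    public AccountTransfer(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        // Concurrency control: pick an isolation level for the transaction.
        this.transactionTemplate.setIsolationLevel(
                TransactionDefinition.ISOLATION_READ_COMMITTED);
    }

    public void transfer(final AccountDao dao, final long from, final long to,
                         final double amount) {
        transactionTemplate.execute(new TransactionCallback() {
            public Object doInTransaction(TransactionStatus status) {
                // Both operations commit or roll back together.
                dao.debit(from, amount);
                dao.credit(to, amount);
                return null;
            }
        });
    }
}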
The Spring bean factory does more than simply instantiate objects. It can also wrap the objects that it creates with interceptors. These interceptors are how Spring provides a simple yet effective AOP (Aspect-Oriented Programming) implementation, and AOP is the foundation of Spring transaction management. To make a POJO transactional, you configure the bean factory to wrap it with a TransactionInterceptor. One option is to use the @Transactional annotation on the interface, implementation class, or individual methods. Another option is to write XML bean definitions that explicitly apply the TransactionInterceptor to the POJO. The XML is more verbose than the annotation, but has the advantage of leaving the source code unchanged. It also works with older JDKs that don't support annotations.
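A hedged sketch of the annotation option (MemberRepository and the signup logic are hypothetical; in Spring 2.x the annotation is switched on with tx:annotation-driven in the bean definitions):

import org.springframework.transaction.annotation.Transactional;

// Plain POJO: the bean factory wraps it with a TransactionInterceptor,
// so no transaction-management code appears in the class itself.
public class MemberService {

    private MemberRepository memberRepository; // injected by Spring

    public void setMemberRepository(MemberRepository memberRepository) {
        this.memberRepository = memberRepository;
    }

    @Transactional
    public void signup(String email) {
        Member member = new Member(email);
        memberRepository.save(member);          // both calls commit or
        memberRepository.createInvoice(member); // roll back together
    }
}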

Spring doesn't implement transactions itself but is, instead, a wrapper around other transaction-management APIs. Unlike EJB 3, it gives you the flexibility of using either JTA or the transaction-management APIs provided by ORM frameworks such as Hibernate.

In the next article of this series, we will explore more about transaction management in Spring, as well as AOP (Aspect-Oriented Programming).

Suggested Video Tutorials

Spring 2.x by Rod Johnson (founder of the Spring Framework)

Integration Testing with Spring by Rod Johnson

Spring vs. EJB 3.0 by Debu Panda

Spring Experience video by Adrian Colyer
Adrian Colyer is the leader of the AspectJ open source project and a well-known industry expert on the topic of aspect-oriented programming (AOP)



Suggested Reading:

http://java.sys-con.com/read/180374.htm

http://static.springframework.org/spring/docs/2.5.1/reference/testing.html

http://static.springframework.org/spring/docs/1.2.x/reference/transaction.html

http://mike.hostetlerhome.com/2007/07/06/easier-unit-testing-in-spring/

http://dev2dev.bea.com/pub/a/2005/07/spring_transactions.html



Tuesday, January 15, 2008

Rich Internet Applications - One More Buzzword (Adobe Flex)

Along with SOA and BPM, the other buzzword in the Java community with regard to J2EE application development nowadays is Rich Internet Applications (RIA).
Personally, I think this is strategically the most important aspect that a seasoned Java/J2EE architect needs to look into: how best to improve the usability and appearance of the corporate applications he is working on, because ultimately what matters to users is a better, more scalable, and enhanced GUI. Along with Web Services (SOA), mobile computing, and BPM (Business Process Management), this is one area where I am really focused, and I suggest every senior Java/J2EE developer consider having a look at it. To start with, I guess every Java person who is interested in GUI development tries to acquaint himself with technologies like HTML, DHTML, CSS, JavaScript, Java AWT, Swing, XHTML, etc. And there is ever-growing interest in building AJAX-style web applications using different frameworks and technologies, which I will touch upon in this article down the line. So what is this RIA all about?

To start with the basic definition: Rich Internet Applications (RIAs) are web applications that have the features and functionality of traditional desktop applications. RIAs typically transfer the processing necessary for the user interface to the web client, but keep the bulk of the data (i.e., maintaining the state of the program, the data, etc.) back on the application server.
RIAs typically:
* run in a web browser, or do not require software installation
* run locally in a secure environment called a sandbox
Compared to RIAs,traditional web applications centered all activity around a client-server architecture with a thin client. Under this system all processing is done on the server, and the client is only used to display static (in this case HTML) content. The biggest drawback with this system is that all interaction with the application must pass through the server, which requires data to be sent to the server, the server to respond, and the page to be reloaded on the client with the response. By using a client side technology which can execute instructions on the client's computer, RIAs can circumvent this slow and synchronous loop for many user interactions.

All RIAs share one characteristic: they introduce an intermediate layer of code, often called a client engine, between the user and the server. This client engine is usually downloaded at the beginning of the application, and may be supplemented by further code downloads as the application progresses. The client engine acts as an extension of the browser, and usually takes over responsibility for rendering the application's user interface and for server communication.

Benefits
Because RIAs employ a client engine to interact with the user, they are:
* Richer. They can offer user-interface behaviors not obtainable using only the HTML widgets available to standard browser-based Web applications. This richer functionality may include anything that can be implemented in the technology being used on the client side, including drag and drop, using a slider to change data, and calculations performed only by the client that do not need to be sent back to the server, for example, a mortgage calculator.

* More responsive. The interface behaviors are typically much more responsive than those of a standard Web browser that must always interact with a remote server.

The most sophisticated examples of RIAs exhibit a look and feel approaching that of a desktop environment. Using a client engine can also produce other performance benefits:
* Client/Server balance. The demand for client and server computing resources is better balanced, so that the Web server need not be the workhorse that it is with a traditional Web application. This frees server resources, allowing the same server hardware to handle more client sessions concurrently.

* Asynchronous communication. The client engine can interact with the server without waiting for the user to perform an interface action such as clicking on a button or link. This allows the user to view and interact with the page asynchronously from the client engine's communication with the server. This option allows RIA designers to move data between the client and the server without making the user wait. Perhaps the most common application of this is prefetching, in which an application anticipates a future need for certain data, and downloads it to the client before the user requests it, thereby speeding up a subsequent response. Google Maps uses this technique to move adjacent map segments to the client before the user scrolls their view.

* Network efficiency. The network traffic may also be significantly reduced because an application-specific client engine can be more intelligent than a standard Web browser when deciding what data needs to be exchanged with servers. This can speed up individual requests or responses because less data is being transferred for each interaction, and overall network load is reduced. However, use of asynchronous prefetching techniques can neutralize or even reverse this potential benefit. Because the code cannot anticipate exactly what every user will do next, it is common for such techniques to download extra data, not all of which is actually needed, to many or all clients.

Shortcomings and restrictions

Shortcomings and restrictions associated with RIAs are:

* Sandbox. Because RIAs run within a sandbox, they have restricted access to system resources. If assumptions about access to resources are incorrect, RIAs may fail to operate correctly.

* Disabled scripting. JavaScript or another scripting language is often required. If the user has disabled active scripting in their browser, the RIA may not function properly, if at all.

* Client processing speed. To achieve platform independence, some RIAs use client-side scripts written in interpreted languages such as JavaScript, with a consequential loss of performance. This is not an issue with compiled client languages such as Java, where performance is comparable to that of traditional compiled languages, or with Flash movies, in which the bulk of the operations are performed by the native code of the Flash player.

* Script download time. Although it does not have to be installed, the additional client-side intelligence (or client engine) of RIA applications needs to be delivered by the server to the client. While much of this is usually automatically cached it needs to be transferred at least once. Depending on the size and type of delivery, script download time may be unpleasantly long. RIA developers can lessen the impact of this delay by compressing the scripts, and by staging their delivery over multiple pages of an application.

* Loss of integrity. If the application-base is X/HTML, conflicts arise between the goal of an application (which naturally wants to be in control of its presentation and behaviour) and the goals of X/HTML (which naturally wants to give away control). The DOM interface for X/HTML makes it possible to create RIAs, but by doing so makes it impossible to guarantee correct function. Because an RIA client can modify the RIA's basic structure and override presentation and behaviour, it can cause failure of the application to work properly on the client side. Eventually, this problem could be solved by new client-side mechanisms that granted an RIA client more limited permission to modify only those resources within the scope of its application. (Standard software running natively does not have this problem because by definition a program automatically possesses all rights to all its allocated resources).

* Loss of visibility to search engines. Search engines may not be able to index the text content of the application.

* Dependence on an Internet connection. While the ideal network-enabled replacement for a desktop application would allow users to be "occasionally connected", wandering in and out of hot-spots or from office to office, today the typical RIA requires network connectivity. And I think this year will decide whether RIAs are here to stay or will go off like smoke without fire.

OK, enough talking about RIA, its benefits, and its shortcomings. Let's talk about what we are really concerned with: which framework or technology is best for RIA. I think that should be the primary focus now, rather than the potential or growth of RIA. There are many choices, among them Adobe Flex/Flash, Ajax, JSF (yes, it can also be used), JavaFX, Dojo, Swing, Google Web Toolkit, Google Gears, Eclipse RIA, and Microsoft Silverlight. With so many choices, the obvious question is which one is the most efficient and best suited for RIA.

Personally, I like Adobe Flex. The Flex SDK comes with a set of user interface components including buttons, list boxes, trees, data grids, several text controls, and various layout containers. Charts and graphs are available as an add-on. Other features like web services, drag and drop, modal dialogs, animation effects, application states, form validation, and other interactions round out the application framework. I have gone through its documentation, such as the Flex 2 Developer's Guide and Building and Deploying Flex 2 Applications, and I must say this tool is here to stay.

There are, however, more specific and compelling reasons to adopt Flex. Flex builds on the advances made in the Java/J2EE community over the last decade. New Java converts to the Flex programming model will find the framework, language, and tools easy to learn, as there is a familiarity in the Flex IDE and the language structures, like the Flex Collections API. The Flex development tools offer the clearest link for current Java developers. The Flex IDE is built on Eclipse, which can be used as the stand-alone Flex Builder product or as an Eclipse plug-in. Virtually all Java developers have had some exposure to the Eclipse development environment. This is a huge benefit that speeds and enhances the learning process.

In addition to the IDE, Flex also has Ant Tasks for automated builds of Flex applications, whether they are integrated with a Java application or stand-alone. Once again, Ant is a technology Java developers have all been exposed to in the Java community. With Adobe's comprehensive suite of tools, integrating Flex into your skill set is natural and enjoyable.

In a multi-tiered model, Flex applications serve as the presentation tier. Unlike page-based HTML applications, Flex applications provide a stateful client where significant changes to the view don't require loading a new page. Similarly, Flex and Flash Player provide many useful ways to send and load data to and from server-side components without requiring the client to reload the view. Though this functionality offered advantages over HTML and JavaScript development in the past, the increased support for XMLHttpRequest in major browsers has made asynchronous data loading a common practice in HTML-based development too.

There are frameworks ranging from testing to MVC in Adobe Flex. A good example of this is the Cairngorm MVC framework. Cairngorm is written in ActionScript 3.0 and follows many of the Core J2EE patterns. It provides Flex developers with an MVC framework for structuring their Flex presentation code and the interactions with the business services. Flex also provides APIs for multiple methods of integrating with backend services, including LiveCycle Data Services ES (Data Services), Web Services, and HTTP.

LiveCycle Data Services ES deploys in a Servlet/J2EE container and provides significant server-side infrastructure to Flex applications. Data Services offers a number of features, including remoting, publish/subscribe messaging, and data management (client-server synchronization, paging, notification, a Hibernate adapter, etc.). The simplest use allows a Java developer to expose Plain Old Java Objects (POJOs) as back-end services via simple configuration. Objects are transferred to the client in a compact binary format called AMF3 and converted to ActionScript 3.0 classes, which is far more efficient in transfer and rendering time than text-based formats like XML. Combining Data Services with the popular Java POJO-based frameworks, like Hibernate and Spring, creates a truly powerful combination for building enterprise applications.

Things you need to get started with Adobe Flex
Flex Builder 2
Flex Builder 2 is the new development tool that (although it is not necessary for Flex development, because any text editor can be used) offers the most complete development environment for rapidly creating Flex applications. Because Flex Builder 2 has been built on top of the mature Eclipse IDE, it will be very familiar to developers who have already been developing software in other languages using Eclipse as their tool of choice. Eclipse is a development environment that was built with extensibility in mind, and this also works to the benefit of Flex developers, because it can be customized to suit their needs.

Flex Free SDK 2
The Flex Software Development Kit (SDK) is a free download from Adobe and includes the Flex framework (component class library), compiler, and debugger. Using the text editor of your choice, you can create the ActionScript and MXML files for your application, and then compile to SWF using the SDK. The SDK allows for data communication via Web services, HTTP services, and Flash remoting with ColdFusion as the back-end server.

Flex Data Services (FDS)
FDS is the server version of Flex, installed into a Java 2 Enterprise Edition (J2EE) server. FDS is essentially the next generation of Flex 1.5, which was sold only as a server.

As well as a full commercial edition, the FDS server is available in an Express edition, which is free and available for commercial use (but is limited to one CPU without clustering).

FDS is made up of several components and capabilities, including the following:
Flex Messaging Services (FMS)
Publish-subscribe messaging
Data push
RPC services
Flex Messaging Services
FMS is one of the pieces that make up FDS, and it allows for the creation of applications that support real-time messaging, as well as collaboration. The messaging service has support for Java Message Service (JMS), as well as other existing messaging services, allowing for the creation of cross-platform chat applications.

Publish-Subscribe Messaging
FMS uses the producer/consumer publish-subscribe metaphor, which allows for the creation of co-browsing applications. To understand what is meant by a co-browser application, imagine a company’s customer service representative being able to make edits to a form on the user’s screen in real time while the user watches.

Data Push
Data push is the capability for the server to push data changes to the client without any user interaction or polling of the servers. This can be critical when an application has hundreds or thousands of users connected, and they can all see changes to business-critical data in real time. This is a feature that is available only when using the FDS server.

RPC Services
Remote procedure call (RPC) services are the suite of services used to connect to external data. They include the following:

WebService - The WebService component can be used to access any Web service that complies with the WSDL 1.1 standard and returns data formatted as Simple Object Access Protocol (SOAP) messages over HTTP.

HTTPService - The HTTPService component can send HTTP GET, POST, HEAD, OPTIONS, PUT, TRACE, or DELETE requests. It does not support multipart requests.

RemoteObject - The RemoteObject component uses Action Message Format (AMF), a binary format, to transfer data, and is the fastest of the RPC services. It is an ideal way to interact with server-side Java objects that are within the FDS server's source path. It can also be used to connect to local or even remote ColdFusion servers. Both Java objects and ColdFusion components can be mapped to ActionScript objects, allowing for seamless integration between the server-side and client-side objects.

The WebService and HTTPService components are included for free with the Flex SDK, whereas the RemoteObject component is available only for use with the FDS server (Commercial or Free Express Edition) or ColdFusion 7.02.

Flex Charting
The Flex charting components are a set of rich charting components that enable easy development of professional dashboard and business intelligence (BI) systems. The charting components are sold as a stand-alone product or bundled with Flex Builder 2.

Flex 2 Programming Model
The Flex 2 Programming Model consists of MXML, ActionScript, and the Flex class library. To build a full-fledged application, you must have a good knowledge of all these technologies. To start off, you are introduced to the basics of MXML. Then you are shown how ActionScript and MXML work together to create powerful, rich user interfaces.

The third element of the programming model is the Flex 2 Framework, which contains Flex components, managers, and behaviors. This component-based development model allows developers to incorporate pre-built components, extend the component library by creating new components, or combine pre-built components to create composite components.

MXML
The first element of the programming model, MXML, is an XML language that defines the user interface for an application. MXML is also used to define non-visual aspects such as server-side data sources and bindings between the user interface and the server side.

To write a Flex application, you must be able to write MXML and ActionScript. As mentioned earlier, MXML is an XML language that is used to lay out your user interface.

MXML is very similar to HTML in that it provides tags for user interface elements. If you have worked with HTML, then MXML should be very familiar to you. MXML has a much broader tag set than HTML, and defines visual components such as data grids, buttons, combo boxes, trees, tab navigators, and menus, as well as non-visual components, Web service connections, data binding, and effects.

The biggest difference between HTML and MXML is that an MXML file is compiled into a Shockwave file (SWF) and rendered by the Flash Player. MXML can be written in a single file or in multiple files. MXML requires that you close every tag that you declare in your Flex application; otherwise, the Flex compiler will throw an error.

The <mx:Application> tag is always the root tag of a Flex application. The <mx:Panel> tag defines a Panel container that includes a title, title bar, status message, border, and an area for its children (that is, the controls). The <mx:Label> tag is a Label control that simply displays text.

ActionScript
The second element of the programming model, ActionScript, extends the functionality of a Flex application. ActionScript provides control and object-manipulation features that are not available in strict MXML.

The ActionScript programming language is used in Adobe's Flash Player. Included are built-in objects and functions, and you can create your own objects and functions, as in many object-oriented programming (OOP) languages.

ActionScript 3.0 offers a robust programming model that is more familiar to developers with basic knowledge of OOP. ActionScript is executed by the ActionScript Virtual Machine (AVM), which is part of the Flash Player. The code is compiled into a bytecode format by the compiler, such as the one built into Flex Builder 2.

ActionScript and JavaScript are very similar, so being familiar with JavaScript will make your life easier when using ActionScript. JavaScript is standardized by the European Computer Manufacturers Association (ECMA) in document ECMA-262, the international standard for the JavaScript language. ActionScript is based upon the ECMA-262 Edition 4 proposal, which is where the similarities between the two languages come from.

ActionScript 3.0 contains a lot of new features, such as run-time exceptions, run-time types, sealed classes, method closures, ECMAScript for XML (E4X), regular expressions, namespaces, and new primitive types. All these new features allow the developer to speed up the development process.

One important point to note is that Flex Builder 2 converts all MXML files into ActionScript before the code is compiled into bytecode for use with the AVM.

ActionScript in MXML
Flex developers will find that ActionScript extends the capabilities of Flex when developing applications. In ActionScript, you can define event listeners, create new classes, handle callback functions, and define getters and setters, packages, and components.

Following are some of the ways that ActionScript can be used in Flex applications:

ActionScript is inserted between <mx:Script> tags. This code can comprise new functions, error handling, or event handlers, and can perform any other tasks your Flex application may require.

Create new components in ActionScript.

Create new components and compile into SWC files (which are external components compiled to be reused in many applications).
Extend existing components with ActionScript classes.
Take advantage of ActionScript support for OOP concepts such as code reuse, inheritance, encapsulation, and polymorphism.
Use global ActionScript functions that are available in any part of an SWF file where ActionScript is used, or in any user-defined class.
Use the standard Flex components that comprise the Flex Class Libraries. All the components and libraries are written in ActionScript.

Overview of ActionScript Compilation Process
A Flex application can consist of MXML files, ActionScript classes, SWF files, and SWC files.

The Flex compiler transforms the main MXML file and its children into a single ActionScript class linking in all the reference-imported classes. Once the transformation is complete, the end result is a single SWF file that can be deployed on the server.

Every time you make changes to your application, a new SWF file is generated. You may find it more convenient to remove the reusable components and compile those into SWC files, which can be included in a library path for your application.

Statements and expressions must be wrapped in functions; otherwise, you will receive a compiler error. Also, you cannot define classes or interfaces in the <mx:Script> tag, because the MXML file you have defined is a class in itself.

You may have noticed the odd CDATA construct. This is used to tell the compiler not to interpret the script block as XML, but rather as ActionScript. The <mx:Script> tag must be located under the top-level component tag of the MXML file. Multiple <mx:Script> tags can be defined, but for readability, it's recommended to keep them in one location.

In MXML, you can use ActionScript to refer to visual or non-visual components. You do this by specifying the id property of the component in MXML. Then you use id to refer to the component in ActionScript.

If you want to access a control, you must refer to its id property in ActionScript. For example, for a Label control with an id of lbl1, the following code can be used:
var str:String = lbl1.text;

The code gets the value from the Label control named lbl1.

Import and Include
There is a noticeable difference between the terms “importing” and “including” in reference to ActionScript. Importing is adding a reference to a class file or package so that you can access the objects and properties of the external classes, as shown in the following example:

import mx.controls.TextInput;

Including ActionScript is copying the lines of code of an external ActionScript file into another. This can be done either with the include directive inside an <mx:Script> block, or with the source attribute of the <mx:Script> tag, to add the ActionScript to your Flex application.

Introduction to ActionScript Components
This section provides a brief overview of how to create reusable components using ActionScript. More complex components can be built by combining MXML and ActionScript.

Custom components can contain graphical elements, define some kind of business logic, or extend existing components in the Flex framework. Defining your own components in ActionScript has several advantages:

Divide your application into individual modules that can be developed and maintained separately.

Implement commonly used logic within the components.

Build a collection of components that can be reused among all your Flex applications.

When creating custom components, you may find it useful to extend the component from the Flex class hierarchy. This allows you to inherit functionality already built into the Flex components.

As shown in the following example, you can define a custom TextInput and derive it from the Flex TextInput control:


package myComponents
{
    // Import the Flex control we are extending.
    import mx.controls.TextInput;

    public class MyTextInput extends TextInput
    {
        public function MyTextInput()
        {
            super(); // invoke the TextInput constructor
            // ... custom initialization
        }
        // ... additional properties and methods
    }
}

The MyTextInput class must be saved in a file called MyTextInput.as, stored in a subdirectory named myComponents at the root of your Flex application. As you may have already picked up, the package name reflects the location of the component: myComponents.

Flex Class Library
The Flex 2 Framework contains Flex managers, components, and behaviors. The component-based model allows developers to incorporate pre-built components, create new ones, or combine components into composite components.

The following lists some of the common packages that you will use in your Flex application:
mx.controls - Flex user interface controls

mx.collections - Flex collection components

mx.charts - Flex charting components

mx.utils - Flex utility classes

flash.events - event classes (part of the Flash Player API)

flash.net - classes for sending and receiving data over the network, such as URL downloading and Flash Remoting

Just have a look at the Adobe Developer Connection,http://www.adobe.com/devnet/ria/ and you will surely agree that it is the most useful tool and for samples visit here,http://www.adobe.com/devnet/flex/index.html?tab:samples=1