Monday, February 18, 2008

Developing Secure Web Services

Web Services, SOA, everywhere. Well, they have certainly had a positive impact on the IT community and are proving their worth, having been adopted by leading industry players such as IBM, HP, Oracle, Microsoft, Novell, and Sun for their Web Services products. Yes, I am talking about SOA and Web Services, which are here to stay and look set to dominate the deployment of major enterprise applications in the coming years.


IBM’s definition of Web Services states that “Web Services are self-contained, modular applications that can be described, published, located, and invoked over a network, generally, the World Wide Web.” In one of my previous articles, Web Services Patterns, we looked at the advantages provided by SOA applications.
When the definition refers to Web Services being invoked over the World Wide Web, it means that they use HTTP as the transport layer and an XML-based message layer. However, Web Services do not actually require HTTP—XML-formatted data can be sent over other transport protocols (message queuing, for example), which may be more suited to mission-critical transactions.

Web Services generally use the HTTP and SSL ports (TCP ports 80 and 443, respectively) in order to pass through firewalls. In the early days of “Web Services,” vendors would say that their products were “firewall compliant.” This meant that firewalls would not block the Web Services traffic, whereas CORBA traffic attempting to use CORBA-specific ports might be blocked. Web Services make it easier to deploy distributed computing without having to open firewall ports, or having to “punch a hole in the firewall” as network administrators like to say. This “under the radar” deployment has serious security implications. Most firewalls are unable to distinguish Web Services traffic, traveling over HTTP and SSL ports, from Web browser traffic.
The word “Services” in Web Services refers to a Service-Oriented Architecture (SOA). SOA is a recent development in distributed computing, in which applications call functionality from other applications over a network. In an SOA, functionality is “published” on a network where two important capabilities are provided— “discovery,” the ability to find the functionality, and “binding,” the ability to connect to the functionality. In the Web Services architecture, these activities correspond to three roles: Web Service provider, Web Service requester, and Web Service broker, which correspond to the “publish,” “find,” and “bind” aspects of a Service-Oriented Architecture.

Web Services security focuses on the application layer, although security at the lower layers remains important. The implementation technologies on which we focus are HTTP and SOAP, although we will keep SMTP security in mind also, since SOAP can be bound to SMTP as well as HTTP.

It may not seem immediately obvious why security for SOAP presents such a challenge. After all, SOAP is generally bound to HTTP, which already has SSL for authentication and confidentiality. In addition, many Web authorization tools already exist. It is reasonable to ask why these aren't enough, and the answer is made up of a number of reasons. The first reason is that, although frequently bound to HTTP, SOAP is independent of the underlying communications layers. Many different communications technologies can be used in the context of one multi-hop SOAP message; for example, using HTTP for the first leg, then SMTP for the next leg, and so forth. End-to-end security cannot therefore rely on a security technology that presupposes one particular communications technology. Even in the case of a single SOAP message targeted at a Web Service, transport-level security only deals with the originator of the SOAP request. SOAP requests are generated by machines, not by people. If the Web Service wishes to perform security based on the end user, it must have access to authentication and/or authorization information about the end user on whose behalf the SOAP request is being sent. This is the second reason for Web Services security.

SOAP is a technology used to enable software to talk to other software much more easily than was previously possible. End users (that is, humans) do not make SOAP messages themselves. However, if access to the Web Service is to be decided based on information about the end user, the Web Service must have access to the information that allows it to make this authorization decision. This information does not have to include the end user's actual identity.

How can this information about the end user be conveyed to the Web Service? Session-layer or transport-layer security between the application server and the Web Service doesn't convey information about the identity of the end user of the Web Service. It merely conveys information about the application server that is sending the SOAP message, and many of the requests to the Web Service may originate from that one application server. This challenge is addressed by including security information about the end user in the SOAP message itself. This information may concern the end user's identity, attributes of the end user, or simply an indication that this user has already been authenticated and/or authorized by the Web server. This information allows the Web Service to make an informed authorization decision. This scenario is likely to be widespread where many Web Services are used to implement functionality “behind the scenes.” It shouldn't be the case that the end user has to reauthenticate each time a SOAP request must be sent on their behalf. The challenge of providing this functionality is sometimes called “single sign-on” or “federated trust.”

WS-Routing provides a means for SOAP messages to be routed between multiple Web Services. WS-Routing defines how to insert routing information into the header of a SOAP message. This routing information can be thought of as equivalent to the routing tables that operate at lower layers of the OSI stack for routing IP packets. WS-Routing means that one SOAP message may traverse multiple SOAP “hops” between the originator and the endpoint. The systems that implement these hops may have nothing in common apart from the ability to parse and route a SOAP message. When routing between Web Services, the requirement for confidentiality can apply from the originator through to the final SOAP endpoint. It may be a requirement that information be kept secret from SOAP intermediaries. There is a chance that intermediaries may disclose the information, either deliberately or through leaving “gaps” between one transport-level security session and the next. While the data is decrypted, it is vulnerable. This is the same problem that plagued the first release of the Wireless Application Protocol (WAP), in which data was decrypted in between the wireless encryption session and the encryption on the fixed wire. This so-called “WAP gap” caused a loss of confidence in WAP security and was addressed in later releases of the WAP specification. Implementing encryption only at the transport level creates an analogous “SOAP gap.”

It is often noted that most security breaches happen not while data is in transit, but while data is in storage. Attackers follow the path of least resistance: attempting to decrypt eavesdropped data from an SSL session is much more difficult than simply testing whether a Web site maintainer has remembered to block direct access to the database where the credit card numbers are stored. If decrypted data is stolen from a database, the consequences are no less dramatic. Once data has reached its final destination, it must be stored in a secure state. Confidentiality for a SOAP transaction should not simply involve chaining instances of transport-level confidentiality together, since “SOAP gaps” of unencrypted data exist between each decryption and re-encryption.

Web Services Security Specifications
Confidential information in a SOAP message should remain confidential over the course of a number of SOAP hops. A number of industry specifications have been developed for this purpose. These specifications can be organized into two distinct categories:

  • A standardized framework for including XML-formatted security data in SOAP messages.

  • Standards for expressing security data in XML format. This security information should serve the high-level principles of security: confidentiality, authentication, authorization, integrity, and so forth.

WS-Security
WS-Security has emerged as the de facto method of inserting security data into SOAP messages. Work on WS-Security began in 2001; it was published by Microsoft, VeriSign, and IBM in April 2002, and was then submitted in June 2002 to the OASIS standards body in order to be made into an industry standard. WS-Security defines placeholders in the SOAP header in which to insert security data. It defines how to add encryption and digital signatures to SOAP messages, as well as a general mechanism for inserting arbitrary security tokens. WS-Security is “tight” enough to present the definitive means of including security data in SOAP messages, but “loose” enough not to place limits on what that security data can be.
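To make this concrete, here is a minimal sketch (assuming the SAAJ API, javax.xml.soap, that ships with Java SE 6) of adding a wsse:Security header carrying a UsernameToken to a SOAP message. The element and namespace names follow the OASIS WS-Security 2004 UsernameToken profile as I recall them; in a real project you would normally let a WS-Security toolkit such as Apache WSS4J build and process this header for you.

```java
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

public class WsSecurityHeaderSketch {

    // Namespace URI from the OASIS WS-Security 1.0 (2004) specification.
    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public static SOAPMessage buildMessage(String user, String password) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        SOAPHeader header = message.getSOAPHeader();

        // <wsse:Security> is the placeholder WS-Security defines for security tokens.
        SOAPHeaderElement security =
            header.addHeaderElement(new QName(WSSE_NS, "Security", "wsse"));

        // A UsernameToken is just one kind of token; SAML assertions, X.509
        // certificates, and so on travel in this same header.
        SOAPElement usernameToken = security.addChildElement("UsernameToken", "wsse");
        usernameToken.addChildElement("Username", "wsse").addTextNode(user);
        usernameToken.addChildElement("Password", "wsse").addTextNode(password);

        message.saveChanges();
        return message;
    }
}
```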

XML Encryption
XML Encryption is a specification from the W3C. It provides not only a way of encrypting portions of XML documents, but also a means of encrypting any data and rendering the encrypted data in XML format. XML Encryption makes encryption functionality easier to deploy.

XML Encryption is not a replacement for SSL. SSL is still the de facto choice for confidentiality between two entities that are communicating using HTTP. However, if the security context extends beyond this individual HTTP connection, XML Encryption is ideal for confidentiality. The capability to encrypt XML is nothing new, because XML is just text, after all. However, the ability to selectively encrypt XML data is what makes XML Encryption so useful for Web Services. Encrypting an entire SOAP message is counterproductive, because the SOAP message must include enough information to be useful—routing information, for example. Selectively encrypting data in the SOAP message is useful, however: certain information may be hidden from SOAP intermediaries as it travels from the originator to the destination Web Service.

XML Encryption does not introduce any new cryptography algorithms or techniques. Triple-DES or RSA encryption may still be used for the actual encryption. XML Encryption provides a way to format the meta-information about which algorithm was used and when the encryption occurred. This aids the Web Service in decrypting the data, provided the decryption key is available to it. This is important, because prior to XML Encryption the only standardization of encrypted data was for e-mail messages (that is, S/MIME). If an organization wished to send encrypted data to another organization, both organizations would have to agree on the format of the encrypted data, how and which algorithms to use, and possibly also how to send an encrypted key. Now that information can be contained in an XML Encryption block.
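As a rough illustration of the idea (a sketch, not a conforming XML Encryption implementation), the following encrypts a credit card number with Triple-DES via the standard JCE API and wraps the Base64-encoded ciphertext in a simplified EncryptedData-style element. A real system would use an XML Encryption library such as Apache XML Security (Santuario), and would also convey the IV and an encrypted key to the recipient.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.xml.bind.DatatypeConverter;

public class XmlEncryptionSketch {

    // Algorithm identifier defined by the W3C XML Encryption specification.
    private static final String TRIPLE_DES_CBC =
        "http://www.w3.org/2001/04/xmlenc#tripledes-cbc";

    public static void main(String[] args) throws Exception {
        // In practice the key would be agreed with, or wrapped for, the recipient;
        // here we simply generate one for the demonstration.
        SecretKey key = KeyGenerator.getInstance("DESede").generateKey();

        Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        // Note: with CBC mode the IV (cipher.getIV()) must also reach the recipient.
        byte[] cipherText = cipher.doFinal("4111-1111-1111-1111".getBytes("UTF-8"));

        // Simplified EncryptedData structure: only enough to show how the
        // algorithm identifier and ciphertext travel inside the XML.
        String encryptedData =
            "<xenc:EncryptedData xmlns:xenc='http://www.w3.org/2001/04/xmlenc#'>\n" +
            "  <xenc:EncryptionMethod Algorithm='" + TRIPLE_DES_CBC + "'/>\n" +
            "  <xenc:CipherData>\n" +
            "    <xenc:CipherValue>" +
                 DatatypeConverter.printBase64Binary(cipherText) +
            "</xenc:CipherValue>\n" +
            "  </xenc:CipherData>\n" +
            "</xenc:EncryptedData>";

        System.out.println(encryptedData);
    }
}
```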

XML Signature
XML Signature is a specification produced jointly by the W3C and the Internet Engineering Task Force (IETF). Like XML Encryption, it does not apply only to XML. As well as explaining how to digitally sign portions of an XML document, XML Signature also explains how to express the digital signature of any data as XML. As such, it is an “XML-aware digital signature.” PKCS#7 is an older means of rendering encrypted and signed data, predating XML Signature and XML Encryption. Rather than using XML, it uses Abstract Syntax Notation One (ASN.1). ASN.1 is a binary format, renowned for its complexity. Producing or verifying a PKCS#7 signature requires not just cryptography software, but also an ASN.1 interpreter. XML Signature also requires cryptography software, of course, but an XML DOM replaces the ASN.1 interpreter.
The power of XML Signature for Web Services is the ability to selectively sign XML data. For example, a single SOAP parameter passed to a method of a Web Service may be signed. If the SOAP request passes through intermediaries en route to the destination Web Service, XML Signature ensures end-to-end integrity. WS-Security describes how to include XML Signature data in a SOAP message. Because XML Signature can be very selective about which data in an XML instance is signed, it is particularly useful for Web Services: if a single SOAP parameter needs to be signed but the SOAP message's header needs to change during routing, an XML Signature can be used that signs only the parameter in question and excludes other parts of the SOAP message. Doing so ensures end-to-end integrity for the SOAP parameter while permitting changes to the SOAP header information.
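Java SE 6 includes an implementation of XML Signature through the javax.xml.crypto.dsig API (JSR 105). The sketch below signs only the element referenced as "#PaymentInfo" within a DOM document, which is exactly the kind of selective signing described above. It assumes the target element carries an Id="PaymentInfo" attribute that has been registered as an ID attribute, and it generates a throwaway RSA key pair purely for illustration.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;

import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignatureMethod;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;

import org.w3c.dom.Document;

public class SelectiveSignatureSketch {

    public static void signPaymentInfo(Document doc) throws Exception {
        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");

        // Reference only the element whose Id attribute is "PaymentInfo"
        // (assumed to be registered as an ID, e.g. via Element.setIdAttribute);
        // headers and routing data stay unsigned and may still be modified.
        Reference ref = fac.newReference("#PaymentInfo",
                fac.newDigestMethod(DigestMethod.SHA1, null));

        SignedInfo signedInfo = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.EXCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(SignatureMethod.RSA_SHA1, null),
                Collections.singletonList(ref));

        // Throwaway key pair; a real service would use its configured credentials.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keyPair = kpg.generateKeyPair();

        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyInfo keyInfo = kif.newKeyInfo(
                Collections.singletonList(kif.newKeyValue(keyPair.getPublic())));

        // Place the <ds:Signature> element under the document root and sign.
        DOMSignContext signContext =
                new DOMSignContext(keyPair.getPrivate(), doc.getDocumentElement());
        XMLSignature signature = fac.newXMLSignature(signedInfo, keyInfo);
        signature.sign(signContext);
    }
}
```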

Security Assertion Markup Language (SAML) provides a means of expressing information about authentication and authorization, as well as attributes of an end user (for example, a credit limit), in XML format. SAML data may be inserted into a SOAP message using the WS-Security framework. SAML does not itself provide authentication, but it can express information about an act of authentication or authorization that has occurred in the past; for example, “User X authenticated using a password at time Y.” When an entity is authorized based on the fact that it was previously authorized by another system, this is called “portable trust.” SAML is also important for addressing the challenge of multi-hop SOAP messages, because separate authentication to each Web Service is often out of the question. By authenticating once, being authorized, and effectively reusing that authorization for subsequent Web Services, single sign-on for Web Services can be achieved.

Note that the information in a SAML assertion may not indicate the end user's identity. The user may have authenticated using a username and password, and the administrator of the Web site may have no idea of the user's actual identity. The assertion may simply be an indication that the user presented credentials and was authenticated and authorized. SAML allows information to be placed into a SOAP message to say “this person was authorized according to a certain security policy at a certain time.” If the recipient of this SOAP message trusts the issuer of the SAML data, the end user can also be authorized for the Web Service. This SAML data is known as an “assertion” because the issuer is asserting information about the end user. The concept of security assertions existed before SAML, and is already widely used in existing software.
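For a feel of what such an assertion can look like, here is a deliberately bare-bones, SAML 1.1-style authentication statement built as a string. The element names and the password authentication method URI follow the SAML 1.x assertion schema as best I recall; treat it as an illustration only, since a real implementation would use a SAML toolkit (OpenSAML, for example) and would include the issuer, an assertion ID, validity conditions, and a signature.

```java
public class SamlAssertionSketch {

    // Illustrative only: states that the given subject authenticated with a
    // password at the given instant. Issuer, ID, conditions, and the XML
    // Signature that would normally protect the assertion are omitted.
    public static String passwordAuthenticationAssertion(String subject, String instant) {
        return
            "<saml:Assertion xmlns:saml='urn:oasis:names:tc:SAML:1.0:assertion'\n" +
            "                MajorVersion='1' MinorVersion='1'>\n" +
            "  <saml:AuthenticationStatement\n" +
            "      AuthenticationMethod='urn:oasis:names:tc:SAML:1.0:am:password'\n" +
            "      AuthenticationInstant='" + instant + "'>\n" +
            "    <saml:Subject>\n" +
            "      <saml:NameIdentifier>" + subject + "</saml:NameIdentifier>\n" +
            "    </saml:Subject>\n" +
            "  </saml:AuthenticationStatement>\n" +
            "</saml:Assertion>";
    }
}
```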

The eXtensible Access Control Markup Language (XACML) is designed to express access control rules in XML format. Although the two technologies are not explicitly linked, XACML may be used in conjunction with SAML: an authorization decision expressed in a SAML assertion may have been based on rules expressed in XACML.

Microsoft’s Passport technology takes a different approach to single sign-on. The user authenticates to the Passport infrastructure, either directly through www.passport.com or through an affiliate site that makes use of functionality provided by passport.com. Once the user is authenticated and authorized by Passport, their authentication status is also available to other Web Services that use Passport. Like SAML, this provides single sign-on. However, the model is different, relying on a central point of authentication rather than SAML's architecture, where authentication happens at an individual Web Service. Because it is implemented at the site of the Web Service itself, SAML authentication and authorization information may be based on role-based security. Role-based security means that access to resources is based on the user's organizational role; for example, in a medical setting doctors may have access to certain information while nurses have access to different information.

In this article (drawn from Web Services Security), I have tried to give an overview of the different specifications that can be used to achieve Web Services security. For a detailed understanding, I suggest getting a good book like "Web Services Security", to which I owe these excerpts, and gaining a clear understanding of implementing and using these specifications for securing the Web Services you develop. And thanks to Mark O'Neil for his inputs and for allowing me to use some material from his book.

Suggested Reading

WS-Security Specification


Web Services Standards & Specifications

Web Services Security-by IBM


Implementing Service Firewall Pattern



Suggested Video Tutorial

Secure and Reliable Web Services-by InfoQ

SAML-An Overview

Web Services Attacks & Defense Strategies

Thursday, February 14, 2008

Different Design Patterns for Web Services-An Overview


Web Services provide an important building block for integrating disparate computing platforms and, indirectly, a mechanism to integrate global value chains. You can build Web Services after a system has been deployed, making them similar in many ways to today's EAI software, but you can also build them along with new software as the open Application Programming Interface (API) to the application.

Advantages of using Web Services
Using Web Services, you can build an API in a language-neutral and platform-neutral format; programmers can access data from one system and quickly move it to another through the Web Service. There are several strengths to this approach:

  • Programmers can write the data-transfer programs in any language or platform with which they are comfortable.

  • The source and target systems can control the requests and updates of data in such a way that they do not interfere with a running system.

Consider the number of ways that a simple problem, such as notifying interested parties of a change to an object's state, is solved on a platform such as Java 2 Standard Edition. Some developers may use an intermediate file to track changes, with interested parties reading the file to find out when and how an object changed. Other developers may construct a point-to-point notification system, or even a set of one-to-many Publish/Subscribe patterns. Some developers may have one naming convention for adding and removing listeners; other developers may have no naming convention at all for the same operations. These are some of the areas where Web Services can be highly useful.
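For reference, here is a minimal sketch of the kind of hand-rolled listener mechanism described above; the Order class, the OrderListener interface, and the add/remove naming convention are all invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface; each team tends to invent its own variant.
interface OrderListener {
    void orderChanged(Order order);
}

class Order {
    private final List<OrderListener> listeners = new ArrayList<OrderListener>();
    private String status = "NEW";

    // One team's naming convention for registering interested parties.
    public void addOrderListener(OrderListener listener) {
        listeners.add(listener);
    }

    public void removeOrderListener(OrderListener listener) {
        listeners.remove(listener);
    }

    public void setStatus(String status) {
        this.status = status;
        // Notify every registered listener of the state change.
        for (OrderListener listener : listeners) {
            listener.orderChanged(this);
        }
    }

    public String getStatus() {
        return status;
    }
}
```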

Using Design Patterns
Patterns can be applied to any portion of the software lifecycle, which usually involves gathering requirements, creating the architecture, designing the software, and implementing it. Thousands of software design patterns document the common problems encountered by software designers and generic solutions for those problems. For example, an architect may give a system structure that identifies a point in the system where an object change drives listeners to make changes in their own state, or where the change kicks off a business process. In these cases, a designer can look and determine that the Publish/Subscribe or the Observer pattern can fulfill the requirements and constraints the architect put on the design. Once identification of patterns is complete, the generic structure given in a pattern drives the design of the particular system structure.

Web Services Design Patterns
The following patterns look at how Web Services implement the service-oriented architecture, how service implementations interact with the Web Service environment, and how to locate and use Web Services:
Service-Oriented Architecture: The Web Services environment is an architecture implementation known as the service-oriented architecture. There are several different implementations of the service-oriented architecture, with Web Services having the most penetration in the industry to date. Implementations of service-oriented architectures stress two attributes: implementation transparency and location transparency. Implementation transparency requires a common system structure that applies equally to all possible underlying service implementations and a neutral mechanism for describing services. Location transparency requires the use of agnostic interfaces.
Architecture Adapter: This pattern expands on the GoF Adapter pattern. Whereas the GoF Adapter pattern resides in object-oriented programming as a way to adapt an exposed interface from one component to the interface expected by another component, the Architecture Adapter pattern is responsible for allowing two completely separate architectures to interoperate.
Service Directory: In statically bound systems, and even in many dynamic systems, companies assume that their choices for the purchaser of software are the right choices. The Web Service paradigm challenges this tradition. Instead, by creating detailed metadata about a service, a service user should be able to locate your service and use it without application modification. The metadata in a service-oriented architecture includes its interface, location for binding, mechanism for communication, and information about the business that created the service. This pattern goes into depth on the Service Directory patterns that are inherent in the leading service architectures and that you will encounter in Web Services.
Business Object: This pattern discusses the typical structure and contents of a single business object. Although the frequency that you will use a single business object for deployment is low, there are substantial lessons you can learn from the exercise of deploying a business object. As with the first three patterns, this pattern is heavy in discussion around the Web Service environment and lessons you can learn from deploying relatively simple objects.
Business Object Collection: In business, you will rarely find business objects that are not collected. Like the business object itself, handling collections with Web Services yields substantial instructional substance as you learn more about the Web Service environment.
Business Process (Composition): Business systems today revolve more around business processes than around supporting business objects. A business process does not necessarily correlate to a single business object but is more abstract in nature. This pattern looks at business processes and lays a general framework for exposing them as Web Services. The business process is also a form of composition. To achieve a business process, multiple business objects and, often, other business processes and activities must run.
Asynchronous Business Process: A world where all business processes are synchronous would be a fine world for programmers to live in. Unfortunately, most important business processes are not synchronous. Even the most basic business processes, such as fulfilling a book order, run into asynchronous complexities. In introducing the Asynchronous Business Process pattern, you will find many similarities to the relationship between business objects and business object collections.
Event Monitor: Often, the burden of determining when events occur in a service lies with the client. There are a variety of reasons for this, such as the service not having a reasonable publish/subscribe interface or the client desiring control of the event determination. This is a common, and relatively simple, design pattern to implement that has well-established roots throughout software history.
Observer: Rather than leaving a client to determine when data changed on a server, it is often more efficient to have the server component tell interested clients when data changes. This is especially true when the server component has a low frequency of updates compared to the frequency that clients will want to check. The Observer pattern formalizes the relationship between one or more clients and a Web Service that contains interesting state. The Web Service delivers events to interested clients when an interesting change occurs. The Gang of Four documented the Observer pattern. This implementation is similar to the original documentation of the pattern, with necessary information about Web Services.
Publish/Subscribe: The Publish/Subscribe pattern [Buschmann] is a heavily used pattern in EAI software as well as in many distributed programming paradigms. The Publish/Subscribe pattern is interesting in the context of the definition of Web Services as application components. Using a topic-based mechanism common in very loosely coupled architectures, you create a stand-alone event service that is, in effect, an application component. The event service forwards published events to subscribers without awareness of the application components that use the event service.
Physical Tiers: Throughout the book and the sample implementations in the chapters, you will use a simple Java-based deployment mechanism built into Apache Axis. Therefore, your components live entirely within the process space that Apache Axis uses. This is not an optimal model for enterprise applications. The model discourages runtime reuse and creates a larger footprint than is necessary. Further, the event patterns produced some interesting challenges for a Web Service environment. A client interested in events from a Web Service often exists in its own process. This pattern discusses Web Service implementations that must, and often should, communicate to other processes for their implementation behavior.
Faux Implementation: One of the most fascinating aspects of the Internet is the ability of someone or something to pretend to be something they are not and actually get away with it. As long as the interface and the behavior of a service implementation are what others expect, there is no way to tell what drives the behavior of the service implementation. The Observer and Publish/Subscribe patterns require clients to implement a Web Service to receive event publications. The Faux Implementation pattern shows that as long as the behavior fulfills the contract, there is no reason you have to implement a service with traditional mechanisms.
Service Factory: Class factories are common in Java programming. A class factory gives a mechanism to bind to a class implementation at runtime rather than at compile time. The same capability is possible with service implementations. For example, there is no reason that a company must use a single package shipper for all shipments. Instead, the Service Factory illustrates how your application can determine which service to use at runtime (a small sketch follows this list).
Data Transfer Object: The Data Transfer Object pattern originated with Java 2 Enterprise Edition (J2EE) patterns. When you move from a single process application to a distributed application, calls between participants in the distributed architecture become more expensive in terms of performance. By giving clients mechanisms to get groups of commonly accessed data in single operations, you can streamline clients and lower the number of accesses necessary to your Web Service.
Partial Population: The Data Transfer Object pattern passes fully populated data structures between programs. This is a great paradigm but creates a proliferation of data structures and relegates the service implementation to determining what the most likely groups of accessed data will be. Partial population takes a different approach to data transfers; it allows clients to tell the server what parts of a data structure to populate. In this way, you can lighten the burden on the communication mechanism as well as the query in the server object. This technique is especially useful for services that contain complex, nested data structures (not something you will typically find in a Web Service environment).
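As promised above, here is a minimal sketch of the Service Factory idea in plain Java. The Shipper interface, its implementations, and the lookup keys are invented for the example; in a Web Services setting the factory would more likely consult a service directory and hand back a dynamically bound service proxy.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service interface shared by all package shippers.
interface Shipper {
    String ship(String orderId);
}

class FastShipper implements Shipper {
    public String ship(String orderId) { return "FAST-" + orderId; }
}

class CheapShipper implements Shipper {
    public String ship(String orderId) { return "CHEAP-" + orderId; }
}

// The factory decides at runtime which service implementation to hand back,
// so calling code never binds to a concrete shipper at compile time.
class ShipperFactory {
    private static final Map<String, Shipper> SHIPPERS = new HashMap<String, Shipper>();
    static {
        SHIPPERS.put("fast", new FastShipper());
        SHIPPERS.put("cheap", new CheapShipper());
    }

    public static Shipper lookup(String policy) {
        Shipper shipper = SHIPPERS.get(policy);
        if (shipper == null) {
            throw new IllegalArgumentException("No shipper registered for " + policy);
        }
        return shipper;
    }
}
```

A client simply asks for ShipperFactory.lookup("cheap") and then works purely against the Shipper interface, so swapping in a new shipper requires no change to the calling code.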
Suggested Reading

Patterns for Service Oriented Architecture

Web Services Integration Patterns-Part 1

Web Services Integration Patterns-Part 2


Enterprise Integration Patterns


Suggested Video Tutorial

Developing SOA applications

Real World Web Services

Web Services Middleware


Web Services Overview


That's all for now!!

Tuesday, February 5, 2008

Performance Management of Java Applications


Today I would like to discuss performance management of Java applications and handling memory leaks in Java. Having recently spent some unnerving moments myself handling memory leak (OutOfMemoryError) issues in an enterprise application already in production in a clustered environment, I found myself in long meetings and discussions with the customer and with people higher up in the company hierarchy about fine-tuning the application that was generating much of their revenue and profit, and as a Senior Consultant I was looked upon as the source of guidance and technical tips to achieve that. That is when I realized that the old paradigm that with Java we need not worry about allocating and freeing memory for objects does not always hold true. I am not going to walk you through the steps I followed or the code I wrote (better you learn that yourself), but rather give an overview of the tools you can use to manage and fine-tune Java applications. To start with, let us understand the term garbage collection in Java. In the process of putting together a proof of concept for the customer, and as part of my R&D, I came across the tools mentioned below, which will certainly help Java developers in fine-tuning their applications; my personal favorite is YourKit Java Profiler.

Garbage Collection in Java
In Java, you create objects, and the JVM takes care of removing them when they are no longer needed by the application, through a mechanism known as garbage collection. The job of the garbage collector is to find objects that are no longer needed by an application and to remove them when they can no longer be accessed or referenced. The garbage collector starts at the root nodes, classes that persist throughout the life of a Java application, and sweeps through all of the nodes that are referenced. As it traverses the nodes, it keeps track of which objects are actively being referenced. Any objects that are no longer being referenced are then eligible to be garbage collected. The memory resources used by these objects can be returned to the Java virtual machine (JVM) when the objects are deleted. And how best can you observe and tune all of this in a running application? Below is a brief synopsis of the tools you can use in Java for performance tuning.
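Before looking at the tools, here is a minimal sketch of how an object can stay ineligible for collection even though the application has finished with it: the static list below is reachable from a GC root for the life of the class, so every Session it holds remains reachable too, and the heap grows until an OutOfMemoryError. The class and method names are invented for the illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class SessionRegistry {

    // A static collection is reachable from a GC root for the life of the class,
    // so everything it references stays reachable too.
    private static final List<Session> ALL_SESSIONS = new ArrayList<Session>();

    public static Session open(String user) {
        Session session = new Session(user);
        ALL_SESSIONS.add(session);   // the "leak": nothing ever removes the entry
        return session;
    }

    public static void close(Session session) {
        // Without this removal, closed sessions can never be garbage collected.
        ALL_SESSIONS.remove(session);
    }

    static class Session {
        private final String user;

        Session(String user) { this.user = user; }

        String getUser() { return user; }
    }
}
```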

Tools you can use for handling memory leaks and performance management

VisualGC - Visual Garbage Collection Monitoring Tool
The visualgc tool attaches to an instrumented HotSpot JVM and collects and graphically displays garbage collection, class loader, and HotSpot compiler performance data.

JSTAT

The jstat tool displays performance statistics for an instrumented HotSpot Java virtual machine (JVM). The target JVM is identified by its virtual machine identifier (vmid).

Note: This utility is unsupported and may not be available in future versions of the J2SE SDK. It is not currently available on the Windows 98 and Windows ME platforms.

JMap
jmap prints shared object memory maps or heap memory details of a given process, core file, or remote debug server. This utility is unsupported and may or may not be available in future versions of the J2SE SDK.

Hat

HAT is a program that analyzes a heap dump file for Java programs. This tool can help a developer to debug and analyze the objects in a running Java program. It is particularly useful when debugging unintentional object retention. Starting with Java SE 6, HAT has been replaced by jhat, which is included with the standard Sun distribution; HAT is no longer maintained as a stand-alone tool. The Heap Analysis Tool (HAT) helps to debug unnecessary object retention (sometimes called a "memory leak") by providing a convenient means to browse the object topology in a heap snapshot generated by the Java VM. HAT reads an hprof file, then sets itself up as a web server, allowing you to run queries against the heap dump contained within the hprof file. For further information, read this article.
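If you want to produce an hprof snapshot for HAT/jhat from inside the application itself, Sun's Java 6 JVM exposes a JVM-specific (and, like the tools above, unsupported) diagnostic MXBean that can dump the heap programmatically. The sketch below assumes a Sun JDK 6 runtime; the output file name is arbitrary.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {

    public static void dumpHeap(String fileName) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Sun-JDK-specific diagnostic bean; not part of the standard Java API.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Second argument = true dumps only objects that are still reachable (live).
        bean.dumpHeap(fileName, true);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("heap.hprof");   // then analyze the snapshot with jhat
    }
}
```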

JProbe

JProbe is an enterprise-class Java profiler providing intelligent diagnostics on memory usage, performance, and test coverage, allowing developers to quickly pinpoint and repair the root cause of application code performance and stability problems that obstruct component and integration integrity. With JProbe's intuitive, unified UI, it is easier to navigate and configure all of the JProbe analysis tools. JProbe also provides a powerful filtering mechanism for controlling the data display, including nine different metrics for sorting and coloring data for clutter-free, easier viewing.

YourKit Java Profiler

The best part about YourKit Java Profiler is that it integrates seamlessly with different IDEs such as Eclipse, IntelliJ IDEA, etc., and provides full support for Java 5 and Java 6. For the list of benefits and features this great tool provides, read this.

And that's all for now; do share your thoughts, and mention any other tools that can be used for handling memory leaks in Java.

Rational Purify
IBM Rational Purify is a runtime analysis solution designed to help developers write more reliable code. Reliability is ensured via two crucial functions: memory corruption detection and memory leak detection. For further understanding about using Rational Purify, read this article.

Suggested Video Tutorial

Maintaining Java Apps in Production Environment- by Alexandre Rafalovitch at InfoQ