Some time ago I did some CXF performance measurements; see How fast is CXF? - Measuring CXF performance on http, https and jms.

For CXF 3.0.0 I made some massive changes to the JMS transport. So I thought it was a good time to compare the JMS performance of CXF 2.x and 3. My goal was to at least reach the original performance. As my test system is different now, I am also measuring the CXF 2.x performance again to have a fair comparison.

Test System

Dell Precision with Intel Core i7, 16 GB RAM, 256 GB SSD, running Ubuntu Linux 13.10.

Test Setup

I am using a new version of my performance-tests project on github.

The test runs on one machine using one ActiveMQ server, one test server and one test client.

The test calls the CXF example service CustomerService.

The following call types are supported:

oneway

    customerService.updateCustomer(customer);

    Asynchronous one way call; sends one SOAP message to the server.

requestReply

    List<Customer> customers = customerService.getCustomersByName("test2");

    Synchronous request/reply; sends one SOAP message to the server and waits for the reply.

requestReplyAsync

    Future<GetCustomersByNameResponse> resp = customerService.getCustomersByNameAsync("test2");
    GetCustomersByNameResponse res1 = resp.get();

    Asynchronous request/reply; sends one SOAP message to the server and returns without waiting. In this test we wait directly after the call for simplicity.

The requests above are sent using an executor with a fixed number of threads.

For each run you can specify the total number of messages, the number of threads and the call type.
The configured number of requests is first sent for warmup and then again for the measured run.
To run the test with CXF 3.0.0-SNAPSHOT you have to compile CXF from source.
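A minimal sketch of such a dispatch loop (not the actual test code; CustomerService and Customer stand for the classes generated from the example service):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LoadRunner {

    public static void run(final CustomerService customerService, int messages, int threads)
            throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < messages; i++) {
            executor.execute(new Runnable() {
                public void run() {
                    // oneway case; the other call types are dispatched the same way
                    customerService.updateCustomer(new Customer());
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.MINUTES);
    }
}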

Test execution

1. Run a standalone ActiveMQ 5.9.0 server with the activemq.xml from the github sources above.

2. Start the jms server in a new console from the project source using:

3. Start the jms client using:
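The exact commands are described in the project's Readme. Assuming exec profiles named server and client (hypothetical names), the calls look roughly like this:

mvn exec:java -Pserver
mvn exec:java -Pclient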

Test results

The test is executed with several combinations of the parameters. Using the pom property cxf.version we also switch between cxf 2.7.10 and cxf 3.0.0-SNAPSHOT.

CXF 2.7.10 (throughput in messages per second)

call type \ threads |     1 |    20 |    40
oneway              | 10541 | 12143 | 11737
requestReply        |   610 |   661 |   691
requestReplyAsync   |  1561 |  3448 |  3859

CXF 3.0.0-SNAPSHOT (throughput in messages per second)

call type \ threads |     1 |    20 |    40
oneway              | 11170 | 11632 | 12010
requestReply        |  1524 |  3248 |  3671
requestReplyAsync   |  1590 |  3569 |  3909

Observations

The first interesting fact here is that one way messaging does not benefit from more threads. One thread already achieves the same performance as 40 threads. This is quite intuitive, as ActiveMQ needs to synchronize the calls on the one thread holding the JMS connection. On the other hand, using more processes also does not seem to improve the throughput, so we seem to be at the limit of ActiveMQ here, which is good.

For request/reply the performance scales with the number of threads. This can be explained by the fact that while one thread waits for its response, the other threads can use the time to send more requests.

One really astonishing thing is that CXF 2.7.10 is really slow for synchronous request/reply. This is because it uses consumer.receive in this case, while it uses a JMS message listener for async calls, and the message listener performs much better than consumer.receive. For CXF 2.7.10 this means we can speed up our calls by using the asynchronous interface, even if it is less convenient.

The most important observation is that CXF 3 performs a lot better in the synchronous request/reply case: it is as fast as the asynchronous case. The reason is that we now also use a message listener for synchronous calls, as long as the correlation id is based on the conduit id prefix. As this is the default, this case is vastly improved. CXF 3 is up to 5 times faster than CXF 2.7.10.

There is still one downside. If you use the message id as correlation id, or set a user correlation id on the cxf message, then CXF 3 will fall back to consumer.receive and be as slow as CXF 2 again.

Apache Karaf is an open source OSGi server developed at the Apache Software Foundation. It provides very convenient management functionality on top of existing OSGi frameworks and is used in several open source and commercial solutions.

As so often, convenience and security do not go well together. In the case of Karaf there is a known security hole in default installations that was introduced to make the initial experience with Karaf very convenient: Karaf by default starts an ssh server, and it also delivers a bin/client command that is mainly meant to connect to the local Karaf server without a password.

Is your karaf server vulnerable?

Some simple steps to check whether your Karaf installation is open:

  • Check "etc/org.apache.karaf.shell.cfg" for the attribute sshPort and note this port number. By default it is 8101.
  • Run "ssh -p 8101 karaf@localhost". As expected, it will ask for a password. This is also dangerous if you have not changed the default password, but that risk is quite obvious.
  • Now simply run "bin/client -a 8101". You will get a shell without supplying a password. If this works, your server is vulnerable.

How does it work?

The client command has a built-in ssh private key which is used when connecting to Karaf. The config file "etc/keys.properties" in Karaf defines the public keys that are allowed to connect.

Why is this dangerous?

The private key inside the client command is fixed and publicly available (see karaf.key). As the mechanism also works for remote connections ("bin/client -a 8101 -h hostname"), anyone with access to your server's IP can remotely control your Karaf server. As the Karaf shell also allows executing external programs (exec command), this even opens further access to your machine.

How to secure your server?

Simply remove the public key of the karaf user from "etc/keys.properties". Unfortunately this will stop the bin/client command from working.
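The entry to remove looks roughly like this (key abbreviated; the exact line depends on your Karaf version):

# etc/keys.properties
# comment out or delete this line to disable the built-in client key:
# karaf=AAAAB3NzaC1kc3M...,admin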

Also make sure you change the password of the karaf user in "etc/users.properties".

Nicely timed as a Christmas present, Apache Karaf 3.0.0 was released on the 24th of December. As a user of Karaf 2.x you might ask yourself why you should switch to the new major version. Here are 10 reasons why the switch is worth the effort.

External dependencies are cached locally now

One of the coolest features of Karaf is that it can load features and bundles from a maven repository. The drawback in Karaf 2.x was that external dependencies that are not already in the system dir or the local maven repo were always loaded from the external repo. Karaf 3 now uses real maven artifact resolution, so it automatically caches downloaded artifacts in the local maven repo and artifacts only have to be downloaded the first time.

Delay shell start till all bundles are up and running

A typical problem in Karaf 2.x, and also in Karaf 3 with default settings, is that the shell comes up before all bundles are started. So if you enter a command you might get an error that the command is unknown, simply because the respective bundle is not yet loaded. In Karaf 3 you can set the property "karaf.delay.console=true" (see below). Karaf will then show a progress bar on startup and only start the console when all bundles are up and running. If you are in a hurry you can still hit enter to start the shell earlier.
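The setting goes into etc/config.properties:

# etc/config.properties
karaf.delay.console=true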

Create kar archives from existing features

If you need some features for offline deployment, kar files are a nice alternative to setting up a maven repo or copying everything to the system dir. Most features are not available as kar files though. In Karaf 3 the kar:create command allows you to create a kar file from any installed feature repository. Kar files can now also be defined as pure repositories, so they can be installed without installing all contained features.

Example:

feature:repo-add camel 2.12.2
kar:create camel-2.12.2

A kar file with all camel features will be created below data/kar. You can also select which features to include.

More consistent commands

In Karaf 2.x the command naming was not very consistent. Karaf 3 uses the common scheme <subject>:<command> or <subject>:<secondary-subject>-<command>. For example, adding feature repos now is:

feature:repo-add <url or short name> ?<version>

Instead of features:chooseurl and features:addurl.

The various dev commands have moved to the subjects they affect, for example bundle:watch instead of dev:watch, or system:property instead of dev:system-property.

JDBC commands

Karaf 3 allows you to interact with JDBC databases directly from the shell. Examples are creating a datasource, executing a SQL command and showing the results of a SQL query. For more details see the blog article from JB: New enterprise JDBC feature.
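A quick session might look like this (treat the exact option flags as an assumption):

jdbc:create -t h2 testdb
jdbc:execute testdb "create table person (name varchar(80))"
jdbc:query testdb "select * from person"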

JMS commands

Similar to JDBC, Karaf 3 now contains commands for JMS interactions from the shell. You can create connection factories, send and consume messages. See the blog article from JB: New enterprise JMS feature.
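Again a rough sketch of a session (connection factory name, queue name and options are assumptions):

jms:create -t activemq -u tcp://localhost:61616 mycf
jms:send mycf myqueue "Hello Karaf"
jms:consume mycf myqueue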

Role based access control for commands and services

In Karaf 2.x every user with shell access can use every command, and OSGi services are not protected at all. Karaf 3 adds role based access control for commands and services. So, for example, you can define a group of users that can only list bundles and do other non-admin tasks by simply changing some configuration files (see the sketch below). Similarly, you can protect any OSGi service so it can only be called from a process with a successful jaas login and the correct roles. More details about this feature can be found at http://coderthoughts.blogspot.de/2013/10/role-based-access-control-for-karaf.html.
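For commands the ACLs live in etc/org.apache.karaf.command.acl.<scope>.cfg files; an illustrative (not verbatim) bundle ACL could look like this:

# etc/org.apache.karaf.command.acl.bundle.cfg
# only admins may install or uninstall bundles, managers may start and stop them
install = admin
uninstall = admin
start = manager
stop = manager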

Diagnostics for blueprint and spring dm

In Karaf 2.x it was difficult to diagnose problems in bundles using blueprint or spring dm. Karaf 3 now has the simple bundle:diag command that lists diagnostics for all bundles that did not start. For example you can see that a blueprint bundle is waiting for a namespace or that a blueprint file has a syntax error. Simply try this the next time your bundles do not work as expected.

Features for persistence frameworks

Karaf 3 now has features for openjpa and hibernate. So along with the already present jpa and jta features this makes it easy to install everything you need to do jpa based persistence.

Features for CDI and EJB

The cdi feature installs pax cdi. This allows using the full set of CDI annotations, including portable extensions, in Apache Karaf. There is also a feature to install OpenEJB for full EJB support on Apache Karaf.

This only lists some of the most notable features of Karaf 3. There is a lot more to discover. Take your time and dig around the features and commands.

In this talk from WJAX 2013 I show best practices for OSGi development in a practical example based around an online voting application.

The UI allows voting on a topic and shows the existing votes in a diagram. It is done in JavaScript and HTML using jQuery and google graph. Additionally, votes can be sent using twitter, irc and karaf commands. The image below shows how to vote for the topic camel using your twitter status.

The architecture of the example follows the typical separation of model, service layer and front end.


In the talk I explain the difficulties people typically face with OSGi and how to solve them using karaf, maven bundle plugin and blueprint.

By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows you to define services in one container and use them in another, even across machine boundaries.

For this tutorial we use the DOSGi subproject of CXF, which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 of the OSGi 4.2 Enterprise Specification).

Example on github

Introducing the example

Following the hands-on nature of this tutorial series, we start with an example that can be tried in a few minutes and explain the details later.

Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service in container A, the model and UI in container B, and install the DOSGi runtime on both containers.


As DOSGi should not be active for all services on a system, the spec defines the service property "osgi.remote.interfaces" that triggers whether DOSGi should process a service. It lists the interface names this service should export remotely; setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets the property, so it is exported with defaults.
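For illustration, this is roughly how a service opts in when registered programmatically (the tutorial itself publishes the service via blueprint; TaskServiceImpl is a hypothetical implementation class):

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        // tell DOSGi to export all interfaces this service implements
        props.put("osgi.remote.interfaces", "*");
        context.registerService(TaskService.class.getName(), new TaskServiceImpl(), props);
    }

    public void stop(BundleContext context) {
    }
}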

Installing the service

To keep things simple we will install container A and B on the same system.

  • Download Apache Karaf 2.2.10
  • Unpack into folder container_a
  • Copy etc/jre.properties.cxf into etc/jre.properties
  • Start bin/karaf
  • config:propset -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
  • config:propset -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
  • features:chooseurl cxf-dosgi 1.4.0
  • features:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
  • features:addurl mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
  • features:install example-tasklist-persistence

After these commands the tasklist persistence service should be running and be published on zookeeper.

You can check the wsdl of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl. By starting the zookeeper client zkCli.sh from a zookeeper distribution you can optionally check that there is a node for the service below the osgi path.

Installing the UI

  • Unpack into folder container_b
  • Copy etc/jre.properties.cxf into etc/jre.properties
  • Start bin/karaf
  • config:propset -p org.ops4j.pax.web org.osgi.service.http.port 8182
  • config:propset -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
  • features:chooseurl cxf-dosgi 1.4.0
  • features:install cxf-dosgi-discovery-distributed
  • features:addurl mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
  • features:install example-tasklist-ui

The tasklist client UI should be in status Active/Created and the servlet should be available at http://localhost:8182/tasklist. If the UI bundle stays in status GracePeriod, then DOSGi did not provide a local proxy for the persistence service.

How does it work?


The Remote Service Admin spec defines an extension of the OSGi service model: using special properties when publishing OSGi services, you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed in the local container but only processes those that have the "osgi.remote.interfaces" property. If the property is found, the service is exported with the named interfaces, or with all interfaces it implements. The way the export works can be fine-tuned using the CXF DOSGi configuration options.

By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name; the servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the IP address of the host and port 8181. All these options can be changed using a config admin configuration (see the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface carries the JAX-WS @WebService annotation, the defaults are the JAX-WS frontend and JAXB databinding.
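So simply annotating the interface switches frontend and databinding; a minimal sketch (the method shown is illustrative):

import javax.jws.WebService;

// with this annotation CXF DOSGi exports the service using JAX-WS and JAXB
@WebService
public interface TaskService {
    String getTaskName(String id);
}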

The service information is then also propagated using DOSGi discovery. In this example we use the Zookeeper discovery implementation, so the service metadata is written to a zookeeper server.

Container B monitors the local container for needed services. It then checks whether a needed service is available in the discovery implementation (the zookeeper server in our case). For each service it finds, it creates a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.

On Thursday I gave a talk about Apache Camel at W-JAX in Munich. As at the last conferences, there was a lot of interest in Camel and the room was really full. You can find the slides "Integration ganz einfach mit Apache Camel" here and the sources for the examples on github.

On Friday I joined the Eclipse 4 RCP workshop by Kai Tödter and learned a lot about the new Eclipse. At last, Eclipse RCP programming is becoming easier.

I just gave my ApacheCon talk about OSGi best practices. It was the last slot but the room was still almost full. In general the OSGi track had a lot of listeners and there were many talks that involved Apache Karaf. I think that is a nice sign of growing adoption of OSGi and Karaf.

You can find the Slides at google docs.

The demo application can be found inside my Karaf tutorial code at github.

A practical Camel example that polls from a database table and sends the contents as XML to a JMS queue. The route uses a JTA transaction to synchronize the DB and JMS transactions, and an error case shows how you can handle problems.

Route and Overview


The route starts with a jpa endpoint. It is configured with the fully qualified name of a JPA @Entity. From this entity Camel knows which table to poll and how to read and remove each row. The jpa endpoint polls the table and creates a Person object for each row it finds, then calls the next step in the route with the Person object as body. The jpa component also needs to be set up separately, as it needs an EntityManagerFactory.

The onException clause makes the route do up to 3 retries with backoff time increasing by factor 2 each time. If it still fails the message is passed to a file in the error directory.

The next step, transacted(), marks the route as transactional. It requires that a TransactedPolicy is set up in the camel context and makes sure all steps in the route participate in a transaction: if an error occurs all actions are rolled back, and on success all are committed together.

The marshal(df) step converts the Person object to xml using JAXB. It references a dataformat df that sets up the JAXBContext. For brevity this setup is not shown here.

The ExceptionDecider bean throws an exception if the name of the person is "error". This allows us to test the error handling later.

The last step, to("jms:person"), sends the xml representation of the person to a jms queue. It requires that a JmsComponent named jms is set up in the camel context.

The second route simply listens on the person queue and reads and displays the content. In a production system this part would typically be in another module. Both routes together look roughly like the sketch below.
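Putting the described steps together in the Java DSL (a sketch, not the project's actual code; the package name and the internals of ExceptionDecider are assumptions):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.converter.jaxb.JaxbDataFormat;

public class Jpa2JmsRoute extends RouteBuilder {

    @Override
    public void configure() {
        // JAXB dataformat pointing at the package of the annotated classes (package assumed)
        JaxbDataFormat df = new JaxbDataFormat("net.lr.jpa2jms.model");

        // up to 3 redeliveries with the backoff doubling each time,
        // then hand the message to a file in the error directory
        onException(Exception.class)
            .maximumRedeliveries(3)
            .useExponentialBackOff()
            .backOffMultiplier(2)
            .handled(true)
            .to("file:error");

        from("jpa://net.lr.jpa2jms.model.Person") // fully qualified entity name (assumed)
            .transacted()
            .marshal(df)
            .bean(new ExceptionDecider())
            .to("jms:person");

        // second route: consume from the queue and show the content in the log
        from("jms:person")
            .to("log:personReceived");
    }

    // sketch of the bean described above: fail when the person is named "error"
    public static class ExceptionDecider {
        public String check(String xml) {
            if (xml.contains("error")) {
                throw new RuntimeException("Triggered error for test person");
            }
            return xml;
        }
    }
}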

Person as JPA Entity and JAXB class

The Person class acts as a JPA entity and as a JAXB-annotated class. This allows us to use it in the camel-jpa component as well as during marshalling. Keep in mind though that this would be rather bad practice in production, as it ties the DB model and the format of the JMS message together. For real integrations it is better to have separate beans for JPA and JAXB and to convert between them manually.
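A minimal sketch of such a dual-annotated class (id handling and field names assumed):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@Entity
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Person {

    @Id
    @GeneratedValue
    private Long id;
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}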

DataSource and ConnectionFactory setup

We use an XADataSource for Derby (see https://github.com/cschneider/Karaf-Tutorial/blob/master/db/datasource/datasource-derby.xml). As the default ConnectionFactory provided by ActiveMQ in Karaf is not XA-ready, we define the broker and ConnectionFactory by hand (see https://github.com/cschneider/Karaf-Tutorial/blob/master/cameljpa/jpa2jms/localhost-broker.xml). Together with the Karaf transaction feature these provide the basis for JTA transactions.

JPAComponent, JMSComponent and transaction setup

An important part of this example is using the jpa and jms components in a JTA transaction, which allows rolling back both in case of an error.
In the blueprint context we use (see the project source on github), the JmsComponent is set up with a ConnectionFactory referenced as an OSGi service.
The JpaComponent is set up with an EntityManagerFactory using the jpa:unit config from Aries JPA; see Apache Karaf Tutorial Part 6 - Database Access for how this works.
The TransactionManager provided by Aries transaction is referenced as an OSGi service, wrapped as a spring PlatformTransactionManager and injected into the JmsComponent and JpaComponent.

Running the Example

You can find the full example on github : JPA2JMS Example
Follow the Readme.txt to install the necessary Karaf features, bundles and configs.

Apart from this example we also install the dbexamplejpa. This allows us to use the person:add command defined there to populate the database table.
Open the Karaf console and type:
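The command takes the person's data as arguments; roughly (arguments assumed, see the dbexamplejpa readme):

person:add "Christian Schneider" @schneider_chris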

You should then see the person marshalled as XML in the log.

So what happened

We used the person:add command to add a row to the person table. Our route picks up this record and converts it to a Person object, then marshals it into xml and sends it to the jms queue person.
Our second route then picks up the jms message and shows the xml in the log.

Error handling

The route in the example contains a small bean that reacts on the name of the person object and throws an exception if the name is "error".
It also contains error handling, so in case of an exception the xml is forwarded to an error directory.

So you can type the following in the Karaf console:
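Essentially you add a person named error (arguments assumed, as above):

person:add error @error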

This time the log should not show the xml. Instead the message should appear as a file in the error directory below your karaf installation.

Summary

In this tutorial we mainly learned how to use the camel-jpa component to write to and poll from a database, and how to set up and use JTA transactions for solid error handling.

Back to Karaf Tutorials

Yesterday evening I did a talk about Apache Karaf and OSGi best practice together with Achim Nierbeck. Achim did the first part about OSGi basics and Apache Karaf and I did the second part about OSGi best practices.

One slide from the presentation about Karaf shows the big number of features that can be installed easily. While the Karaf download is only about 8 MB, you can transparently install additional features via maven that make it a full blown integration or enterprise application server.

OSGi best practices

In my part I showed how blueprint, OSGi Services and the config admin service can be used together to build a small example application consisting of the typical modules model, persistence and UI like shown below.

Except for the UI, the example was from my first Karaf tutorial. While in the tutorial I used a simple Servlet UI that merely displays the Task objects, I wanted to show a fancier UI for this talk. Since I met the makers of Vaadin at the last W-JAX conference I got interested in this simple but powerful framework, so I gave it a spin. I had only about two days to prepare for the talk, so I was not sure if I would be able to create a good UI with it. Fortunately it was really easy to use, and it took me only about a day to learn the basics and build a full CRUD UI for my Task example, complete with data binding to the persistence service.

One additional challenge was using vaadin in OSGi. The good thing is that it is already a bundle, so a WAB (Web application bundle) deployment of my UI would have worked. I wanted it to be pure OSGi though, so I searched a bit and found the vaadinbridge from Neil Bartlett. It allows you to simply create a vaadin Application and factory class in a normal bundle and publish it as a service. The bridge will then pick it up and publish it to the HttpService.

The end result looks like this:

So you have a table with the current tasks (or to-do items). You can add and delete tasks with the menu bar. When you select a task you can edit it in the form below. Any changes are directly sent to the service and updated in the UI.
The nice thing about vaadin is that it handles the complete client-server communication and databinding for you, so this whole UI takes only about 120 lines of code. See ExampleApplication on github.

The general idea of my part of the talk was to show how easy it is to create nice looking and architecturally sound applications using OSGi and Karaf. Many people still think OSGi will make your life harder for normal applications. I hope I could show that with the right practices and tools, OSGi can even be simpler and more fun than Servlet container or Java EE focused development.

I plan to add a more extensive tutorial about using Vaadin on OSGi to my Karaf Tutorial series soon, so stay tuned.

Presentation: ApacheKaraf.pdf

Source Code:

Vaadin UI: https://github.com/cschneider/Karaf-Tutorial/tree/master/vaadin

Tasklist Model and Persistence: https://github.com/cschneider/Karaf-Tutorial/tree/master/tasklist

Achim adapted another Vaadin OSGi example from Kai Tödter to Maven and Karaf: https://github.com/ANierbeck/osgi-vaadin-demo

After the talk at the last W-JAX I now had the opportunity to speak about Apache Camel at JAX as well. This time I had a bigger room, which was well filled with almost 200 listeners. This shows the great interest in Apache Camel. The presentation is attached below. This time I focused more on OSGi and Apache Karaf as a runtime environment. I also had only 20 slides and used a larger part of the time for live demos. The talk was filmed and should be available on the JAX website soon; I will post an update with the link.

After the scheduled end of the talk there was a free time block. Many of the listeners stayed to ask questions, and I also showed some more in-depth examples of Bean Integration and Pojo Messaging. As a résumé I can say that Apache Camel is very popular, and that especially developers and architects drive its adoption, while management still often bets on big commercial frameworks. Apache Karaf is perceived as a very interesting deployment environment. In most cases, however, there are difficulties with operations during production rollout, as Apache Karaf and OSGi are still not very widespread and thus represent an additional server landscape.

Presentation: Apache Camel JAX12.pdf

Examples: https://github.com/cschneider/camel-webinar/tree/master/part1

Today version 2.2.6 of Karaf was released. It incorporates more than 80 fixed jira issues.

One important usability improvement is the features:chooseurl command. It allows adding feature files for well known products like Apache Camel or Apache CXF in a much simpler way than the features:addurl command.

For example to install the current Apache Camel you just have to type:
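The version number below is just an example; use whatever camel release is current:

features:chooseurl camel 2.9.1
features:install camel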

Besides camel we currently already support activemq, cxf, jclouds, openejb and wicket.

In fact the command simply uses a config file etc/org.apache.karaf.features.repos.cfg that maps a product name to the feature file url and applies the given version number. So if some product you would like to use is missing, you can simply add it yourself. If it is interesting for others too, please create a jira issue so we can add it to the next distro.

Currently the chooseurl command has completion for the product name but not for the version number. We plan to add completion for the version number in one of the next Karaf releases by evaluating the versions in the maven repos.

Btw. for camel and cxf you still have to replace the etc/jre.properties with the etc/jre.properties.cxf file to change some package exports of the system bundle.

This Wednesday, on the 4th of April, I will give a talk about the open source integration framework Apache Camel at the Java User Group in Karlsruhe. I will start with an overview of Camel and give some insight into the Camel architecture. The main part of the talk will be live coding, showing how easy integration can be with the Camel DSL.

See the webpage of the JUG Karlsruhe for some more details: http://jug-karlsruhe.mixxt.de/networks/events/show_event.55045

Camel has many options for deployment. Given the freedom of choice I prefer to run Camel on Karaf, but the typical case at customers is that they have a certain app server and we have to fit in. In this case the platform was JBoss 5.1. Before Camel 2.8 this was quite complicated, as camel tried to scan for typeconverters on the classpath and that part failed because of the JBoss class loader. I used camel 2.8.4, so this was no issue, except for a little problem I will come back to later.

Packaging Camel integrations as a war and using camel-servlet.

The most suitable deployment option on JBoss is to package your integration in a war archive and install it, e.g. using the deploy folder. The camel-example-servlet-tomcat is the best starting point for this kind of project. It shows how to create wars using maven and how to start camel from a spring application context in a servlet environment. If you know the older camel servlet examples you will notice that the way camel is started has changed recently: in older versions you had to configure the spring context xml in the camel servlet, which is a rather uncommon way to start spring. The current example starts camel with the default spring ContextLoaderListener, which is the much better solution.

See https://svn.apache.org/repos/asf/camel/tags/camel-2.8.4/examples/camel-example-servlet-tomcat/src/main/webapp/WEB-INF/web.xml
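The relevant part of such a web.xml looks roughly like this (a sketch; see the linked file for the real version, the context file location is an assumption):

<web-app>
  <!-- location of the spring context file (path assumed) -->
  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/camel-context.xml</param-value>
  </context-param>
  <!-- starts the spring application context that contains the camel context -->
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
  <!-- exposes camel-servlet endpoints under /camel/* -->
  <servlet>
    <servlet-name>CamelServlet</servlet-name>
    <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>CamelServlet</servlet-name>
    <url-pattern>/camel/*</url-pattern>
  </servlet-mapping>
</web-app>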

Installing and using the ActiveMQ connection factory using jndi

The easiest way to install a connection factory for a camel integration is to define the connection factory in your spring context. This has some drawbacks though. One is that you then depend directly on ActiveMQ and cannot simply replace it with another broker. The other is that the developer has access to the password of the connection factory; you can extract that into a property file, but it is still not ideal. So the preferred way to find a connection factory in an app server environment is to look it up in jndi. In the spring context this is quite simple using the jee namespace:
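<!-- sketch: requires the spring jee namespace to be declared; the jndi-name
     must match where the factory is bound in JBoss (name assumed) -->
<jee:jndi-lookup id="connectionFactory" jndi-name="activemq/ConnectionFactory"/>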

A bigger problem is how to install the connection factory in JBoss so it is available in jndi. There are two ways to achieve this. One is to install it as a JEE resource adapter (see http://activemq.apache.org/jboss-integration.html). I don't like this solution too much, as it is quite complicated and requires the special activemq-ra.rar.

After a lot of searching on the net I found a nicer solution: a JBoss mbean descriptor (the jms-jboss-beans.xml used in the example below) that initializes an ActiveMQXAConnectionFactory and installs it in the jndi context. It obviously needs activemq-core-5.5.0.jar, which I simply installed in the lib dir of the default server. The drawback is of course that the jar is then on the classpath of all wars you install. So if some activemq specialist knows a better way to have it on the classpath only for the mbean config, I would really like to hear about it.

When I experimented with this setup I first used activemq-all-5.5.0.jar. The problem was that it contains an older camel version, so when I installed my camel war it was not able to start because of problems loading typeconverters from that jar. So remember to only use the core jar.

Example project

To make it easy to test this yourself I have put the code of a simple producer and consumer project on github.
To install, do the following steps:

  • download activemq 5.5.0, extract and start it
  • download jboss 5.1 and extract it
  • checkout or download https://github.com/cschneider/cameljbossha
  • build the example projects using mvn clean install
  • copy jms-jboss-beans.xml to the default/deploy folder
  • copy activemq-core-5.5.0.jar to the default/lib folder
  • copy the war files /consumer/target/consumer-1.0.0.war and /producer/target/producer-1.0.0.war to the default/deploy folder
  • start jboss

The producer offers a servlet where we can trigger a message. So open a browser and go to http://localhost:8080/producer-1.0.0/camel/tojms. The jboss log will show that the request is handled and a jms message is sent. The consumer will pick up the message and write the log entry "Message received from jms".

Summary

We have seen how to build war projects with Apache Camel and how the usage of the camel servlet has changed recently; this part is independent of JBoss. We have also seen that current Camel versions deploy on JBoss without any special tweaks, and we learned how to reference a connection factory in jndi and how to install it in JBoss.

The projects producer and consumer will be reused in my next post where I look into concepts for high availability with Camel and ActiveMQ.

CXF 2.6.0 will bring a lot of improvements for deployment in OSGi. Until now cxf was bundled as one OSGi bundle, either with all features or with a minimal feature set. Thanks to Dan Kulp, cxf is now delivered as individual bundles, so it can be installed with only the needed features. Besides the smaller size, in many use cases this also means fewer optional dependencies, which used to make installation difficult. Each bundle defines the imports it really needs, which makes it much easier to get the dependencies right. Of course the Karaf feature file will still be provided to make it easy to install CXF in Apache Karaf.

Based on the work of Dan I recently started to optimize the imports of the typical bundles most people will use from cxf. At the start we had many dependencies like spring, velocity, neethi, ... that I felt should not be needed and that made cxf quite big. By refactoring some of the modules I was able to slim these down to the bare minimum. The current code on trunk already reflects these changes.

If you want to try this yourself you can easily install the snapshot of cxf in karaf 2.2.5. As the feature file has not yet been changed, I uploaded a gist of the commands you need to execute. Remember to also use jre.properties.cxf for karaf to disable some default java apis so CXF can replace them with newer versions.

After this install, the karaf list -u command shows the installed cxf bundles.

This installation of CXF is ready for SOAP/HTTP and REST with JAX-WS and JAXB on the java side which reflects what most people will need.

To test the features I recommend installing the example from my Karaf Tutorial about CXF.

This tutorial shows how to access databases from OSGi applications running in Karaf and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally, JDBC and JPA examples show how to use such a DataSource from user code.

Installing the driver

The first step is to install the driver jar(s) for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo, so this is typically just one Karaf command. If the driver is available in maven but is not a bundle, we can usually use the wrap: protocol of Karaf to make it a bundle on the fly. If the driver is not in the repo at all, we have to install the file into the maven repo first.

For derby the following command will work (the derby version is illustrative):
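install -s mvn:org.apache.derby/derby/10.8.2.2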

See the db/datasource folder on github for installation instructions for db2, derby, h2, mysql and oracle.

Installing the datasource

In Java EE servers and servlet containers you typically use JNDI to install a DataSource at the server level, so the application can just refer to it and does not have to know the specific driver or database url. In OSGi, JNDI is replaced by OSGi services. So the best way to decouple your application from the database is to offer a DataSource as an OSGi service.

As we can deploy simple blueprint xml files in Karaf, this is really easy. We define a bean with the class of the specific datasource implementation and configure it. Then we publish that bean as an OSGi service with the interface javax.sql.DataSource. This works because Karaf uses dynamic imports when it deploys blueprint context files, so all classes are available.

For each database flavour you can find a suitable blueprint.xml in db/datasource; a derby example is sketched below.
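For derby such a blueprint looks roughly like this (a sketch along the lines of the datasource-derby.xml in the repo; database name and jndi name may differ):

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="dataSource" class="org.apache.derby.jdbc.EmbeddedDataSource">
        <property name="databaseName" value="test"/>
        <property name="createDatabase" value="create"/>
    </bean>
    <service ref="dataSource" interface="javax.sql.DataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="jdbc/derbyds"/>
        </service-properties>
    </service>
</blueprint>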

Browsing the database using the Karaf db:* commands

As part of this tutorial I created some simple Karaf commands to work with databases from the Karaf shell. These commands proved quite handy, so I will try to move them to the Karaf project.

db:select <name>

When called without parameters the command lists all available DataSources. When called with the name of a DataSource it selects that DataSource for the following commands.

Example (the DataSource name stems from the blueprint above):
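db:select
db:select derbyds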

db:exec "<sql>"

Executes a SQL command.
Example:
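(illustrative statements; the exact columns may differ)

db:exec "create table person (name varchar(100), twitterName varchar(100))"
db:exec "insert into person (name, twitterName) values ('Christian Schneider', '@schneider_chris')"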

This creates a table person and adds a row to it.

db:tables

Shows the current tables in the database.

Example:
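db:tables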

db:query

Executes a query and shows the results.

Example:
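db:query "select * from person"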

Accessing the database using JDBC

The project db/examplejdbc shows how to use the datasource we installed and execute jdbc commands on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class DbExample. The test method is then called as init method and runs some jdbc statements on the DataSource. The DbExample class is completely independent of OSGi and can easily be tested standalone using the DbExampleTest, which shows how to manually set up the DataSource outside of OSGi.

Build and install

Build works as always using maven:
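mvn clean install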

In Karaf we just need to install our own bundle, as we have no special dependencies:
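# maven coordinates assumed; see the project pom
install -s mvn:net.lr.tutorial.karaf.db/db-examplejdbc/1.0-SNAPSHOT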

After installation the bundle should directly print the db info and the persisted person.

Accessing the database using JPA

For larger projects it is quite typical to use JPA instead of hand-crafted SQL. JPA has two big advantages over JDBC: you need to maintain less SQL code, and JPA provides dialects for the subtle differences between databases that you would otherwise have to code yourself. For this example we use OpenJPA as the JPA implementation. On top of it we add Apache Aries JPA, which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

The project examplejpa shows a simple project that implements a PersonService managing Person objects.
Person is just a java bean annotated with JPA @Entity. As OpenJPA needs to enhance the bytecode of the classes, we add the openjpa-maven-plugin to the pom.xml, which prepares the classes for JPA.

Additionally the project implements two Karaf shell commands, person:add and person:list, that make it easy to test the project.

persistence.xml

Like in a typical JPA project, the persistence.xml defines the DataSource lookup, database settings and persistent classes. The datasource is referred to using "osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/derbyds)", which looks up an OSGi service with the given interface and properties.
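A sketch of such a persistence.xml (the unit name person matches the blueprint below; provider and class names are assumptions):

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
    <persistence-unit name="person" transaction-type="JTA">
        <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
        <jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/derbyds)</jta-data-source>
        <class>net.lr.tutorial.karaf.db.examplejpa.Person</class>
    </persistence-unit>
</persistence>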

The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. This needs to be defined in the config of the maven bundle plugin in the pom. The Aries JPA container will scan for this attribute and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.
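In the pom this boils down to one instruction for the maven bundle plugin:

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <configuration>
        <instructions>
            <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
        </instructions>
    </configuration>
</plugin>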

blueprint.xml

We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
The following snippet is the interesting part:

This looks up the EntityManagerFactory OSGi service for the persistence unit person and injects an EntityManager into the PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success or rolls back on any exception thrown.

Build and Install

The project builds with mvn clean install as usual. A prerequisite is that the derby datasource is installed as described above. Then we have to install the bundles for openjpa, aries (jpa, transaction, proxy and jndi) and of course our db-examplejpa bundle.
See ReadMe.txt for the exact commands to use.

Test
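First we try to add a person with the person:add command (arguments assumed):

person:add "Christian Schneider" @schneider_chris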

This should create a person object with the above data and persist it to the database. Unfortunately this currently does not work; I guess I still have an error somewhere. So instead we use the db commands to populate the DB manually:
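(illustrative insert statement, matching the table layout used earlier)

db:exec "insert into person (name, twitterName) values ('Christian Schneider', '@schneider_chris')"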

Then we list the persisted persons:
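person:list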

Using pooling for datasources

In any real world scenario you will need pooling for the DataSource. To achieve this you have two good options:

1. Use a pooling datasource from the vendor:

DB     | Class
Derby  | org.apache.derby.jdbc.EmbeddedConnectionPoolDataSource
MySQL  | com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource
Oracle | oracle.jdbc.pool.OracleConnectionPoolDataSource

2. Use the PoolingDataSource from dbcp as described in this gist by Andreas Pieber: https://gist.github.com/2761628

Summary

In this tutorial we learned how to work with databases in Apache Karaf. We installed drivers for our database and a DataSource, and we were able to check and manipulate the DataSource using the db:* commands. In examplejdbc we learned how to acquire a datasource and work with it using plain jdbc. This is really easy but a bit verbose; you might want to try the spring JdbcTemplate to remove the cleanup code. Last but not least we also used jpa to access our database.

In theory JPA and OSGi work together really well. Keep in mind though that JPA support for OSGi is still quite fresh. It took me quite a while to get it all running, the documentation is quite sparse, and I have not yet been able to fix the persist issue. I will update the code and blog entry as soon as I have jpa persist working.

Back to Karaf Tutorials