This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0.

You can find the full source code on github Karaf-Tutorial/tasklist-ds

Declarative Services

Declarative Services (DS) is the biggest competitor to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

At its core DS works with XML files that define SCR components and their dependencies. They typically live in the OSGI-INF directory and are announced in the manifest using the header "Service-Component" with the path to the component descriptor file. Luckily it is not necessary to work with this XML directly, as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they have to be enabled by a setting in the configuration instructions of the plugin.

For more details see

DS vs Blueprint

Let us look into DS by comparing it to the better known blueprint. There are some important differences:

  1. Blueprint always works on a complete blueprint context. The context is started when all mandatory service dependencies are present; it then publishes all offered services. As a consequence a blueprint context cannot depend on services it offers itself. DS works on components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately, e.g. start and stop it. It is also possible that a bundle offers two components but only one is started because the dependencies of the other are not yet present.
  2. DS supports the OSGi service dynamics better than blueprint. Let's look at a simple example:
    You have a DS component and a blueprint context that each offer a service A and depend on a mandatory service B. Blueprint will wait on the first start for the mandatory service to be available. If it does not come up it will fail after a timeout and will not be able to recover from this. Once the blueprint context is up it stays up even if the mandatory service goes away. This is called service damping and has the goal of avoiding too many restarts of blueprint contexts. Services are injected into blueprint beans as dynamic proxies. Internally the proxy handles the replacement and unavailability of services. One problem this causes is that calls to an unavailable service will block the thread until a timeout and then throw a RuntimeException.
    In DS on the other hand a component's lifecycle is directly bound to its service dependencies. So a component will only be activated when all mandatory services are present and deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied and calls to it should always work.
  3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes to each other, this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
  4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects using extension namespaces; examples are Aries JPA, Aries Transactions, Aries Authz, CXF and Camel. So using these technologies in DS can be a bit more difficult.
  5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean. This is for example used for security as well as transaction handling. For this reason DS did not support JPA very well, as normal usage mandates interceptors. See below how JPA can work with DS.

So whether DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with other projects.


The JPA spec is based on JEE, which has a very special thread and interceptor model. In JEE you use session beans with a container managed EntityManager to manipulate JPA entities. It looks like this:
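A minimal sketch of such a stateless session bean (class and method names follow the surrounding text; the details are assumed):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class TaskServiceImpl implements TaskService {

    // container managed EntityManager for the persistence unit
    @PersistenceContext(unitName = "tasklist")
    private EntityManager em;

    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }
}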


In JEE calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction is committed, if there is an exception it is rolled back.
The calls go to a pool of TaskServiceImpl instances. Each of these instances is only used by one thread at a time. This is how the container copes with the fact that the EntityManager interface is not thread safe!

So the advantage of this model is that it looks simple and allows pretty small code. On the other hand it is a bit difficult to test such code outside a container, as you have to mimic the way the container works with this class. It is also difficult to access e.g. em from a test, as it is private and there is no setter.

Blueprint supports a coding style similar to the JEE example and implements it using special jpa and tx namespaces and interceptors that handle the transaction / EntityManager management.

DS and JPA

In DS each component is a singleton, so there is only one instance that has to cope with multi-threaded access. Working with the plain JEE concepts for JPA is therefore not possible in DS.

Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand, but this results in quite verbose and error-prone code.

Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution is the concept of a JpaTemplate together with support for closures in Java 8. To see what the code looks like, peek at the persistence chapter below.

Instead of the EntityManager we inject a thread safe JpaTemplate into our code. We put the JPA code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JpaTemplate then guarantees the same environment as JEE inside the closure: there is one EntityManager per thread, and the code participates in or creates a transaction whose commit/rollback works like in JEE.
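A sketch of how this looks in practice (the method bodies are assumptions; the JpaTemplate API is the one from Aries JPA 2.0):

import org.apache.aries.jpa.template.JpaTemplate;
import org.apache.aries.jpa.template.TransactionType;

public class TaskServiceImpl implements TaskService {

    private JpaTemplate jpa; // injected, e.g. via a DS @Reference

    public Task getTask(Integer id) {
        // read-only access inside a transaction of type Supports
        return jpa.txExpr(TransactionType.Supports, em -> em.find(Task.class, id));
    }

    public void addTask(Task task) {
        // runs in a Required transaction: created if missing, joined if present
        jpa.tx(em -> em.persist(task));
    }
}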

So this requires a little more code, but the advantage is that no special framework integration is needed.
The code can also be tested much more easily. See TaskServiceImplTest in the example.


  • features
  • model
  • persistence
  • ui


Defines the karaf features to install the example as well as all necessary dependencies.


This module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of model see the tasklist-blueprint example. The model is exactly the same here.



We define that we need an OSGi service with interface TaskService and a property "" with the value "tasklist".


The class InitHelper creates and persists a first task so the UI has something to show. It is also an example of what business code working with the task service can look like.
@Reference TaskService taskService injects the TaskService into the field taskService.
@Activate makes sure that addDemoTasks() is called after the injection of this component.
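As a sketch (the Task constructor arguments are assumptions):

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(immediate = true)
public class InitHelper {

    @Reference
    TaskService taskService;

    @Activate
    public void addDemoTasks() {
        taskService.addTask(new Task(1, "Just a sample task", "Some more info"));
    }
}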

Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate to avoid having to install a JTA transaction manager for the test. The test code shows that the TaskServiceImpl can indeed be used as plain Java code without any special tricks.


The tasklist-ui module consumes the TaskService as an OSGi service and publishes a servlet as an OSGi service. The pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over HTTP.
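A sketch of the servlet component (DS annotation style; the exact property syntax is an assumption):

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = Servlet.class, property = { "alias=/tasklist" })
public class TaskListServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    // the TaskService is injected by DS
    @Reference
    TaskService taskService;
}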


The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

The TaskListServlet is exported with the interface javax.servlet.Servlet and the service property alias="/tasklist".
So it is available at the URL http://localhost:8181/tasklist.


Make sure you use JDK 8 and run:

mvn clean install

Then download and extract Karaf 4.0.0, start karaf and execute the commands below.

Validate Installation

First we check that the JpaTemplate service is present for our persistence unit.
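In the Karaf shell this can be checked like this (assuming Karaf 4's service:list command, which accepts an interface name):

service:list JpaTemplate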

Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

A likely problem would be that the DataSource is missing, so let's also check it:
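Again a quick check via the shell (same assumption as above):

service:list DataSource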

This is how it should look. pax-jdbc-config created the DataSource from the configuration in "etc/org.ops4j.datasource-tasklist.cfg", using a DataSourceFactory with the property "". So the resulting DataSource should be pooled and fully ready for XA transactions.

Next we check that the DS components started:
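Assuming the Karaf scr feature is installed, the components can be listed with:

scr:list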

If any of the components is not active you can inspect it in detail like this:
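For example (the component name here is hypothetical):

scr:details net.lr.tasklist.persistence.impl.TaskServiceImpl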


Open the url below in your browser.

http://localhost:8181/tasklist

You should see a list with one task. You can add another task by opening a url like this one:

 http://localhost:8181/tasklist?add&taskId=2&title=Another Task


You may already know the CXF LoggingFeature. You used it like this:

Old CXF LoggingFeature
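A minimal sketch (assuming the pre-3.1 org.apache.cxf.feature.LoggingFeature on a JAX-WS server factory):

import org.apache.cxf.feature.LoggingFeature;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
factory.setServiceBean(new PersonServiceImpl()); // hypothetical service implementation
factory.setAddress("http://localhost:8181/services/person");
factory.getFeatures().add(new LoggingFeature()); // logs all requests and responses of this endpoint
factory.create();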

It allowed you to add logging to a CXF endpoint at compile time.

While this already helped a lot, it was not really enterprise ready. The logging could not be controlled much at runtime and contained too few details. This all changes with the new CXF logging support and the upcoming Karaf Decanter.

Logging feature in CXF 3.1.0

In CXF 3.1 this code was moved into a separate module and gained some new features.

  • Auto logging for existing CXF endpoints
  • Uses slf4j MDC to log meta data separately
  • Adds meta data for Rest calls
  • Adds MD5 message id and exchange id for correlation
  • Simple interface for writing your own appenders
  • Karaf decanter support to log into elastic search

Auto logging for existing CXF endpoints in Apache Karaf

Simply install and enable the new logging feature:

Logging feature in karaf
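A sketch of the commands (feature names assumed from the CXF 3.1 Karaf feature repository):

feature:repo-add cxf 3.1.0
feature:install cxf-features-logging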

Then install CXF endpoints like always, for example the PersonService from Karaf Tutorial Part 4 - CXF Services in OSGi. The client and endpoint in the example are not equipped with the LoggingFeature. Still the new logging feature will enhance the clients and endpoints and log all SOAP and REST calls using slf4j. So the logging data will be processed by pax-logging and by default end up in your karaf log.

A log entry looks like this:

Sample Log entry

This does not look very informative: you only see that it is an incoming request (REQ_IN) and the SOAP message in the log message. The logging feature provides a lot more information though, you just need to configure the pax logging config to show it.

Slf4j MDC values for meta data

This is the raw logging information you get for a SOAP call:

MDC.content-type: text/xml; charset=UTF-8
MDC.headers: {content-type=text/xml; charset=UTF-8, connection=keep-alive, Host=localhost:8181, Content-Length=251, SOAPAction="", User-Agent=Apache CXF 3.1.0, Accept=*/*, Pragma=no-cache, Cache-Control=no-cache}
message: <soap:Envelope xmlns:soap=""><soap:Body><ns2:getAll xmlns:ns2="" xmlns:ns3=""/></soap:Body></soap:Envelope>

Some things to note:

  • The logger name is <service namespace>.<ServiceName>.<type>; karaf by default cuts it down to just the type.
  • A lot of the details are in the MDC values

You need to change your pax logging config to make these visible.

You can use the logger name to fine tune which services you want to log this way. For example set the log level to WARN for noisy services to avoid logging them, or log some services to another file.

Message id and exchange id

The messageId allows you to uniquely identify messages even if you collect them from several servers. It is also transported over the wire, so you can correlate a request sent on one machine with the request received on another machine.

The exchangeId will be the same for an incoming request and the response sent out, or on the other side for an outgoing request and its response. This allows you to correlate requests and responses and so follow the conversations.

Simple interface to write your own appenders

Write your own LogSender and set it on the LoggingFeature to do custom logging. You have access to all the meta data from the class LogEvent.

So for example you could write your logs to one file per message or to JMS.
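A sketch of a custom sender (assuming the sender interface is org.apache.cxf.ext.logging.event.LogEventSender; adjust the names to the actual API):

import org.apache.cxf.ext.logging.event.LogEvent;
import org.apache.cxf.ext.logging.event.LogEventSender;

public class SysoutSender implements LogEventSender {

    @Override
    public void send(LogEvent event) {
        // LogEvent carries the meta data (type, message id, exchange id, ...) and the payload
        System.out.println(event.getType() + " " + event.getMessageId() + " " + event.getPayload());
    }
}

The sender is then set on the feature, e.g. with feature.setSender(new SysoutSender()).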

Karaf decanter support to write into elastic search

Many people use elastic search for their logging. Fortunately you do not have to write a special LogSender for this purpose. The standard CXF logging feature will already work.

It works like this:

  • CXF sends the messages as slf4j events which are processed by pax logging
  • Karaf Decanter LogCollector attaches to pax logging and sends all log events into the karaf message bus (EventAdmin topics)
  • Karaf Decanter ElasticSearchAppender sends the log events to a configurable elastic search instance

As Decanter also provides features for a local elastic search and kibana instance you are ready to go in just minutes.

Installing Decanter for CXF Logging
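A sketch of the commands (the Decanter feature repository URL and the feature names are assumptions based on the early Decanter releases):

feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features
feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana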

After that open a browser at http://localhost:8181/kibana. When decanter is released kibana will be fully set up; at the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.

Then you should see your cxf messages like this:

Kibana easily allows you to filter for specific services and correlate requests and responses.

This is just a preview of decanter. I will do a more detailed post when the first release is out.


Shows how to create a small application with a model, persistence layer and UI just with CDI annotations running as blueprint.


Writing blueprint xml is quite verbose and large blueprint xmls are difficult to keep in sync with code changes, especially refactorings. So many people prefer to do most declarations using annotations. Ideally these annotations should be standardized so it is clearly defined what they do. The Aries blueprint-maven-plugin allows you to configure blueprint using annotations. It scans one or more paths for annotated classes and creates a blueprint.xml in target/generated-resources. See the Aries documentation of the blueprint-maven-plugin.

Example tasklist-blueprint-cdi

This example shows how to create a small application with a model, persistence layer and UI completely without handwritten blueprint xml.

You can find the full source code on github Karaf-Tutorial/tasklist-cdi-blueprint


  • features
  • model
  • persistence
  • ui

Creating the bundles

The bundles are created using the maven bundle plugin. The plugin is only configured in the parent project and uses <_include>osgi.bnd</_include> to extract the OSGi configs into a separate file. So each bundle project just needs an osgi.bnd file, which is empty by default and can contain additional configs.

As bnd figures out most settings automatically, the osgi.bnd files are typically very small.


Defines the karaf features to install the example as well as all necessary dependencies.


The model project defines Task as a JPA entity and the service TaskService as an interface. As the model does not do any dependency injection, the blueprint-maven-plugin is not involved here.

Task JPA Entity
TaskService (CRUD operations for Tasks)

persistence.xml defines the persistence unit name as "tasklist" and specifies JTA transactions. The jta-data-source points to the JNDI name of the DataSource service named "tasklist". So apart from the JTA DataSource name it is a normal hibernate 4.3 style persistence definition with automatic schema creation.

One other important thing is the configuration for the maven-bundle-plugin.

Configurations for maven bundle plugin

The Meta-Persistence header points to the persistence.xml and is the trigger for Aries JPA to create an EntityManagerFactory for this bundle.
The Import-Package configuration imports two packages that are needed by the runtime enhancement done by hibernate. As this enhancement is not known at compile time, we need to give the maven bundle plugin these hints.


The tasklist-cdi-persistence bundle is the first module in the example to use the blueprint-maven-plugin. In the pom we set the scanpath to "". So all classes in this package and its sub packages are scanned.

In the pom we need a special configuration for the maven bundle plugin:
<Import-Package>!javax.transaction, *, javax.transaction;version="[1.1,2)"</Import-Package>
In the dependencies we use the transaction API 1.2, as it is the first spec version to include the @Transactional annotation. At runtime though we do not need this annotation, and karaf only provides the transaction API version 1.1. So we tweak the import to be ok with the version karaf offers. As soon as the transaction API 1.2 is available for karaf, this line will not be necessary anymore.


TaskServiceImpl uses quite a lot of the annotations. The class is marked as a blueprint bean using @Singleton. It is also marked to be exported as an OSGi service with the interface TaskService.

The class is marked as @Transactional, so all methods are executed in a JTA transaction of type Required. This means that if there is no transaction it will be created, and if there is one the method will take part in it. At the end of the transaction boundary the transaction is either committed or, in case of an exception, rolled back.

A managed EntityManager for the persistence unit "tasklist" is injected into the field em. It transparently provides one EntityManager per thread, which is created on demand and closed at the end of the transaction boundary.
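Put together, the class might look like this (the pax-cdi @OsgiServiceProvider annotation and the method bodies are assumptions based on the description above):

import javax.inject.Singleton;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import org.ops4j.pax.cdi.api.OsgiServiceProvider;

@Singleton
@OsgiServiceProvider(classes = { TaskService.class })
@Transactional
public class TaskServiceImpl implements TaskService {

    @PersistenceContext(unitName = "tasklist")
    EntityManager em;

    @Override
    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }

    @Override
    public void addTask(Task task) {
        em.persist(task);
    }
}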


The class InitHelper is not strictly necessary. It simply creates and persists a first task so the UI has something to show. Again @Singleton is necessary to mark the class for creation as a blueprint bean.
@Inject TaskService taskService injects the first bean of type TaskService found in the blueprint context into the field taskService; in our case this is the implementation above.
@PostConstruct makes sure that addDemoTasks() is called after injection of all fields of this bean.
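As a sketch (the Task constructor arguments are assumptions):

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.inject.Singleton;

@Singleton
public class InitHelper {

    @Inject
    TaskService taskService;

    @PostConstruct
    public void addDemoTasks() {
        taskService.addTask(new Task(1, "Just a sample task", "Some more info"));
    }
}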

Another interesting thing in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing to create the EntityManagerFactory without a JNDI DataSource, which would be difficult to supply. It also uses RESOURCE_LOCAL transactions so we do not need to set up a transaction manager. The test injects the plain EntityManager into the TaskServiceImpl class, so we have to manually begin and commit the transaction. This shows that you can test the JPA code with plain Java, which results in very simple and fast tests.

Servlet UI

The tasklist-ui module consumes the TaskService as an OSGi service and publishes a servlet as an OSGi service. The pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over HTTP.
In the pom this module needs the blueprint-maven-plugin with a suitable scanPath.


The TaskListServlet is exported with the interface javax.servlet.Servlet and the service property alias="/tasklist". So it is available at the URL http://localhost:8181/tasklist.

@Inject @OsgiService TaskService taskService creates a blueprint reference element to import an OSGi service with the interface TaskService and injects this service into the taskService field.
If there are several services of this interface, the filter property can be used to select one of them.
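A sketch of the servlet (the pax-cdi annotations @OsgiServiceProvider, @Properties, @Property and @OsgiService are assumptions consistent with the description above):

import javax.inject.Inject;
import javax.inject.Singleton;
import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import org.ops4j.pax.cdi.api.OsgiService;
import org.ops4j.pax.cdi.api.OsgiServiceProvider;
import org.ops4j.pax.cdi.api.Properties;
import org.ops4j.pax.cdi.api.Property;

@Singleton
@OsgiServiceProvider(classes = { Servlet.class })
@Properties(@Property(name = "alias", value = "/tasklist"))
public class TaskListServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Inject
    @OsgiService
    TaskService taskService;
}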

Angular JS / Bootstrap UI

The Angular UI is just a bundle that statically exposes an html and a js resource. It uses Angular and Bootstrap to create a nice looking and functional interface. The tasks are read and manipulated using the tasklist REST service. As the code runs completely on the client, there is not much to say about it from the blueprint point of view.

The example uses $http to do the REST requests. This is because I am not yet familiar enough with the $resource variant, which would suit the REST concepts better.

From the OSGi point of view the Angular UI bundle simply sets the Header Web-ContextPath: /tasklist and provides the html and js in the src/main/resources folder.


mvn clean install

Installation and test

See Readme.txt on github.


Some time ago I did some CXF performance measurements. See How fast is CXF ? - Measuring CXF performance on http, https and jms.

For cxf 3.0.0 I did some massive changes on the JMS transport. So I thought it was a good time to compare cxf 2.x and 3 in JMS performance. My goal was to at least reach the original performance. As my test system is different now, I am also measuring the cxf 2.x performance again to have a good comparison.

Test System

Dell Precision with Intel Core i7, 16 GB RAM, 256 GB SSD running Ubuntu Linux 13.10.

Test Setup

I am using a new version of my performance-tests project on github.

The test runs on one machine using one ActiveMQ server, one test server and one test client.

The test calls the example cxf CustomerService.

The following call types are supported:

  • One way: asynchronous call that sends one soap message to the server.

  • Synchronous request reply: sends one soap message to the server and waits for the reply.

    List<Customer> customers = customerService.getCustomersByName("test2");

  • Asynchronous request reply: sends one soap message to the server and returns without waiting; in this test we wait directly after the call for simplicity.

    Future<GetCustomersByNameResponse> resp = customerService.getCustomersByNameAsync("test2");
    GetCustomersByNameResponse res1 = resp.get();

The requests above are sent using an executor with a fixed number of threads.

For the test you can specify the total number of messages, the number of threads and the call type.
First the configured number of requests is sent for warmup and then again for the real measured test.
To run the test with cxf 3.0.0-SNAPSHOT you have to compile cxf from source.

Test execution

1. Run a standalone activemq 5.9.0 server with the activemq.xml from the github sources above.

2. Start the jms server in a new console from the project source using:

3. Start the jms client using:

Test results

The test is executed with several combinations of the parameters. Using the pom property cxf.version we also switch between cxf 2.7.10 and cxf 3.0.0-SNAPSHOT.

CXF 2.7.10

(results table: throughput per call type and number of threads)

CXF 3.0.0-SNAPSHOT

(results table: throughput per call type and number of threads)

The first interesting fact here is that one way messaging does not profit from the number of threads. One thread already seems to achieve the same performance as 40 threads. This is quite intuitive, as activemq needs to synchronize the calls on the one thread holding the jms connection. On the other hand using more processes also does not seem to improve the performance, so we seem to be quite at the limit of activemq here, which is good.

For request reply the performance seems to scale with the number of threads. This can be explained as we have to wait for the response and can use this time to send more requests.

One really astonishing thing here is that CXF 2.7.10 is really bad at synchronous request reply. This is because it uses consumer.receive in this case, while it uses a jms message listener for async calls. The jms message listener seems to perform much better than the consumer.receive case. For CXF 2.7.10 this means we can speed up our calls by using the asynchronous interface, even if it is less convenient.

The most important observation here is that CXF 3 performs a lot better for the synchronous request reply case. It is as fast as for the asynchronous case. The reason is that we now also use a message listener for synchronous calls as long as our correlation id is based on the conduit id prefix. This is the default so this case is vastly improved. CXF 3 is up to 5 times faster than CXF 2.7.10.

There is one downside still: if you use the message id as correlation id, or a user correlation id set on the cxf message, then cxf 3 will switch back to consumer.receive and will be as slow as CXF 2 again.

Apache karaf is an open source OSGi server developed by the Apache foundation. It provides very convenient management functionality on top of existing OSGi frameworks. Karaf is used in several open source and commercial solutions.

As so often, convenience and security do not go well together. In the case of karaf there is one known security hole in default installations that was introduced to make the initial experience with karaf very convenient. Karaf by default starts an ssh server. It also delivers a bin/client command that is mainly meant to connect to the local karaf server without a password.

Is your karaf server vulnerable?

Some simple steps to check if your karaf installation is open:

  • Check "etc/" for the attribute sshPort. Note this port number. By default it is 8101.
  • Do "ssh -p 8101 karaf@localhost". As expected it will ask for a password. (Keeping the default password is also dangerous, but that risk is quite obvious.)
  • Now just do bin/client -a 8101. You will get a shell without supplying a password. If this works, your server is vulnerable.

How does it work

The client command has a built-in ssh private key which is used when connecting to karaf. There is a config "etc/" in karaf which defines the public keys that are allowed to connect to karaf.

Why is this dangerous?

The private key inside the client command is fixed and publicly available (see karaf.key). As the mechanism also works with remote connections ("bin/client -a 8101 -h hostname"), anyone with access to your server ip can remotely control your karaf server. As the karaf shell also allows executing external programs (exec command), this even allows further access to your machine.

How to secure your server?

Simply remove the public key of the karaf user in the "etc/". Unfortunately this will stop the bin/client command from working.

Also make sure you change the password of the karaf user in "etc/".

Nicely timed as a christmas present, Apache Karaf 3.0.0 was released on the 24th of December. As a user of karaf 2.x you might ask yourself why you should switch to the new major version. Here are 10 reasons why the switch is worth the effort.

External dependencies are cached locally now

One of the coolest features of karaf is that it can load features and bundles from a maven repository. In karaf 2.x the drawback was that external dependencies that are not already in the system dir or the local maven repo were always loaded from the external repo. Karaf 3 now uses real maven artifact resolution, so it automatically caches downloaded artifacts in the local maven repo. The artifacts then only have to be loaded the first time.

Delay shell start till all bundles are up and running

A typical problem in karaf 2.x, and also in karaf 3 with default settings, is that the shell comes up before all bundles are started. So if you enter a command you might get an error that the command is unknown, simply because the respective bundle is not yet loaded. In karaf 3 you can set the property "karaf.delay.console=true". Karaf will then show a progress bar on startup and start the console when all bundles are up and running. If you are in a hurry you can still hit enter to start the shell faster.

Create kar archives from existing features

If you need some features for offline deployment then kar files are a nice alternative to setting up a maven repo or copying everything to the system dir. Most features are not available as kar files though. In karaf 3 the kar:create command allows you to create a kar file from any installed feature repository. Kar files now can also be defined as pure repositories, so they can be installed without installing all contained features.


feature:repo-add camel 2.12.2
kar:create camel-2.12.2

A kar file with all camel features will be created below data/kar. You can also select which features to include.

More consistent commands

In karaf 2.x the command naming was not very consistent. For karaf 3 we have the common scheme of <subject>:<command> or <subject>:<secondary-subject>-<command>. For example adding feature repos now is:

feature:repo-add <url or short name> ?<version>

Instead of features:chooseurl and features:addurl.

The various dev commands are now moved to the subjects they affect. Like bundle:watch instead of dev:watch or system:property instead of dev:system-property.

JDBC commands

Karaf 3 allows you to interact with jdbc databases directly from the shell. Examples are creating a datasource, executing a sql command and showing the results of a sql query. For more details see the blog article from JB: New enterprise JDBC feature.
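A sketch of a session (command names from the karaf 3 jdbc feature; the datasource and table names are hypothetical):

feature:install jdbc
jdbc:datasources
jdbc:query tasklist "select * from task"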

JMS commands

Similar to jdbc, karaf 3 now contains commands for jms interactions from the shell. You can create connection factories, send and consume messages. See the blog article from JB: New enterprise JMS feature.
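Again a sketch (command names from the karaf 3 jms feature; the broker and queue names are hypothetical, and the exact options may differ):

feature:install jms
jms:create -t activemq mybroker
jms:connectionfactories
jms:send mybroker myqueue "Hello World"
jms:consume mybroker myqueue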

Role based access control for commands and services

In karaf 2.x every user with shell access can use every command, and OSGi services are not protected at all. Karaf 3 contains role based access control for commands and services. So for example you can define a group of users that can only list bundles and do other non-admin tasks by simply changing some configuration files. Similarly you can protect any OSGi service so it can only be called from a process with a successful jaas login and the correct roles. More details about this feature can be found at

Diagnostics for blueprint and spring dm

In karaf 2.x it was difficult to diagnose problems with bundles using blueprint and spring dm. Karaf 3 now has the simple bundle:diag command that lists diagnostics for all bundles that did not start. For example you can see that a blueprint bundle waits for a namespace or that a blueprint file has a syntax error. Simply try this the next time your bundles do not work as expected.

Features for persistence frameworks

Karaf 3 now has features for openjpa and hibernate. So along with the already present jpa and jta features this makes it easy to install everything you need to do jpa based persistence.

Features for CDI and EJB

The cdi feature installs pax cdi. This allows using the full set of CDI annotations, including portable extensions, in Apache Karaf. The openejb feature even allows installing OpenEJB for full ejb support on Apache Karaf.

This only lists some of the most notable features of karaf 3. There is a lot more to discover. Take your time and dig around the features and commands.

In this talk from WJAX 2013 I show best practices for OSGi development in a practical example based around an online voting application.

The UI allows voting on a topic and shows the existing votes in a diagram. It is done in Javascript and HTML using jQuery and google graph. Additionally votes can be sent using twitter, irc and karaf commands. The image below shows how to vote for the topic camel using your twitter status.

The architecture of the example follows the typical separation of model, service layer and front end.

Architecture voting

In the talk I explain the difficulties people typically face with OSGi and how to solve them using karaf, maven bundle plugin and blueprint.

By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows you to define services in one container and use them in some other (even over machine boundaries).

For this tutorial we use the DOSGi subproject of CXF, which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 of the OSGi 4.2 Enterprise Specification).

Example on github

Introducing the example

Following the hands-on nature of these tutorials, we start with an example that can be tried in a few minutes and explain the details later.

Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A, the model and UI on container B, and install the dosgi runtime on both containers.


As DOSGi should not be active for all services on a system, the spec defines that the service property "osgi.remote.interfaces" triggers whether DOSGi should process the service. It expects the interface names that this service should export remotely. Setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets the property, so the service is exported with defaults.
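In blueprint the export can look like this (the bean id is an assumption; the interface and property follow the text above):

<service ref="taskServiceImpl" interface="net.lr.tasklist.model.TaskService">
    <service-properties>
        <entry key="osgi.remote.interfaces" value="*"/>
    </service-properties>
</service>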

Installing the service

To keep things simple we will install container A and B on the same system.

  • Download Apache Karaf 3.0.3
  • Unpack karaf into folder container_a
  • Start bin/karaf
  • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
  • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
  • feature:repo-add cxf-dosgi 1.6.0
  • feature:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
  • feature:repo-add
  • feature:install example-tasklist-persistence

After these commands the tasklist persistence service should be running and be published on zookeeper.

You can check the wsdl of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl. By starting the zookeeper client from a zookeeper distro you can optionally check that there is a node for the service below the osgi path.

Installing the UI

  • Unpack karaf into folder container_b
  • Start bin/karaf
  • config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8182
  • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
  • feature:repo-add cxf-dosgi 1.6.0
  • feature:install cxf-dosgi-discovery-distributed
  • feature:repo-add
  • feature:install example-tasklist-ui

The tasklist client ui should be in status Active/Created and the servlet should be available on http://localhost:8182/tasklist. If the ui bundle stays in status GracePeriod then DOSGi did not provide a local proxy for the persistence service.

How does it work


The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services, you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container. It only processes services that have the "osgi.remote.interfaces" property. If the property is found, the service is exported either with the named interfaces or with all interfaces it implements. The way the export works can be fine tuned using the CXF DOSGi configuration options.

By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the ip address of the host and the port 8181. All these options can be defined using a config admin configuration (See the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface is annotated with the JAX-WS @WebService annotation then the default is JAX-WS frontend and JAXB databinding.

The service information is then also propagated using the DOSGi discovery. In the example we use the Zookeeper discovery implementation, so the service metadata is written to a zookeeper server.

Container B monitors the local container for needed services. It then checks whether a needed service is available on the discovery impl (the zookeeper server in our case). For each service it finds, it creates a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.

On thursday I gave a talk about Apache Camel at W-JAX in Munich. As at the last conferences, there was a lot of interest in Camel and the room was really full. You can find the slides "Integration ganz einfach mit Apache Camel" here and the sources for the examples on github.

On Friday I joined the Eclipse 4 RCP workshop from Kai Tödter. Learned a lot about the new Eclipse. At last Eclipse RCP programming is becoming easier.

I just did my ApacheCon talk about OSGi best practices. It was the last slot, but the room was still almost full. In general the OSGi track had a lot of listeners and there were a lot of talks that involved Apache Karaf. So I think that is a nice sign for greater adoption of OSGi and Karaf.

You can find the Slides at google docs.

The demo application can be found inside my Karaf tutorial code at github.

Practical Camel example that polls from a database table and sends the contents as XML to a jms queue. The route uses a JTA transaction to synchronize the DB and JMS transactions. An error case shows how you can handle problems.

Route and Overview


The route starts with a jpa endpoint. It is configured with the fully qualified name of a JPA @Entity. From this entity camel knows which table to poll and how to read and remove the row. The jpa endpoint polls the table and creates a Person object for each row it finds. Then it calls the next step in the route with the Person object as body. The jpa component also needs to be set up separately, as it needs an EntityManagerFactory.

The onException clause makes the route do up to 3 retries with backoff time increasing by factor 2 each time. If it still fails the message is passed to a file in the error directory.

The next step transacted() marks the route as transactional. It requires that a TransactedPolicy is set up in the camel context. It then makes sure all steps in the route have the chance to participate in a transaction, so if an error occurs all actions can be rolled back and in case of success all can be committed together.

The marshal(df) step converts the Person object to xml using JAXB. It references a dataformat df that sets up the JAXBContext. For brevity this setup is not shown here.

The ExceptionDecider bean throws an exception if the name of the person is "error". This allows us to test the error handling later.

The last step to("jms:person") sends the xml representation of person to a jms queue. It requires that a JmsComponent named jms is setup in the camel context.

The second route simply listens on the person queue, reads and displays the content. In a production system this part would typically be in another module.

Person as JPA entity and JAXB class

The Person class acts as a JPA entity and as a JAXB annotated class. This allows us to use it in the camel-jpa component as well as during marshalling. Keep in mind though that this would be bad practice in production, as it ties the DB model and the format of the JMS message together. For real integrations it would be better to have separate beans for JPA and JAXB and do a manual conversion between them.

DataSource and ConnectionFactory setup

We use an XADataSource for Derby. As the default ConnectionFactory provided by ActiveMQ in Karaf is not XA ready, we define the broker and ConnectionFactory by hand. Together with the Karaf transaction feature these provide the basis for JTA transactions.

JPAComponent, JMSComponent and transaction setup

An important part of this example is to use the jpa and jms components in a JTA transaction. This allows rolling back both in case of an error.
Below is the blueprint context we use. We set up the JMS component with a ConnectionFactory referenced as an OSGi service.
The JPAComponent is set up with an EntityManagerFactory using the jpa:unit config from Aries JPA. See Apache Karaf Tutorial Part 6 - Database Access for how this works.
The TransactionManager provided by Aries transaction is referenced as an OSGi service, wrapped as a spring PlatformTransactionManager and injected into the JmsComponent and JPAComponent.

Running the Example

You can find the full example on github : JPA2JMS Example
Follow the Readme.txt to install the necessary Karaf features, bundles and configs.

Apart from this example we also install the dbexamplejpa. This allows us to use the person:add command defined there to populate the database table.
Open the Karaf console and type:

You should then see the following line in the log:

So what happened

We used the person:add command to add a row to the person table. Our route picks up this record, reads it and converts it to a Person object. Then it marshals it into xml and sends it to the jms queue person.
Our second route then picks up the jms message and shows the xml in the log.

Error handling

The route in the example contains a small bean that reacts on the name of the person object and throws an exception if the name is "error".
It also contains some error handling, so in case of an exception the xml is forwarded to an error directory.

So you can type the following in the Karaf console:

This time the log should not show the xml. Instead it should appear as a file in the error directory below your karaf installation.


The main things we learned in this tutorial are how to use the camel-jpa component to write to as well as poll from a database, and how to set up and use jta transactions to achieve solid error handling.

Back to Karaf Tutorials

Yesterday evening I did a talk about Apache Karaf and OSGi best practice together with Achim Nierbeck. Achim did the first part about OSGi basics and Apache Karaf and I did the second part about OSGi best practices.

One slide from the presentation about Karaf shows the big number of features that can be installed easily. So while the Karaf download is just about 8 MB you can install additional features transparently using maven that make it a full blown integration or enterprise application server.

OSGi best practices

In my part I showed how blueprint, OSGi Services and the config admin service can be used together to build a small example application consisting of the typical modules model, persistence and UI like shown below.

Except for the UI the example was from my first Karaf tutorial. While in the tutorial I used a simple Servlet UI that is merely able to display the Task objects, I wanted to show a fancier UI for this talk. Since I met the makers of Vaadin at the last W-JAX conference I got interested in this simple but powerful framework, so I gave it a spin. I had only about two days to prepare for the talk, so I was not sure if I would be able to create a good UI with it. Fortunately it was really easy to use and it took me only about a day to learn the basics and build a full CRUD UI for my Task example, complete with data binding to the persistence service.

One additional challenge was to use vaadin in OSGi. The good thing is that it is already a bundle, so a WAB (Web application bundle) deployment of my UI would have worked. I wanted it to be pure OSGi though, so I searched a bit and found the vaadinbridge from Neil Bartlett. It allows you to simply create a vaadin Application and factory class in a normal bundle and publish it as a service. The bridge will then pick it up and publish it to the HttpService.

The end result looks like this:

So you have a table with the current tasks (or to-do items). You can add and delete tasks with the menu bar. When you select a task you can edit it in the form below. Any changes are directly sent to the service and updated in the UI.
The nice thing about vaadin is that it handles the complete client-server communication and data binding for you. So this whole UI takes only about 120 lines of code. See ExampleApplication on github.

So the general idea of my part of the talk was to show how easy it is to create nice looking and architecturally sound applications using OSGi and Karaf. Many people still think OSGi will make your life harder for normal applications. I hope I could show that, with the right practices and tools, OSGi can even be simpler and more fun than Servlet Container or Java EE focused development.

I plan to add a little more extensive Tutorial about using Vaadin on OSGi to my Karaf Tutorial series soon so stay tuned.

Presentation: ApacheKaraf.pdf

Source Code:

Vaadin UI:

Tasklist Model and Persistence:

Achim adapted another Vaadin OSGi example from Kai Tödter to Maven and Karaf:

After the talk at the last W-JAX I now had the opportunity to speak about Apache Camel at JAX as well. This time I had a bigger room available, which was well filled with almost 200 listeners. This shows the big interest in Apache Camel. The presentation is attached directly below. This time I went deeper into OSGi and Apache Karaf as a runtime environment. I also had only 20 slides and used a larger part of the time for live demos. The talk was filmed and should be available on the JAX website soon. I will post an update with the link then.

After the planned end of the talk there was a free time block. Many of the listeners stayed to ask questions, and I also showed some more in-depth examples of bean integration and pojo messaging. As a summary I can say that Apache Camel is very popular, and especially developers and architects drive its adoption, while management often still bets on big commercial frameworks. Apache Karaf is perceived as a very interesting deployment environment. In most cases, however, there are difficulties with operations during a production rollout, as Apache Karaf and OSGi are not yet widespread and thus represent an additional server landscape.

Presentation: Apache Camel JAX12.pdf


Today version 2.2.6 of Karaf was released. It incorporates more than 80 fixed jira issues.

One important usability improvement is the features:chooseurl command. It allows adding feature files for well known products like Apache Camel or Apache CXF in a much simpler way than the features:addurl command.

For example to install the current Apache Camel you just have to type:
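A sketch (the version number here is hypothetical; chooseurl takes the product name and a version):

features:chooseurl camel 2.9.1
features:install camel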

Besides camel we currently already support activemq, cxf, jclouds, openejb and wicket.

In fact the command simply uses a config file etc/org.apache.karaf.features.repos.cfg that maps a product name to the feature file url and fills in the given version number. So if some product you would like to use is missing, you can simply add it yourself. If it is interesting for others too, please create a jira issue so we can add it in the next distro.

Currently the chooseurl command has completion for the product name but not for the version number. We plan to add completion for the version number in one of the next Karaf releases by evaluating the versions in the maven repos.

Btw. for camel and cxf you still have to replace the etc/ with the etc/ file to change some package exports of the system bundle.

This wednesday, the 4th of April, I will give a talk about the open source integration framework Apache Camel at the Java User Group in Karlsruhe. I will start with an overview of Camel and give some insight into the Camel architecture. The main part of the talk will be live coding, showing how easy integration can be with the Camel DSL.

See the webpage of the JUG Karlsruhe for some more details: