The Aries blueprint-maven-plugin lets you configure blueprint using annotations. It scans one or more packages for annotated classes and creates a blueprint.xml in target/generated-resources. See the Aries documentation of the blueprint-maven-plugin.
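A minimal pom sketch for enabling the plugin (the version and the exact parameter names are from memory, so check the Aries documentation for your release):

```xml
<plugin>
  <groupId>org.apache.aries.blueprint</groupId>
  <artifactId>blueprint-maven-plugin</artifactId>
  <version>1.4.0</version>
  <configuration>
    <!-- packages (including subpackages) to scan for annotated classes -->
    <scanPaths>
      <scanPath>net.lr.tasklist.persistence.impl</scanPath>
    </scanPaths>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>blueprint-generate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The generated blueprint.xml ends up in target/generated-resources and is packaged into the bundle like a handwritten one.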
This example shows how to create a small application with a model, persistence layer and UI completely without handwritten blueprint xml.
You can find the full source code on github Karaf-Tutorial/tasklist-cdi-blueprint
Defines the karaf features to install the example as well as all necessary dependencies.
The model project defines Task as a JPA entity and the service interface TaskService. As the model does not use dependency injection, the blueprint-maven-plugin is not involved here.
The persistence.xml defines the persistence unit name "tasklist" and JTA transactions. The jta-data-source points to the JNDI name of the DataSource service named "tasklist". So apart from the JTA DataSource name it is a normal Hibernate 4.3 style persistence definition with automatic schema creation.
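A sketch of such a persistence.xml (the jta-data-source URL follows the Aries JPA convention for looking up an OSGi DataSource service by its JNDI service name; the Hibernate property shown is just one way to get automatic schema creation):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="tasklist" transaction-type="JTA">
    <!-- resolves the OSGi DataSource service published with osgi.jndi.service.name=tasklist -->
    <jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=tasklist)</jta-data-source>
    <properties>
      <!-- automatic schema creation -->
      <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
    </properties>
  </persistence-unit>
</persistence>
```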
One other important thing is the configuration for the maven-bundle-plugin.
The Meta-Persistence header points to the persistence.xml and is the trigger for Aries JPA to create an EntityManagerFactory for this bundle.
The Import-Package configuration imports two packages that are needed by the runtime enhancement done by Hibernate. As this enhancement is not known at compile time, we need to give the maven-bundle-plugin these hints.
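Put together, the bundle plugin instructions look roughly like this (the two extra packages are the ones Hibernate's proxy enhancement typically needs; the exact list depends on your provider version):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- triggers Aries JPA to create an EntityManagerFactory for this bundle -->
      <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
      <!-- packages needed by Hibernate's runtime proxies; invisible at compile time -->
      <Import-Package>*, org.hibernate.proxy, javassist.util.proxy</Import-Package>
    </instructions>
  </configuration>
</plugin>
```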
The tasklist-cdi-persistence bundle is the first module in the example to use the blueprint-maven-plugin. In the pom we set the scanPath to "net.lr.tasklist.persistence.impl", so all classes in this package and its subpackages are scanned.
In the pom we need a special configuration for the maven bundle plugin:
<Import-Package>!javax.transaction, *, javax.transaction;version="[1.1,2)"</Import-Package>
In the dependencies we use the transaction API 1.2, as it is the first spec version to include the @Transactional annotation. At runtime, though, we do not need this annotation, and Karaf only provides transaction API version 1.1. So we tweak the import to accept the version Karaf offers. As soon as the transaction API 1.2 is available in Karaf this line will no longer be necessary.
TaskServiceImpl uses quite a lot of annotations. The class is marked as a blueprint bean using @Singleton. It is also marked to be exported as an OSGi service with the interface TaskService.
The class is marked @Transactional, so all methods are executed in a JTA transaction of type Required. This means that if there is no transaction, one is created; if there is a transaction, the method takes part in it. At the end of the transaction boundary the transaction is either committed or, in case of an exception, rolled back.
A managed EntityManager for the persistence unit "tasklist" is injected into the field em. It transparently provides one EntityManager per thread which is created on demand and closed at the end of the transaction boundary.
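The blueprint the plugin generates for this bean would look roughly like the following (namespace versions and attribute spellings from memory; the actual plugin output may differ in detail):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.1.0"
           xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.1.0">
  <bean id="taskServiceImpl" class="net.lr.tasklist.persistence.impl.TaskServiceImpl">
    <!-- managed EntityManager for the persistence unit, injected into the field em -->
    <jpa:context unitname="tasklist" property="em"/>
    <!-- all methods run in a Required JTA transaction -->
    <tx:transaction method="*" value="Required"/>
  </bean>
  <service ref="taskServiceImpl" interface="net.lr.tasklist.model.TaskService"/>
</blueprint>
```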
The class InitHelper is not strictly necessary. It simply creates and persists a first task so the UI has something to show. Again the @Singleton is necessary to mark the class for creation as a blueprint bean.
@Inject TaskService taskService injects the first bean of type TaskService it finds in the blueprint context into the field taskService. In our case this is the implementation above.
@PostConstruct makes sure that addDemoTasks() is called after injection of all fields of this bean.
Another interesting thing in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing to create the EntityManagerFactory without a JNDI DataSource, which would be difficult to supply. It also uses RESOURCE_LOCAL transactions so we do not need to set up a transaction manager. The test injects a plain EntityManager into the TaskServiceImpl class, so we have to begin and commit the transaction manually. This shows that you can test the JPA code with plain Java, which results in very simple and fast tests.
The tasklist-ui module uses the TaskService as an OSGi service and publishes a servlet as an OSGi service. The Pax Web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over HTTP.
In the pom this module needs the blueprint-maven-plugin with a suitable scanPath.
The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist". So it is available on the url http://localhost:8181/tasklist.
@Inject @OsgiService TaskService taskService creates a blueprint reference element to import an OSGi service with the interface TaskService. It then injects this service into the taskService field of the above class.
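The blueprint generated for the UI bundle would look roughly like this (a sketch; whether the plugin emits property or field injection for taskService depends on the plugin version):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- created by @Inject @OsgiService: imports the TaskService from the service registry -->
  <reference id="taskService" interface="net.lr.tasklist.model.TaskService"/>
  <bean id="taskListServlet" class="net.lr.tasklist.ui.TaskListServlet">
    <property name="taskService" ref="taskService"/>
  </bean>
  <!-- exported for the Pax Web whiteboard; the alias property defines the URL -->
  <service ref="taskListServlet" interface="javax.servlet.Servlet">
    <service-properties>
      <entry key="alias" value="/tasklist"/>
    </service-properties>
  </service>
</blueprint>
```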
If there are several services of this interface the filter property can be used to select one of them.
mvn clean install
Installation and test
See Readme.txt on github.
Some time ago I did some CXF performance measurements. See How fast is CXF ? - Measuring CXF performance on http, https and jms.
For CXF 3.0.0 I made some massive changes to the JMS transport. So I thought it was a good time to compare CXF 2.x and 3 JMS performance. My goal was to at least reach the original performance. As my test system is different now, I am also measuring the CXF 2.x performance again to have a good comparison.
Dell Precision with Intel Core i7, 16 GB RAM, 256 GB SSD running Ubuntu Linux 13.10.
I am using a new version of my performance-tests project on github.
The test runs on one machine using one activemq Server, one test server and one test client.
The test calls the example cxf CustomerService.
The following call types are supported:
One way: an asynchronous call that sends one SOAP message to the server without waiting for a reply.
Synchronous request reply: sends one SOAP message to the server and waits for the reply:
List<Customer> customers = customerService.getCustomersByName("test2");
Asynchronous request reply: sends one SOAP message to the server and returns immediately; the reply is later available from a Future:
Future<GetCustomersByNameResponse> resp = customerService.getCustomersByNameAsync("test2");
The requests above are sent using an executor with a fixed number of threads.
For the test you can specify the total number of messages, the number of threads and the call type.
First the given number of requests is sent for warm up, then again for the real measured test.
To run the test with cxf 3.0.0-SNAPSHOT you have to compile cxf from source.
1. Run a standalone activemq 5.9.0 server with the activemq.xml from the github sources above.
2. Start the jms server in a new console from the project source using:
3. Start the jms client using:
The test is executed with several combinations of the parameters. Using the pom property cxf.version we also switch between cxf 2.7.10 and cxf 3.0.0-SNAPSHOT.
The first interesting fact here is that one way messaging does not profit from the number of threads. One thread already seems to achieve the same performance as 40 threads. This is quite intuitive, as ActiveMQ needs to synchronize the calls on the one thread holding the JMS connection. On the other hand, using more processes also does not seem to improve the performance, so we seem to be at the limit of ActiveMQ here, which is good.
For request reply the performance seems to scale with the number of threads. This can be explained as we have to wait for the response and can use this time to send some more requests.
One really astonishing thing here is that CXF 2.7.10 is really slow for synchronous request reply. This is because it uses consumer.receive in this case, while it uses a JMS message listener for async calls. So the JMS message listener seems to perform much better than consumer.receive. For CXF 2.7.10 this means we can speed up our calls by using the asynchronous interface, even if it is more inconvenient.
The most important observation here is that CXF 3 performs a lot better for the synchronous request reply case. It is as fast as for the asynchronous case. The reason is that we now also use a message listener for synchronous calls as long as our correlation id is based on the conduit id prefix. This is the default so this case is vastly improved. CXF 3 is up to 5 times faster than CXF 2.7.10.
There is one down side still. If you use message id as correlation id or a user correlation id set on the cxf message then cxf 3 will switch back to consumer.receive and will be as slow as CXF 2 again.
Apache karaf is an open source OSGi server developed by the Apache foundation. It provides very convenient management functionality on top of existing OSGi frameworks. Karaf is used in several open source and commercial solutions.
As so often, convenience and security do not go well together. In the case of Karaf there is one known security hole in default installations, introduced to make the initial experience with Karaf very convenient. Karaf by default starts an SSH server. It also delivers a bin/client command that is mainly meant to connect to the local Karaf server without a password.
Is your karaf server vulnerable?
Some simple steps to check if your Karaf installation is open:
- Check "etc/org.apache.karaf.shell.cfg" for the attribute sshPort and note this port number. By default it is 8101.
- Do "ssh -p 8101 karaf@localhost". As expected it will ask for a password. This can also be dangerous if you did not change the default password, but that is quite obvious.
- Now just do "bin/client -a 8101". You will get a shell without supplying a password. If this works, your server is vulnerable.
How does it work?
The client command has a built in ssh private key which is used when connecting to karaf. There is a config "etc/keys.properties" in karaf which defines the public keys that are allowed to connect to karaf.
Why is this dangerous?
The private key inside the client command is fixed and publicly available, see karaf.key. As the mechanism also works with remote connections ("bin/client -a 8101 -h hostname"), anyone with access to your server IP can remotely control your Karaf server. As the Karaf shell also allows executing external programs (exec command), this even allows further access to your machine.
How to secure your server ?
Simply remove the public key of the karaf user in the "etc/keys.properties". Unfortunately this will stop the bin/client command from working.
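Concretely, that means commenting out or deleting the karaf entry in the key file (the key value and the role suffix below are placeholders; the line format varies between Karaf versions):

```properties
# etc/keys.properties
# comment out or remove this line to lock out the built-in client key:
# karaf=AAAAB3Nza...publicKeyUsedByBinClient...,_g_:admingroup
```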
Also make sure you change the password of the karaf user in "etc/users.properties".
Nicely timed as a Christmas present, Apache Karaf 3.0.0 was released on the 24th of December. As a user of Karaf 2.x you might ask yourself why you should switch to the new major version. Here are 10 reasons why the switch is worth the effort.
- 1 External dependencies are cached locally now
- 2 Delay shell start till all bundles are up and running
- 3 Create kar archives from existing features
- 4 More consistent commands
- 5 JDBC commands
- 6 JMS commands
- 7 Role based access control for commands and services
- 8 Diagnostics for blueprint and spring dm
- 9 Features for persistence frameworks
- 10 Features for CDI and EJB
External dependencies are cached locally now
One of the coolest features of Karaf is that it can load features and bundles from a maven repository. In Karaf 2.x the drawback was that external dependencies that are not already in the system dir or the local maven repository were always loaded from the external repository. Karaf 3 now uses real maven artifact resolution, so it automatically caches downloaded artifacts in the local maven repository. So the artifacts only have to be loaded the first time.
Delay shell start till all bundles are up and running
A typical problem in Karaf 2.x, and also Karaf 3 with default settings, is that the shell comes up before all bundles are started. So if you enter a command you might get an error that the command is unknown - simply because the respective bundle is not yet loaded. In Karaf 3 you can set the property "karaf.delay.console=true". Karaf will then show a progress bar on startup and start the console when all bundles are up and running. If you are in a hurry you can still press enter to start the shell earlier.
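The property goes into the main config file:

```properties
# etc/config.properties
karaf.delay.console=true
```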
Create kar archives from existing features
If you need some features for offline deployment then kar files are a nice alternative to setting up a maven repository or copying everything to the system dir. Most features are not available as kar files though. In Karaf 3 the kar:create command lets you create a kar file from any installed feature repository. Kar files can now also be defined as pure repositories, so they can be installed without installing all contained features.
feature:repo-add camel 2.12.2
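With the repository added, kar:create can build the archive from it (the repository name below is an assumption; check the exact name with feature:repo-list first):

```
karaf@root()> kar:create camel-2.12.2
```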
A kar file with all camel features will be created below data/kar. You can also select which features to include.
More consistent commands
In karaf 2.x the command naming was not very consistent. For karaf 3 we have the common scheme of <subject>:<command> or <subject>:<secondary-subject>-<command>. For example adding feature repos now is:
feature:repo-add <url or short name> ?<version>
Instead of features:chooseurl and features:addurl.
The various dev commands are now moved to the subjects they affect. Like bundle:watch instead of dev:watch or system:property instead of dev:system-property.
JDBC commands
Karaf 3 allows you to interact directly with JDBC databases from the shell. Examples are creating a datasource, executing a SQL command and showing the results of a SQL query. For more details see the blog article from JB: New enterprise JDBC feature.
JMS commands
Similar to JDBC, Karaf 3 now contains commands for JMS interactions from the shell. You can create connection factories, send and consume messages. See the blog article from JB: New enterprise JMS feature.
Role based access control for commands and services
In Karaf 2.x every user with shell access can use every command, and OSGi services are not protected at all. Karaf 3 contains role based access control for commands and services. So for example you can define a group of users that can only list bundles and do other non admin tasks, simply by changing some configuration files. Similarly, you can protect any OSGi service so it can only be called from a process with a successful JAAS login and the correct roles. More details about this feature can be found at http://coderthoughts.blogspot.de/2013/10/role-based-access-control-for-karaf.html.
Diagnostics for blueprint and spring dm
In karaf 2.x it was difficult to diagnose problems with bundles using blueprint and spring dm. Karaf 3 now has the simple bundle:diag command that lists diagnostics about all bundles that did not start. For example you can see that a blueprint bundle waits for a namespace or that a blueprint file has a syntax error. Simply try this the next time your bundles do not work like expected.
Features for persistence frameworks
Karaf 3 now has features for OpenJPA and Hibernate. So along with the already present jpa and jta features this makes it easy to install everything you need for JPA based persistence.
Features for CDI and EJB
The cdi feature installs Pax CDI. This allows using the full set of CDI annotations, including portable extensions, in Apache Karaf. The openejb feature even allows installing OpenEJB for full EJB support on Apache Karaf.
In this talk from WJAX 2013 I show best practices for OSGi development in a practical example based around an online voting application.
The architecture of the example follows the typical separation of model, service layer and front end.
In the talk I explain the difficulties people typically face with OSGi and how to solve them using karaf, maven bundle plugin and blueprint.
- Slides Best practices für Services und Integration in OSGi (in german)
- for english slides check out my apache con presentation OSGI Best practices shown on Apache Karaf.
- Check out the code of the voting example at github
By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows to define services in one container and use them in some other (even over machine boundaries).
Introducing the example
Following the hands-on nature of this tutorial, we start with an example that can be tried in a few minutes and explain the details later.
Our example is again the tasklist example from part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A, the model and UI on container B, and install the DOSGi runtime on both containers.
As DOSGi should not be active for all services on a system, the spec defines the service property "osgi.remote.interfaces" to control whether DOSGi should process a service. It expects the interface names that the service should export remotely. Setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets this property, so the service is exported with defaults.
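In blueprint, exporting a service for remote consumption is just a matter of adding that property (a sketch; bean id and interface taken from the tasklist example):

```xml
<service ref="taskServiceImpl" interface="net.lr.tasklist.model.TaskService">
  <service-properties>
    <!-- "*" exports the service with all interfaces it implements -->
    <entry key="osgi.remote.interfaces" value="*"/>
  </service-properties>
</service>
```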
Installing the service
To keep things simple we will install container A and B on the same system.
- Download Apache Karaf 2.2.10
- Unpack into folder container_a
- Copy etc/jre.properties.cxf into etc/jre.properties
- Start bin/karaf
- config:propset -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
- config:propset -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
- features:chooseurl cxf-dosgi 1.4.0
- features:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
- features:addurl mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
- features:install example-tasklist-persistence
After these commands the tasklist persistence service should be running and be published on zookeeper.
You can check the WSDL of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl. By starting the zookeeper client zkCli.sh from a zookeeper distribution you can optionally check that there is a node for the service below the osgi path.
Installing the UI
- Unpack into folder container_b
- Copy etc/jre.properties.cxf into etc/jre.properties
- Start bin/karaf
- config:propset -p org.ops4j.pax.web org.osgi.service.http.port 8182
- config:propset -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
- features:chooseurl cxf-dosgi 1.4.0
- features:install cxf-dosgi-discovery-distributed
- features:addurl mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
- features:install example-tasklist-ui
The tasklist client UI should be in status Active/Created and the servlet should be available at http://localhost:8182/tasklist. If the UI bundle stays in status GracePeriod then DOSGi did not provide a local proxy for the persistence service.
How does it work?
The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services, you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container. It only processes services that have the "osgi.remote.interfaces" property. If the property is found, the service is exported either with the named interfaces or with all interfaces it implements. The way the export works can be fine tuned using the CXF DOSGi configuration options.
By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the ip address of the host and the port 8181. All these options can be defined using a config admin configuration (See the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface is annotated with the JAX-WS @WebService annotation then the default is JAX-WS frontend and JAXB databinding.
The service information is then also propagated using DOSGi discovery. In the example we use the Zookeeper discovery implementation, so the service metadata is written to a zookeeper server.
Container B monitors the local container for needed services. It then checks whether a needed service is available in the discovery implementation (the zookeeper server in our case). For each service it finds, it creates a local proxy that acts as an OSGi service implementing the requested interface. Calls to this proxy are then serialized and sent to the remote service endpoint.
So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.
On thursday I had a talk about Apache Camel at W-JAX in Munich. Like on the last conferences there was a lot of interest in Camel and the room was really full. You can find the slides "Integration ganz einfach mit Apache Camel" here and the sources for the examples on github.
On Friday I joined the Eclipse 4 RCP workshop from Kai Tödter. Learned a lot about the new Eclipse. At last Eclipse RCP programming is becoming easier.
I just did my ApacheCon talk about OSGi best practices. It was the last slot, but the room was still almost full. In general the OSGi track had a lot of listeners and there were a lot of talks that involved Apache Karaf. I think that is a nice sign of growing adoption of OSGi and Karaf.
You can find the Slides at google docs.
The demo application can be found inside my Karaf tutorial code at github.
Practical Camel example that polls from a database table and sends the contents as XML to a jms queue. The route uses a JTA transaction to synchronize the DB and JMS transactions. An error case shows how you can handle problems.
Route and Overview
The route starts with a jpa endpoint. It is configured with the fully qualified name of a JPA @Entity. From this entity Camel knows which table to poll and how to read and remove each row. The jpa endpoint polls the table and creates a Person object for each row it finds. Then it calls the next step in the route with the Person object as body. The jpa component also needs to be set up separately, as it needs an EntityManagerFactory.
The onException clause makes the route do up to 3 retries with backoff time increasing by factor 2 each time. If it still fails the message is passed to a file in the error directory.
The next step, transacted(), marks the route as transactional. It requires that a TransactedPolicy is set up in the camel context. It then makes sure all steps in the route have the chance to participate in a transaction: if an error occurs, all actions can be rolled back; in case of success, all are committed together.
The marshal(df) step converts the Person object to xml using JAXB. It references a dataformat df that sets up the JAXBContext. For brevity this setup is not shown here.
The ExceptionDecidet bean allows triggering an exception if the name of the person is "error". This lets us test the error handling later.
The last step to("jms:person") sends the xml representation of person to a jms queue. It requires that a JmsComponent named jms is setup in the camel context.
This second route simply listens on the person queue, then reads and displays the content. In a production system this part would typically be in another module.
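Expressed in blueprint XML, the two routes could look roughly like this (a sketch: the entity package, the bean id and the log endpoint name are placeholders, and the original tutorial uses the Java DSL, so see the linked sources for the real route):

```xml
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
  <!-- up to 3 retries with exponential backoff, then move the message to an error folder -->
  <onException>
    <exception>java.lang.Exception</exception>
    <redeliveryPolicy maximumRedeliveries="3" useExponentialBackOff="true" backOffMultiplier="2"/>
    <handled><constant>true</constant></handled>
    <to uri="file:error"/>
  </onException>
  <route>
    <!-- poll the table behind the Person entity; each row becomes a Person body -->
    <from uri="jpa://net.lr.tutorial.karaf.cameljpa.Person"/>
    <transacted/>
    <marshal><jaxb contextPath="net.lr.tutorial.karaf.cameljpa"/></marshal>
    <!-- throws if the person is named "error", to exercise the error handling -->
    <bean ref="exceptionDecider"/>
    <to uri="jms:person"/>
  </route>
  <route>
    <!-- consumer side: read the XML from the queue and log it -->
    <from uri="jms:person"/>
    <to uri="log:personreceiver"/>
  </route>
</camelContext>
```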
Person as JPA entity and JAXB class
The Person class acts as a JPA entity and as a JAXB annotated class. This allows us to use it in the camel-jpa component as well as during marshalling. Keep in mind though that this would rather be a bad practice in production, as it ties the DB model and the format of the JMS message together. For real integrations it would be better to have separate beans for JPA and JAXB and convert between them manually.
DataSource and ConnectionFactory setup
We use an XADataSource for Derby (see https://github.com/cschneider/Karaf-Tutorial/blob/master/db/datasource/datasource-derby.xml). As the default ConnectionFactory provided by ActiveMQ in Karaf is not XA ready, we define the broker and ConnectionFactory by hand (see https://github.com/cschneider/Karaf-Tutorial/blob/master/cameljpa/jpa2jms/localhost-broker.xml). Together with the Karaf transaction feature these provide the basis for JTA transactions.
JPAComponent, JMSComponent and transaction setup
An important part of this example is using the jpa and jms components in a JTA transaction. This allows rolling back both in case of an error.
Below is the blueprint context we use. We setup the JMS component with a ConnectionFactory referenced as an OSGi service.
The JPAComponent is setup with an EntityManagerFactory using the jpa:unit config from Aries JPA. See Apache Karaf Tutorial Part 6 - Database Access for how this works.
The TransactionManager provided by Aries transaction is referenced as an OSGi service, wrapped as a Spring PlatformTransactionManager and injected into the JmsComponent and JpaComponent.
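The wiring for the JMS side could look roughly like this in blueprint (a sketch; bean ids are placeholders and the JPA component setup is omitted, see the tutorial sources for the full context):

```xml
<!-- XA-capable ConnectionFactory and the Aries TransactionManager from the service registry -->
<reference id="connectionFactory" interface="javax.jms.ConnectionFactory"/>
<reference id="osgiTxManager" interface="javax.transaction.TransactionManager"/>

<!-- wrap the JTA TransactionManager as a Spring PlatformTransactionManager -->
<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager">
  <property name="transactionManager" ref="osgiTxManager"/>
</bean>

<!-- the "jms" component used by to("jms:person"), made transactional -->
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
  <property name="connectionFactory" ref="connectionFactory"/>
  <property name="transactionManager" ref="txManager"/>
  <property name="transacted" value="true"/>
</bean>
```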
Running the Example
Apart from this example we also install the dbexamplejpa. This allows us to use the person:add command defined there to populate the database table.
Open the Karaf console and type:
You should then see the following line in the log:
So what happened?
We used the person:add command to add a row to the person table. Our route picks up this record, reads it and converts it to a Person object. Then it marshals it into XML and sends it to the JMS queue person.
Our second route then picks up the jms message and shows the xml in the log.
The route in the example contains a small bean that reacts on the name of the person object and throws an exception if the name is "error".
It also contains some error handling so in case of an exception the xml is forwarded to an error directory.
So you can type the following in the Karaf shell:
This time the log should not show the xml. Instead it should appear as a file in the error directory below your karaf installation.
In this tutorial the main things we learned are how to use the camel-jpa component to write to and poll from a database, and how to set up and use JTA transactions to achieve solid error handling.
Back to Karaf Tutorials
Yesterday evening I did a talk about Apache Karaf and OSGi best practice together with Achim Nierbeck. Achim did the first part about OSGi basics and Apache Karaf and I did the second part about OSGi best practices.
One slide from the presentation about Karaf shows the big number of features that can be installed easily. So while the Karaf download is just about 8 MB you can install additional features transparently using maven that make it a full blown integration or enterprise application server.
OSGi best practices
In my part I showed how blueprint, OSGi Services and the config admin service can be used together to build a small example application consisting of the typical modules model, persistence and UI like shown below.
Except for the UI the example was from my first Karaf tutorial. While in the tutorial I used a simple servlet UI that merely displays the Task objects, I wanted to show a fancier UI for this talk. Since I met the makers of Vaadin at the last W-JAX conferences, I got interested in this simple but powerful framework. So I gave it a spin. I had only about two days to prepare for the talk, so I was not sure if I would be able to create a good UI with it. Fortunately it was really easy to use and it took me only about a day to learn the basics and build a full CRUD UI for my Task example, complete with data binding to the persistence service.
One additional challenge was to use Vaadin in OSGi. The good thing is that it is already a bundle, so a WAB (web application bundle) deployment of my UI would have worked. I wanted it to be pure OSGi though, so I searched a bit and found the vaadin bridge from Neil Bartlett. It allows you to simply create a Vaadin application and factory class in a normal bundle and publish it as a service. The bridge will then pick it up and publish it to the HttpService.
The end result looks like this:
So you have a table with the current tasks (or to do items). You can add and delete tasks with the menu bar. When you select a task you can edit it in the form below. Any changes are directly sent to the service and updated in the UI.
The nice thing about vaadin is that it handles the complete client server communication and databinding for you. So this whole UI takes only about 120 lines of code. See ExampleApplication on github.
So the general idea of my part of the talk was to show how easy it is to create nice looking and architecturally sound applications using OSGi and Karaf. Many people still think OSGi will make your life harder for normal applications. I hope I could show that, when using the right practices and tools, OSGi can be even simpler and more fun than Servlet container or Java EE focused development.
I plan to add a little more extensive Tutorial about using Vaadin on OSGi to my Karaf Tutorial series soon so stay tuned.
Tasklist Model and Persistence: https://github.com/cschneider/Karaf-Tutorial/tree/master/tasklist
Achim adapted another Vaadin OSGi example from Kai Tödter to Maven and Karaf: https://github.com/ANierbeck/osgi-vaadin-demo
After the talk at the last W-JAX I now had the opportunity to speak about Apache Camel at JAX as well. This time I had a bigger room, which was well filled with almost 200 listeners. This shows the great interest in Apache Camel. The presentation is attached below. This time I focused more on OSGi and Apache Karaf as a runtime environment. I had only 20 slides and used a larger part of the time for live demos. The talk was also filmed and should soon be available on the JAX website. I will post an update with the link then.
After the planned end of the talk there was a free time slot. Many of the listeners stayed to ask questions, and I showed some more in-depth examples of bean integration and POJO messaging. As a resume I can say that Apache Camel is very popular, and it is especially developers and architects who drive its adoption, while management still often relies on big commercial frameworks. Apache Karaf is perceived as a very interesting deployment environment. In most cases, however, there are difficulties with operations during the production rollout, as Apache Karaf and OSGi are still not very widespread and thus represent an additional server landscape.
Presentation: Apache Camel JAX12.pdf
One important usability improvement is the features:chooseurl command. It allows adding feature files for well known products like Apache Camel or Apache CXF in a much simpler way than the features:addurl command.
For example to install the current Apache Camel you just have to type:
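For example (the version number here is just an illustration, use whatever Camel release you need):

```
karaf@root> features:chooseurl camel 2.10.3
```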
Besides camel we currently already support activemq, cxf, jclouds, openejb and wicket.
In fact the command simply uses a config file etc/org.apache.karaf.features.repos.cfg that maps a product name to the feature file URL and fills in the given version number. So if a product you would like to use is missing, you can simply add it yourself. If it is interesting for others too, please create a jira issue so we can add it in the next distribution.
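An entry in that file looks roughly like this (illustrative; the exact URL pattern and version placeholder syntax differ between Karaf releases, so check the file in your distribution):

```properties
# etc/org.apache.karaf.features.repos.cfg
camel=mvn:org.apache.camel.karaf/apache-camel/xml/features
```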
Currently the chooseurl command has completion for the product name but not for the version number. We plan to add completion for the version number in one of the next Karaf releases by evaluating the versions available in the maven repositories.
Btw, for camel and cxf you still have to replace etc/jre.properties with the etc/jre.properties.cxf file to change some package exports of the system bundle.
This wednesday on the 4th of April I will give a talk about the open source integration framework Apache Camel at the Java User Group in Karlsruhe. I will start with an overview of Camel and give some insight in the Camel Architecture. The main part of the Talk will be live coding showing how easy integration can be with the Camel DSL.
See the webpage of the JUG Karlsruhe for some more details: http://jug-karlsruhe.mixxt.de/networks/events/show_event.55045
Camel has many options for deployment. If I have the freedom of choice I prefer to run Camel on Karaf, but the typical case at customers is that they have a certain app server and we have to fit in. In this case the platform was JBoss 5.1. Before Camel 2.8 this was quite complicated, as camel tried to scan for type converters on the classpath and that part failed because of the JBoss class loader. I used Camel 2.8.4, so this was no issue except for a little problem I will come back to later.
Packaging Camel integrations as a war and using camel-servlet.
The most suitable deployment option on JBoss is to package your integration in a war archive and install it e.g. using the deploy folder. The camel-example-servlet-tomcat is the best starting point for that kind of project. It shows how to create wars using maven and how to start camel from a spring application context in a servlet environment. If you know the older camel servlet examples you will notice that the way camel is started has changed recently: in older versions you had to configure the spring context xml in the camel servlet, which is a rather uncommon way to start spring. The current example shows how to start camel with the default spring context loader listener, which is a much better solution.
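A minimal web.xml for this setup could look as follows. The listener and servlet classes are the standard Spring and Camel ones; the context file location is an assumption for illustration:

```xml
<web-app>
  <!-- Standard Spring bootstrap: loads the application context containing the Camel context -->
  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:camel-context.xml</param-value>
  </context-param>
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>

  <!-- Camel servlet that exposes servlet: endpoints under /camel/* -->
  <servlet>
    <servlet-name>CamelServlet</servlet-name>
    <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>CamelServlet</servlet-name>
    <url-pattern>/camel/*</url-pattern>
  </servlet-mapping>
</web-app>
```

With this layout spring starts like in any other web application and camel simply lives inside the spring context.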
Installing and using the ActiveMQ connection factory using jndi
The easiest way to install a connection factory for a camel integration is to define it in your spring context. This has some drawbacks though. One is that you then depend directly on ActiveMQ and cannot simply replace it with another broker. The other is that the developer has access to the password of the connection factory. Of course you can extract the password into a property file, but it is still not ideal. So the preferred way to find a connection factory in an app server environment is to look it up in jndi. In the spring context this is quite simple using the jee namespace:
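A sketch of the lookup in the spring context. The jndi name activemq/ConnectionFactory is an assumption and has to match the name used when binding the factory in JBoss:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd">

  <!-- Look up the connection factory the app server has bound in jndi -->
  <jee:jndi-lookup id="jmsConnectionFactory" jndi-name="activemq/ConnectionFactory"/>

  <!-- Wire it into the camel jms component -->
  <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="jmsConnectionFactory"/>
  </bean>
</beans>
```

This way the spring context contains no broker specifics and no credentials at all.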
A bigger problem is how to install the connection factory in JBoss so that it is available in jndi. There are two ways to achieve this. One is to install it as a JEE resource adapter (see http://activemq.apache.org/jboss-integration.html). I don't like this solution too much as it is quite complicated and requires the special activemq-ra.rar.
After a lot of searching on the net I found a nice solution.
This JBoss mbean description initializes an ActiveMQXAConnectionFactory and installs it in the jndi context. It obviously needs the activemq-core-5.5.0.jar, which I simply installed in the lib dir of the default server. The problem with this is of course that the jar is then on the classpath of all wars you install as well. So if some activemq specialist knows a better solution that puts it only on the classpath of the mbean config, I would really like to know how to do this.
When I experimented with this setup I first used the activemq-all-5.5.0.jar. The problem was that it contains an older camel version, so when I installed my camel war it was not able to start because of problems loading type converters from this jar. So remember to only use the core jar.
To make it easy for you to test this out yourself I have put the code of a simple producer and consumer project on github.
To install do the following steps:
- download activemq-5.5.0, extract and start it
- download jboss 5.1 and extract it
- checkout or download the example projects from https://github.com/cschneider/cameljbossha
- build them using mvn clean install
- copy jms-jboss-beans.xml to the default/deploy folder
- copy activemq-core-5.5.0.jar to the default/lib folder (the core jar, not activemq-all, see above)
- copy the war files /consumer/target/consumer-1.0.0.war and /producer/target/producer-1.0.0.war to the default/deploy folder
- start jboss
The producer offers a servlet where we can trigger a message. So open a browser and go to http://localhost:8080/producer-1.0.0/camel/tojms. The JBoss log will show that the request is handled and a jms message is sent. The consumer will pick up the message and write a log entry "Message received from jms".
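A sketch of what the two routes behind this could look like in the spring DSL. The queue name is an assumption, the /tojms path matches the URL above; see the github project for the exact definitions:

```xml
<!-- producer war: servlet endpoint that forwards each request to jms -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="servlet:///tojms"/>
    <to uri="jms:queue:test"/>
  </route>
</camelContext>

<!-- consumer war: picks up the message from the queue and logs it -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="jms:queue:test"/>
    <log message="Message received from jms"/>
  </route>
</camelContext>
```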
We have seen how to build war projects with Apache Camel and how the usage of the camel servlet has changed recently; this part is independent of JBoss. We have also seen that current Camel versions can be deployed on JBoss without any special tweaks. Finally, we learned how to reference a connection factory in jndi and how to install it in JBoss.
The projects producer and consumer will be reused in my next post where I look into concepts for high availability with Camel and ActiveMQ.
CXF 2.6.0 will bring a lot of improvements for deployment in OSGi. Till now CXF was packaged as one OSGi bundle, either with all features or with a minimal feature set. Thanks to Dan Kulp, CXF is now delivered as individual bundles, so it can be installed with only the needed features. Besides the smaller size in many use cases, this also means we have fewer optional dependencies, which used to make installation difficult. Each bundle defines the imports it really needs, which makes it much easier to get the dependencies right. Of course the Karaf feature file will still be provided to make it easy to install CXF in Apache Karaf.
Based on the work of Dan I recently started to optimize the imports of the typical bundles most people will use from CXF. At the start we had many dependencies like spring, velocity, neethi, .., that I felt should not be needed and that made CXF quite big. By refactoring some of the modules I was able to slim these down to the bare minimum. The current code on trunk already reflects these changes.
If you want to try this yourself you can easily install the snapshot of cxf in karaf 2.2.5. As the feature file is not yet changed I uploaded a gist of the commands you need to execute. Remember to also use the jre.properties.cxf for karaf to disable some default java apis so CXF can replace them with newer versions.
So after this install the karaf list -u command shows the following bundles:
This installation of CXF is ready for SOAP/HTTP and REST with JAX-WS and JAXB on the java side which reflects what most people will need.
To test the features I recommend installing the example from my Karaf Tutorial about CXF.