Karaf provides enterprise-ready features such as a shell console, remote access, hot deployment, dynamic configuration, and many more. Karaf subprojects provide additional features such as clustering, complete monitoring and alerting, and an application repository.
Let's start!
Karaf Runtime is a modern and polymorphic application runtime.
Karaf Cellar is a clustering solution for Karaf.
Karaf Cave is an artifact repository.
Karaf Decanter provides a ready-to-use monitoring solution.
Karaf Runtime is a modern and polymorphic application runtime. It is lightweight, powerful, and enterprise-ready.
Polymorphic means that Karaf can host any kind of application: WAR, Spring, OSGi, and much more.
Karaf can be used as a standalone immutable runtime, or as a mutable runtime that you can manage remotely.
View on GitHub »
You can drop your applications directly into the Karaf deploy folder, and they will be deployed for you automatically. You can also create your own deployer.
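Hot deployment can be sketched as a simple file copy; the installation path and jar name below are example values, not part of a real Karaf distribution:

```shell
# A minimal sketch of hot deployment: any artifact copied into
# KARAF_HOME/deploy is picked up by the running container.
# KARAF_HOME and the jar name are example values.
KARAF_HOME=./apache-karaf
mkdir -p "$KARAF_HOME/deploy"
touch my-app-1.0.jar                      # stand-in for a real bundle jar
cp my-app-1.0.jar "$KARAF_HOME/deploy/"   # Karaf deploys it automatically
```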
All configurations (for Karaf itself and for applications) are located in the Karaf etc folder. All changes in the configuration files are taken into account on the fly: no need to restart.
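For example, raising the log level is just an edit to the logging configuration file in etc; this excerpt assumes the default log4j2-based setup of recent Karaf versions:

```
# etc/org.ops4j.pax.logging.cfg (excerpt)
# Changing the level is picked up on the fly, no restart needed
log4j2.rootLogger.level = DEBUG
```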
Karaf uses a centralized logging back end, supporting popular frameworks (log4j, slf4j, logback, ...).
Karaf provides a very convenient way to provision applications: the Karaf Features.
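A feature groups the bundles and configuration an application needs into one installable unit, described in a features XML repository. The feature name, bundle coordinates, and versions below are illustrative examples:

```xml
<!-- A minimal features repository sketch; names and versions are examples -->
<features name="my-features" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
  <feature name="my-app" version="1.0.0">
    <bundle>mvn:com.example/my-app-api/1.0.0</bundle>
    <bundle>mvn:com.example/my-app-core/1.0.0</bundle>
  </feature>
</features>
```

Once the repository is registered (`feature:repo-add`), the whole application installs with a single `feature:install my-app`.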
Karaf provides a complete Unix-like shell console, allowing you to manage your container and applications. This shell supports completion, contextual help, key bindings, and much more.
Karaf embeds an SSH server, allowing you to remotely access the shell using any SSH client. Karaf also provides a JMX MBean server, allowing you to manage the container using any JMX client.
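A remote session looks like a regular SSH login followed by normal shell commands (8101 and karaf/karaf are the out-of-the-box defaults):

```
# Connect to a running Karaf instance with any SSH client
ssh -p 8101 karaf@localhost

# Then use the shell remotely, for example:
karaf@root()> bundle:list
karaf@root()> feature:list
```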
In addition to SSH and JMX, you can also manage Karaf Runtime using a simple browser thanks to the Karaf WebConsole.
Karaf fully supports the JAAS-based security framework. It also supports a complete RBAC system for shell commands and JMX objects. You can use this security layer directly in your own applications.
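Users, groups, and roles for the default JAAS realm live in a plain properties file; this excerpt reflects the default shipped configuration of recent Karaf versions:

```
# etc/users.properties (excerpt)
# Format: user = password, groups/roles
karaf = karaf,_g_:admingroup
_g_\:admingroup = group,admin,manager,viewer,systembundles
```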
You can manage several child instances inside the Karaf Runtime root instance. It's a very convenient way to test applications or configurations without impacting your existing running instances.
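Child instances are driven entirely from the shell with the `instance:*` commands; the instance name is an example:

```
# Create, start, and connect to a child instance named "test"
karaf@root()> instance:create test
karaf@root()> instance:start test
karaf@root()> instance:list
karaf@root()> instance:connect test
```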
Manage your Docker containers and images via the Karaf shell console, provisioning a running instance in the simplest way.
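A sketch of what this looks like from the console, assuming the Karaf docker feature is installed; the exact `docker:*` command set depends on your Karaf version:

```
# Install the docker support, then drive Docker from the Karaf shell
karaf@root()> feature:install docker
karaf@root()> docker:ps
karaf@root()> docker:provision
```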
Do you have a bunch of Karaf Runtime instances running? Do you want to manage those instances as one, spreading the configuration, deployment, etc.? Karaf Cellar is for you.
Karaf Cellar is a clustering solution for Karaf. It allows you to manage multiple instances, with synchronization between the instances.
Each Karaf node is discovered automatically by the others, supporting different mechanisms (multicast, unicast, whiteboard, ...).
You can target the synchronization on a subset of nodes using cluster groups.
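Cluster groups are managed with the `cluster:*` shell commands; the group name, node id, and feature name below are examples, and the node id format depends on your Cellar setup:

```
# Create a cluster group, join a node to it, then target it
karaf@root()> cluster:group-create test-group
karaf@root()> cluster:group-join test-group node1:5701
karaf@root()> cluster:feature-install test-group my-feature
```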
Cellar is able to synchronize and distribute applications (features, bundles, non-OSGi applications), configuration, or local events.
Cellar supports DOSGi (Distributed OSGi), allowing you to implement remote calls between your applications.
Karaf Cave is an implementation of the OSGi Repository specification. It can be used by the Karaf Features Resolver to provide resources, dealing with the requirements and capabilities of artifacts.
Cave includes a pluggable storage back end.
Cave is able to generate the metadata for a complete repository.
Cave provides a complete Maven repository support.
In addition to hosting a complete repository, Cave is able to proxy an existing repository, adding the metadata.
Do you need a monitoring solution for Karaf and related projects? Do you need a BAM (Business Activity Monitoring) platform for your applications? Karaf Decanter can be very convenient for you!
Decanter provides a ready-to-use monitoring solution. It's also completely extensible and customizable.
You can learn more about Karaf Decanter in this ApacheCon talk slideshow by Jean-Baptiste Onofré: View presentation
The collectors harvest the monitored data (JMX metrics, log messages, ...).
A dispatcher (powered by OSGi EventAdmin) forwards the collected data to the appenders and the SLA.
The appenders receive the collected data and store it in a back end (Elasticsearch, Cassandra, JDBC, ...).
SLA (Service Level Agreement) is a special kind of appender that checks the collected data and raises an alert (to a back end) when a threshold is breached.
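A pipeline is assembled simply by installing one collector feature and one appender feature; the feature names below match the Decanter distribution, assuming the Decanter features repository has been registered:

```
# Assemble a minimal Decanter pipeline from the Karaf shell
karaf@root()> feature:repo-add decanter
karaf@root()> feature:install decanter-collector-log
karaf@root()> feature:install decanter-appender-elasticsearch
```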