This repository is deprecated. We have moved the build code into the OpenNMS repository. The publish and build workflow is now integrated as part of our CI/CD workflow.
We will archive this repository with Horizon 25 and will no longer maintain it.
- Docker Container Image Repository: DockerHub
- Issue- and Bug-Tracking: JIRA
- Source code: GitHub
- Chat: Web Chat
- bleeding, daily bleeding edge version of Horizon 24 using OpenJDK 11
- 24.1.0, last stable release of Horizon using OpenJDK 11
This repository provides snapshots for Horizon as Docker images. The image provides the Horizon core monitoring services and the web application.
It is recommended to use docker-compose
to build a service stack using the official PostgreSQL image.
If you already have a PostgreSQL database running, you can provide the database configuration in the .opennms.env and .postgres.env environment files; otherwise, the users and the database will be created for you.
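As an illustration, the settings in the two environment files might look like this. The variable names are taken from the Environment Variables section below; the values and the split between the two files are placeholders and assumptions, not a definitive layout:

```shell
# .postgres.env -- connection settings for an existing PostgreSQL instance
POSTGRES_HOST=my-database-host
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme

# .opennms.env -- database settings for OpenNMS Horizon
OPENNMS_DBNAME=opennms
OPENNMS_DBUSER=opennms
OPENNMS_DBPASS=changeme
```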
By default, data is persisted on your Docker host using the local volume driver for the following volumes:
# PostgreSQL database
psql.data:
driver: local
# OpenNMS Horizon RRD files, logs and generated PDF reports
opennms.data:
driver: local
# OpenNMS Horizon configuration files
opennms.etc:
driver: local
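Since the data lives in named Docker volumes, a simple backup can be made by archiving a volume's contents from a throwaway container. A sketch, assuming the busybox image is available; note that docker-compose usually prefixes volume names with the project name, so check docker volume ls for the exact name:

```shell
# Archive the OpenNMS Horizon configuration volume into the current directory
docker run --rm \
  -v opennms.etc:/volume:ro \
  -v "$(pwd)":/backup \
  busybox tar czf /backup/opennms-etc-backup.tar.gz -C /volume .
```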
OpenNMS Horizon configuration files have to be edited manually. You can add your own configuration files by providing an etc-overlay directory. On startup these files overwrite the default configuration.
- ./etc-overlay:/opt/opennms-etc-overlay
If you prefer to keep your OpenNMS Horizon configuration in a specific directory on your Docker host, you can mount a directory with your configuration like this:
volumes:
- ./myHorizonConfig:/opt/opennms/etc
If the directory is empty, it will be initialized with a pristine default configuration from /opt/opennms/share/etc-pristine.
IMPORTANT: Take care with configurations that can be changed through the Web UI and are persisted on the file system, e.g. users.xml, groups.xml, surveillance-categories.xml, snmp-config.xml, etc.
- docker in a current stable version
- docker-compose in a current stable version
- git
git clone https://github.com/opennms-forge/docker-horizon-core-web.git
cd docker-horizon-core-web
docker-compose up -d
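After docker-compose up -d returns, you can check the state of the stack and follow the startup of the web application; ps and logs are standard docker-compose subcommands, and the opennms service name matches the compose example later in this document:

```shell
# Show the state of all services in the stack
docker-compose ps

# Follow the OpenNMS Horizon log until the web application on port 8980 is up
docker-compose logs -f opennms
```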
The web application is exposed on TCP port 8980. You can log in with the default user admin and password admin. Please change the default password to a secure one immediately.
To get help for all available container options, just run:
docker run --rm opennms/horizon-core-web
It is easy to add Java options to control the behavior of the JVM for performance tuning or debugging.
The environment variable JAVA_OPTS is passed to the Java command and can be used to extend or overwrite JVM options.
IMPORTANT: To give you more control, the Java command is invoked natively in docker-entrypoint.sh, and Java options in opennms.conf are not evaluated. The Java process runs with PID 1 in the foreground.
Used in an environment file:
env_file:
- .java.env
cat .java.env
JAVA_OPTS=-XX:+UseParallelGC -XX:+PrintGCDetails -XX:+PrintFlagsFinal
Used in docker-compose service environment definition:
opennms:
container_name: opennms.core.web
image: opennms/horizon-core-web:latest
environment:
- JAVA_OPTS=-XX:+UseParallelGC -XX:+PrintGCDetails -XX:+PrintFlagsFinal
To control and isolate the resource usage of processes, the kernel feature cgroups (Control Groups) is used. In combination with Java there are some additional things to take care of regarding the Maximum Heap Size and limiting the memory usage of the container.
By default, JVM ergonomics calculates the Maximum Heap Size based on the Docker host memory and not based on the memory limit set with cgroups.
To ensure the JVM calculates the Maximum Heap Size correctly, you have two options:
- Set the correct Maximum Heap Size manually with -Xmx, see the section Set Java Options above
- If no -Xmx option is set, you can have the Maximum Heap Size calculated automatically by enabling the experimental cgroup-aware feature with JAVA_OPTS=-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
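The second option can be sketched in a compose service definition. mem_limit is the standard docker-compose v2 key for a container memory limit; the 2g value is only an example:

```yaml
opennms:
  image: opennms/horizon-core-web:latest
  # Limit the container memory via cgroups
  mem_limit: 2g
  environment:
    # Let the JVM derive the Maximum Heap Size from the cgroup limit above
    - JAVA_OPTS=-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
```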
As of Java SE 8u131 the JVM is Docker-aware and handles Docker CPU limits transparently.
As long as -XX:ParallelGCThreads or -XX:CICompilerCount are not specified, the JVM applies the Docker CPU limit as the number of CPUs and calculates the number of GC and JIT compiler threads just as it would on bare metal.
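Correspondingly, a CPU limit can be set on the container so that the JVM sizes its GC and JIT thread pools for that limit. With plain docker run this is the --cpus flag; the -s argument is the entry point option described below:

```shell
# Limit the container to 2 CPUs; the JVM will size GC/JIT threads for 2 cores
docker run --rm --cpus=2 opennms/horizon-core-web:latest -s
```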
The entry point script is used to control starting behavior:
-f: Apply overlay configuration if it exists and just start OpenNMS Horizon
-h: Show help
-i: If necessary initialize database, create pristine configuration, do an upgrade and apply the overlay config but do *not* start OpenNMS Horizon
-s: Same as -i but start OpenNMS Horizon, this should be your default
-n: Initialize Newts on Cassandra and the PostgreSQL database, do *not* start OpenNMS Horizon
-c: Same as -n but start OpenNMS Horizon, this should be your default when you want to use Newts on Cassandra
-t: Just test your configuration
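In a compose file, the entry point option can be selected with the standard command key, assuming arguments after the image name are passed through to the entry point script as they are with docker run:

```yaml
opennms:
  image: opennms/horizon-core-web:latest
  # Pass entry point option -f: apply overlay config and just start Horizon
  command: ["-f"]
```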
If you want to enforce an update, create a /opt/opennms/etc/do-upgrade file.
Starting with -i or -s will run the install -dis command once to update the configuration and database schema.
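One way to create the do-upgrade marker without entering the container is through the etc-overlay directory described below, since its contents are copied into /opt/opennms/etc on startup:

```shell
# Create an empty do-upgrade marker in the local overlay directory;
# on startup it is copied to /opt/opennms/etc/do-upgrade
mkdir -p etc-overlay
touch etc-overlay/do-upgrade
```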
All options which do upgrades or start OpenNMS Horizon verify that the configuration is valid and passes the configuration test.
If you just want to maintain custom configuration files outside of OpenNMS, you can use an etc-overlay directory. All files in this directory are just copied into /opt/opennms/etc in the running container. You can just mount a local directory like this:
volumes:
- ./etc-overlay:/opt/opennms-etc-overlay
If you just want to maintain custom configuration files for the Jetty application container, you can use a jetty-overlay directory. All files in this directory are just copied into /opt/opennms/jetty-webapps/opennms/WEB-INF in the running container. You can just mount a local directory like this:
volumes:
- ./jetty-overlay:/opt/opennms-jetty-webinf-overlay
To overlay arbitrary configuration files/directory structures, you can use an opennms-overlay directory. The contents of this directory are copied into /opt/opennms/ in the running container. You can just mount a local directory like this:
volumes:
- ./opennms-overlay:/opt/opennms-overlay
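For example, an opennms-overlay directory simply mirrors the target layout under /opt/opennms. A sketch that stages a file destined for /opt/opennms/etc; the XML content is only a placeholder, not a working configuration:

```shell
# Build an overlay tree that mirrors /opt/opennms
mkdir -p opennms-overlay/etc

# Stage a file that will end up as /opt/opennms/etc/snmp-config.xml
cat > opennms-overlay/etc/snmp-config.xml <<'EOF'
<!-- placeholder: put your real snmp-config.xml content here -->
EOF
```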
This image allows you to test whether configuration files are valid XML and can be loaded on startup. The test is executed automatically on startup, but it can also be run as a single-shot command.
Get the usage output of the config tester with:
docker run --rm opennms/horizon-core-web:latest -t
Some examples of how to use the config tester with this image:
Test all configuration files:
docker run --rm opennms/horizon-core-web:latest -t -a
Test just a specific configuration file with verbose output (-v):
docker run --rm opennms/horizon-core-web:latest -t -v snmp-config.xml
Test configuration files with an etc-overlay directory:
docker run --rm -v $(pwd)/etc-overlay:/opt/opennms-etc-overlay opennms/horizon-core-web:latest -t -v snmp-config.xml
- POSTGRES_HOST: PostgreSQL database host, default: database
- POSTGRES_PORT: Port to access the PostgreSQL database, default: 5432
- POSTGRES_USER: PostgreSQL admin user, default: postgres
- POSTGRES_PASSWORD: PostgreSQL admin password, default: postgres
- OPENNMS_DBNAME: Database name for OpenNMS Horizon, default: opennms
- OPENNMS_DBUSER: User to access the OpenNMS Horizon database, default: opennms
- OPENNMS_DBPASS: Password for the OpenNMS Horizon database user, default: opennms
- OPENNMS_KARAF_SSH_HOST: Listening address for the Karaf SSH port, default: 0.0.0.0
- OPENNMS_KARAF_SSH_PORT: SSH port for Karaf, default: 8101
Using a Cassandra cluster:
- OPENNMS_CASSANDRA_HOSTNAMES: Host name or IP address of the Cassandra cluster; a comma-separated list is also accepted
- OPENNMS_CASSANDRA_KEYSPACE: Keyspace to persist performance data in, default: newts
- OPENNMS_CASSANDRA_PORT: Cassandra port, default: 9042
- OPENNMS_CASSANDRA_USERNAME: User name for accessing Cassandra
- OPENNMS_CASSANDRA_PASSWORD: Password for accessing Cassandra
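Put together, a service environment for Newts on Cassandra might look like the following compose fragment. Host names and credentials are placeholders; -c is the entry point option described above that initializes Newts on Cassandra and then starts OpenNMS Horizon:

```yaml
opennms:
  image: opennms/horizon-core-web:latest
  environment:
    - OPENNMS_CASSANDRA_HOSTNAMES=cassandra-1,cassandra-2
    - OPENNMS_CASSANDRA_KEYSPACE=newts
    - OPENNMS_CASSANDRA_PORT=9042
    - OPENNMS_CASSANDRA_USERNAME=cassandra
    - OPENNMS_CASSANDRA_PASSWORD=cassandra
  # Initialize Newts on Cassandra and start OpenNMS Horizon
  command: ["-c"]
```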
By default the OpenNMS Horizon image will run using RRDTool for performance data storage. However OpenNMS Horizon can also be configured to run on Cassandra using the Newts time series schema.
The configuration options can be found in the Environment Variables section.
The opennms-cassandra-helm.yml file is provided to illustrate how to run OpenNMS Horizon with a small single Cassandra node on the same machine.
- MIRROR_HOST: Server with RPM packages, default: yum.opennms.org
- OPENNMS_VERSION: Version of the OpenNMS Horizon RPM files, default: stable
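Assuming these two settings are declared as build arguments in the Dockerfile (an assumption; they may instead be plain environment variables), a custom image could be built like this:

```shell
# Build the image against a custom RPM mirror and version;
# treating MIRROR_HOST/OPENNMS_VERSION as build args is an assumption
docker build \
  --build-arg MIRROR_HOST=yum.opennms.org \
  --build-arg OPENNMS_VERSION=stable \
  -t my/horizon-core-web .
```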