Applications
Starting Apps for Development
To run a component, csw-location-server must be running. Likewise, csw-location-server must be running before the Event Service or Configuration Service can be started.
A shell script has been provided to make the starting of services more convenient for developers. This script will start the Location Service, and then a set of CSW services that a developer may make use of. All services are started by default, but specific services can be started using the command line options (see below).
Assuming that the developer has downloaded csw-apps-<some-version>.zip from csw releases and unzipped it, there are four folders in csw-apps-<some-version>:
- bin
- lib
- logging_aggregator
- conf
All the shell scripts provided by CSW reside in the bin folder. The script referred to in this section is named csw-services.sh. Go to the bin folder and type ./csw-services.sh --help. This will list all the options the script accepts.
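A minimal sketch of that step, with <some-version> standing in for the actual release version:

cd csw-apps-<some-version>/bin
./csw-services.sh --help    # prints all available options for the script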
The environment variables needed to start CSW services are documented in environment variables.
The script always starts csw-location-server first, regardless of the options provided. It then starts all of the CSW services, unless one or more of the following options are specified:
- --auth : if provided, starts the Authentication Service
- --config : if provided, starts the Configuration Service
- --event : if provided, starts the Event Service
- --alarm : if provided, starts the Alarm Service
- --database : if provided, starts the Database Service
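For example, a sketch of starting only the Event and Alarm Services using the options listed above (the Location Service still starts first):

./csw-services.sh --event --alarm    # starts csw-location-server, then the Event and Alarm Services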
While starting the Database Service, make sure that:
- the PGDATA environment variable is set to the Postgres data directory where Postgres is installed, e.g. for Mac: "/usr/local/var/postgres", and
- there is a password set for the valid Postgres user. If not, go to the Postgres shell via psql and run ALTER USER <username> WITH PASSWORD '<mypassword>';. If there is any problem entering the Postgres shell, go to the conf folder -> database_service -> pg_hba.conf and change password to trust. Try entering the Postgres shell again and set the password. Once it is set successfully, revert trust back to password in pg_hba.conf and run the Database Service via csw-services.sh.
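A sketch of the password-setting step described above, assuming postgres is the valid Postgres user (substitute your own user and replace the <mypassword> placeholder):

psql -U postgres -c "ALTER USER postgres WITH PASSWORD '<mypassword>';"    # sets a password for the example user in a single psql invocation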
With this, the component code is now ready to connect to the services started by csw-services.sh.
Starting Elastic Logging Aggregator for Development
The Elastic Stack (Elasticsearch, Logstash, Kibana and Filebeat) is used to aggregate logs generated from TMT applications (Scala/Java/Python/C++/C) and the CSW services mentioned in the previous section. For development purposes, Docker Compose is used. Hence, make sure that the latest Docker setup is installed and running before starting the Elastic Stack. To learn more about how the Elastic Stack works, please refer to Logging Aggregator.
For the host setup, follow these steps:
- Install Docker version 18.09+
- Install Docker Compose version 1.24.0+
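To confirm that the host meets these version requirements, the installed versions can be checked from a terminal:

docker --version            # should report 18.09 or later
docker-compose --version    # should report 1.24.0 or later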
On distributions which have SELinux enabled out-of-the-box, you will need to either re-context the files or set SELinux into Permissive mode in order for docker-elk to start properly. For example, on Red Hat and CentOS, the following will apply the proper context:
$ chcon -R system_u:object_r:admin_home_t:s0 docker-elk/
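Alternatively, SELinux can be switched to Permissive mode temporarily (this does not persist across reboots):

$ sudo setenforce 0
$ getenforce    # should now report "Permissive"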
To know more about running Docker for Mac, please refer to this link. For Windows, ensure that the "Shared Drives" feature is enabled for the C: drive (Docker for Windows > Settings > Shared Drives). See Configuring Docker for Windows Shared Drives (MSDN Blog).
Assuming that the developer has downloaded csw-apps-<some-version>.zip from csw releases and unzipped it, there are four folders in csw-apps-<some-version>:
- bin
- lib
- logging_aggregator
- conf
Go to logging_aggregator/dev and run:
docker-compose build --no-cache
docker-compose up --force-recreate
This will start Filebeat, Elasticsearch, Logstash and Kibana in Docker containers. Note that csw-services.sh will generate all log files under /tmp/csw/logs/ and Filebeat will watch for them there.
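As a quick sanity check once csw-services.sh is running, the generated log files can be inspected directly (the file name below is a placeholder):

ls /tmp/csw/logs/                        # log files generated by the CSW services
tail -f /tmp/csw/logs/<some-log-file>    # follow one of the files Filebeat is shipping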
Once the Docker containers are up, open a browser and go to http://localhost:5601/ to use Kibana. Then:
- Go to Management -> Kibana -> Index Patterns and create an index pattern as per your requirement.
- Go to Discover, select the index pattern you created, and explore.
To use a different Elastic Stack version than the one currently available in the repository, simply change the version in the logging_aggregator/dev/.env file, and rebuild the stack with:
docker-compose build --no-cache
docker-compose up --force-recreate
Always pay attention to the upgrade instructions for each individual component before performing a stack upgrade.