Configuring the Elasticsearch Log Plug-In to Write Container Logs to Elasticsearch

Learn how to install and use the Akana Elasticsearch Log Plug-In to write log files to the Elasticsearch server.

Overview

The Akana Elasticsearch Log Plug-In can be installed on each of the Akana containers. It optionally captures Akana container exception data and access log information and writes it to Elasticsearch.

You can use Kibana to view the data pushed into Elasticsearch, or query it directly. The data is returned as a JSON document.
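
For example, assuming the default index name (akana) and a local, unsecured Elasticsearch server, a query such as the following returns the five most recent log documents:

curl -X GET "http://localhost:9200/akana*/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{ "query": { "match_all": {} }, "sort": [ { "@timestamp": "desc" } ], "size": 5 }'

Each hit's _source field contains one log event as a JSON document.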

The com.soa.log configuration category lets you configure the platform to write the container logs to the Elasticsearch server rather than to the default location on disk.

Installation

The Akana Elasticsearch Log Plug-In is part of the Akana Option packs and can be downloaded from the Support Download site. It can be installed in any of the Akana containers.

To download: go to the Rogue Wave Support Center (https://library.roguewave.com), and then click Product Downloads > Akana - Product Downloads and choose your version. Option packs are in the bottom section of the page; download the version that matches your installation.

Unzip the file, and then copy everything inside the ZIP file's /lib/ folder (folders and files) to the /lib/ folder of your installation. For example: \lib\optionpacks\2019.1.3.
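
For example, on a Linux installation (the archive name and directory paths below are placeholders for your actual file names and locations):

unzip akana-option-pack-2019.1.3.zip -d /tmp/optionpack
cp -r /tmp/optionpack/lib/* /opt/akana/lib/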

You can then install the feature using the Akana Administration Console.

Any container where the Akana Elasticsearch Log Plug-In is installed must be able to access the Elasticsearch server.
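
A quick way to verify this is to request the Elasticsearch root endpoint from the container host (the URL below assumes a local, unsecured server; substitute your own server address):

curl http://localhost:9200

A reachable server responds with a JSON document that includes the cluster name and version.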

Configuring the Elasticsearch Log Plug-In to write container logs to Elasticsearch (com.soa.log)

After installing the plug-in, you can configure the container so that, instead of writing the server logs to disk, it writes them to the Elasticsearch server.

In the Akana Administration Console, on the Configuration tab, under Configuration Categories, select com.soa.log (the configuration PID is com.soa.log).

Note: By default, the rootLogger.appenderRef.root.ref property is set to rolling, which writes logs to the rolling log file on disk. If you set rootLogger.appenderRef.root.ref to rolling and also set rootLogger.appenderRef.elastic.ref to elastic, the logs are written to both the log file and the Elasticsearch server.
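
For example, to write logs to both destinations, the com.soa.log category would include both appender references (a minimal sketch; the elastic appender itself is defined by the properties in the table that follows):

rootLogger.appenderRef.root.ref=rolling
rootLogger.appenderRef.elastic.ref=elastic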

The required properties and their default values are shown in the table below.

Property Default Value
appender.elastic.batchDelivery.batchSize ${ES_LOG_BATCH_SIZE|2048}
appender.elastic.batchDelivery.deliveryInterval ${ES_LOG_BATCH_INTERVAL|3000}
appender.elastic.batchDelivery.indexTemplate.name ${ES_LOG_INDEX_NAME|akana}
appender.elastic.batchDelivery.indexTemplate.sourceString ${ES_LOG_INDEX_TEMPLATE|{ "index_patterns": [ "akana*" ], "mappings": { "properties": { "@timestamp": { "type": "date", "format": "date_time" }, "log.logger": { "type": "keyword", "index": false }, "message": { "type": "keyword", "index": false }, "process.thread.name": { "type": "keyword", "index": false }, "log.level": { "type": "keyword", "index": false }, "host.name": { "type": "keyword", "index": false }, "container.key": { "type": "keyword", "index": false }, "container.type": { "type": "keyword", "index": false }, "error.type": { "type": "keyword", "index": false }, "error.message": { "type": "keyword", "index": false }, "error.stack_trace": { "type": "keyword", "index": false } } } } }
appender.elastic.batchDelivery.indexTemplate.type IndexTemplate
appender.elastic.batchDelivery.objectFactory.auth.credentials.password ${ES_LOG_SERVER_PASSWORD|''}
appender.elastic.batchDelivery.objectFactory.auth.credentials.type BasicCredentials
appender.elastic.batchDelivery.objectFactory.auth.credentials.username ${ES_LOG_SERVER_USERNAME|''}
appender.elastic.batchDelivery.objectFactory.auth.type XPackAuth
appender.elastic.batchDelivery.objectFactory.connTimeout ${ES_LOG_SERVER_CONNECT_TIMEOUT|500}
appender.elastic.batchDelivery.objectFactory.itemSourceFactory.initialPoolSize 2
appender.elastic.batchDelivery.objectFactory.itemSourceFactory.itemSizeInBytes 5120000
appender.elastic.batchDelivery.objectFactory.itemSourceFactory.monitored false
appender.elastic.batchDelivery.objectFactory.itemSourceFactory.poolName logItemPool
appender.elastic.batchDelivery.objectFactory.itemSourceFactory.resizeTimeout 100
appender.elastic.batchDelivery.objectFactory.itemSourceFactory.type PooledItemSourceFactory
appender.elastic.batchDelivery.objectFactory.mappingType ${ES_LOG_SERVER_MAPPING_TYPE|_doc}
appender.elastic.batchDelivery.objectFactory.maxTotalConnections ${ES_LOG_SERVER_MAX_CONNECTIONS|8}
appender.elastic.batchDelivery.objectFactory.readTimeout ${ES_LOG_SERVER_READ_TIMEOUT|20000}
appender.elastic.batchDelivery.objectFactory.serverUris ${ES_LOG_SERVER_URLS|http://localhost:9200}
appender.elastic.batchDelivery.objectFactory.type HCHttp
appender.elastic.batchDelivery.type AsyncBatchDelivery
appender.elastic.indexNameFormatter.indexName ${ES_LOG_INDEX_NAME|akana}
appender.elastic.indexNameFormatter.pattern ${ES_LOG_INDEX_NAME_PATTERN|yyyy-MM-dd}
appender.elastic.indexNameFormatter.type RollingIndexName
appender.elastic.layout.container.key container.type
appender.elastic.layout.container.type EventTemplateAdditionalField
appender.elastic.layout.container.value $${container.type}
appender.elastic.layout.eventTemplate ${ES_LOG_EVENT_TEMPLATE|{ "@timestamp": { "$resolver": "timestamp", "pattern": { "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "timeZone": "UTC" } }, "ecs.version": "1.2.0", "log.level": { "$resolver": "level", "field": "name" }, "message": { "$resolver": "message", "stringified": true }, "process.thread.name": { "$resolver": "thread", "field": "name" }, "log.logger": { "$resolver": "logger", "field": "name" }, "error.type": { "$resolver": "exception", "field": "className" }, "error.message": { "$resolver": "exception", "field": "message" }, "error.stack_trace": { "$resolver": "exception", "field": "stackTrace", "stackTrace": { "stringified": true } } } }
appender.elastic.layout.hostname.key hostname
appender.elastic.layout.hostname.type EventTemplateAdditionalField
appender.elastic.layout.hostname.value $${container.hostname}
appender.elastic.layout.itemSourceFactory.initialPoolSize 10000
appender.elastic.layout.itemSourceFactory.itemSizeInBytes 512
appender.elastic.layout.itemSourceFactory.monitored false
appender.elastic.layout.itemSourceFactory.poolName batchItemPool
appender.elastic.layout.itemSourceFactory.resizeTimeout 100
appender.elastic.layout.itemSourceFactory.type PooledItemSourceFactory
appender.elastic.layout.key.key container.key
appender.elastic.layout.key.type EventTemplateAdditionalField
appender.elastic.layout.key.value $${container.key}
appender.elastic.layout.maxStringLength 999999
appender.elastic.layout.stackTraceElementTemplate ${ES_LOG_STACK_TEMPLATE|{ "class": { "$resolver": "stackTraceElement", "field": "className" }, "method": { "$resolver": "stackTraceElement", "field": "methodName" }, "file": { "$resolver": "stackTraceElement", "field": "fileName" }, "line": { "$resolver": "stackTraceElement", "field": "lineNumber" } } }
appender.elastic.layout.type AkanaTemplateLayout
appender.elastic.name elastic
appender.elastic.type Elasticsearch

Note: You can increase the default value of the appender.elastic.layout.maxStringLength property to handle large error messages in cases where log messages are being truncated. However, increasing the value can raise memory usage, which may require increasing the JVM memory of the containers.
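
Most of the default values in the table use the ${ENV_VAR|default} substitution syntax, so you can override a default by setting the corresponding environment variable before starting the container. For example, to point the appender at a remote Elasticsearch cluster (the host name, credentials, and index name below are placeholders):

export ES_LOG_SERVER_URLS=https://es-node1.example.com:9200
export ES_LOG_SERVER_USERNAME=akana-logger
export ES_LOG_SERVER_PASSWORD=changeit
export ES_LOG_INDEX_NAME=akana-prod

With the default RollingIndexName formatter and yyyy-MM-dd pattern, the index name rolls daily, so documents are written to indexes such as akana-2020-02-24.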

Logged Data: container logs

This plug-in pushes the exceptions from the access log file into Elasticsearch. An example message, a 404 exception logged in Elasticsearch, is shown below. For additional information about the error (for example, what may have caused the exception), the container log files residing on the file system are still required, because that information is not pushed into Elasticsearch.

{
  "_index": "request-log",
  "_type": "_doc",
  "_id": "ecd1436d-6552-431b-a79e-07082646bac7",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2020-02-24T23:55:35.524Z",
    "hostName": "ap-ex-swest",
    "applicationName": "AP",
    "containerKey": "1000105",
    "instanceName": "ap-ex-swest",
    "logger": "com.soa.transport.jetty.JettyTransportBinding",
    "level": "ERROR",
    "className": "com.soa.transport.http.HttpException",
    "message": "HTTP Error [404:Not Found] when accessing the URI [Not specified]",
    "stackTrace": "com.soa.transport.http.HttpException: HTTP Error [404:Not Found] when. . . . [abbreviated for display purposes] \n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n\n\t",
    "request": null,
    "tenant": "none",
    "eventId": "264c24d7-5761-11ea-9044-b5a9b2cef593",
    "alertCode": 9022
  },
  "fields": {
    "@timestamp": [
      "2020-02-24T23:55:35.524Z"
    ]
  },
  "sort": [
    1582588535524
  ]
}

Security configuration for Elasticsearch Log Plug-In error log settings

To configure error log settings, log in to the Akana Administration Console and go to Configuration > com.soa.log.

The properties that are required are shown in the following table.

Property Description
appender.elastic.batchDelivery.objectFactory.auth.certInfo.keystorePassword The password for the keystore.
appender.elastic.batchDelivery.objectFactory.auth.certInfo.keystorePath The keystore path that contains the key and certificates.
appender.elastic.batchDelivery.objectFactory.auth.certInfo.truststorePassword The password for the truststore.
appender.elastic.batchDelivery.objectFactory.auth.certInfo.truststorePath The truststore path that contains the key and certificates.
appender.elastic.batchDelivery.objectFactory.auth.certInfo.type The certificate type. Only PKCS12 and JKS keystore formats are supported.
appender.elastic.batchDelivery.objectFactory.auth.credentials.password The password for the indicated username.
appender.elastic.batchDelivery.objectFactory.auth.credentials.type The credentials type; set to BasicCredentials for username/password authentication.
appender.elastic.batchDelivery.objectFactory.auth.credentials.username The username for connecting to Elasticsearch.
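
For example, to connect to a TLS-protected Elasticsearch server with basic authentication and a JKS truststore, the security settings might look like the following sketch (the host name, paths, and passwords are placeholders for your environment):

appender.elastic.batchDelivery.objectFactory.serverUris=https://es-node1.example.com:9200
appender.elastic.batchDelivery.objectFactory.auth.credentials.type=BasicCredentials
appender.elastic.batchDelivery.objectFactory.auth.credentials.username=akana-logger
appender.elastic.batchDelivery.objectFactory.auth.credentials.password=changeit
appender.elastic.batchDelivery.objectFactory.auth.certInfo.type=JKS
appender.elastic.batchDelivery.objectFactory.auth.certInfo.truststorePath=/opt/akana/certs/es-truststore.jks
appender.elastic.batchDelivery.objectFactory.auth.certInfo.truststorePassword=changeit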

If prompted, restart the container for the configuration to take effect.

Note: The keystore password and key password must be the same.