Commit 1df882e4 authored by Natalia Szakiel

Merge branch 'develop' into 'main'

Develop

See merge request !93
parents d00a8faf 7bd80635
@@ -2,11 +2,24 @@
All notable changes to this project will be documented in this file.
-## [0.1.11] - 2025-02-12
+## [0.1.12] - 2025-03-07
+### Fix
+- Moved changelog file to correct directory
+- Upgrade ELK to 8.16.0
+### Changed
+- Changed default values for kibana resources
+### Added
+- Added logs parsing for new containers
+- Added end user manual for dashboards
+## [0.1.11] - 2025-02-05
### Fixed
- Fixed bug with empty values in logstash and filebeat configmaps
- Fixed empty string in Helm env variables. Note: Helm variables need to be defined with export in the pipeline.variables.sh file; otherwise they will be replaced with an empty string by the GitLab pipeline.
## [0.1.10] - 2025-01-20
......
@@ -5,7 +5,7 @@ image:
# version of all elastic applications
-elasticVersion: 8.15.1
+elasticVersion: 8.16.0
namespaceTag: "test-namespace"
mainNamespace: observability
@@ -60,8 +60,8 @@ kibana:
memory: "0"
cpu: "0"
limits:
-memory: 1Gi
-cpu: 300m
+memory: 4Gi
+cpu: 2
#Environment variables to set in kibana pod
#Usage from cli:
# --set "kibana.env[0].name=VARIABLE_NAME" --set "kibana.env[0].value=VARIABLE_VALUE"
@@ -123,12 +123,12 @@ logstash:
filter: |-
filter {
## removing ELK logs
-if [kubernetes][container][name] == "filebeat" or [kubernetes][container][name] == "metricbeat" or [kubernetes][container][name] == "logstash" or [kubernetes][container][name] == "heartbeat" or [kubernetes][container][name] == "kibana" or [kubernetes][container][name] == "elasticsearch" {
+if [kubernetes][container][name] in ["filebeat", "metricbeat", "logstash", "heartbeat", "kibana", "elasticsearch"] {
drop { }
}
# Technical logs
-if [kubernetes][container][name] == "sd-creation-wizard-api" or [kubernetes][container][name] == "signer" or [kubernetes][container][name] == "sd-creation-wizard-api-validation" {
+if [kubernetes][container][name] in ["sd-creation-wizard-api", "signer", "sd-creation-wizard-api-validation"] {
json {
source => "message"
skip_on_invalid_json => true
@@ -137,7 +137,7 @@ logstash:
}
}
# Business logs
-if [kubernetes][container][name] == "simpl-cloud-gateway" or [kubernetes][container][name] == "tls-gateway" {
+if [kubernetes][container][name] in ["simpl-cloud-gateway", "tls-gateway"] {
json {
source => "message"
skip_on_invalid_json => true
@@ -179,7 +179,10 @@ logstash:
}
# Onboarding technical logs
-if [kubernetes][container][name] == "users-roles" or [kubernetes][container][name] == "identity-provider" or [kubernetes][container][name] == "onboarding" or [kubernetes][container][name] == "security-attributes-provider" or [kubernetes][container][name] == "xsfc-advsearch-be" or [kubernetes][container][name] == "contract-consumption-be-api" {
+if [kubernetes][container][name] in ["users-roles", "identity-provider", "onboarding",
+"security-attributes-provider", "xsfc-advsearch-be",
+"contract-consumption-be-api", "authentication-provider",
+"contract-consumer", "contract-dataprovider"] {
json {
source => "message"
skip_on_invalid_json => true
@@ -220,6 +223,10 @@ logstash:
"log_type" => "technical"
}
}
+date {
+match => [ "timestamp", "dd MMM yyyy HH:mm:ss.SSS"]
+target => "@timestamp"
+}
}
if [kubernetes][container][name] == "keycloak" {
grok {
@@ -407,7 +414,13 @@ filebeat:
- equals:
kubernetes.container.name: "xsfc-advsearch-be"
- equals:
kubernetes.container.name: "contract-consumption-be-api"
kubernetes.container.name: "contract-consumption-be-api"
+- equals:
+kubernetes.container.name: "authentication-provider"
+- equals:
+kubernetes.container.name: "contract-consumer"
+- equals:
+kubernetes.container.name: "contract-dataprovider"
config:
- type: container
paths:
......
### Log4J wrapper
Repository: https://code.europa.eu/simpl/simpl-open/development/contract-billing/common_logging
1. Import into the project
To import the library into a project, add the Maven dependency:
```
<properties>
  <simpl.common.logging.version>1.0.0-SNAPSHOT.39.1a139b97</simpl.common.logging.version>
</properties>
<dependency>
  <groupId>eu.simpl</groupId>
  <artifactId>SIMPL_COMMON_LOGGING</artifactId>
  <version>${simpl.common.logging.version}</version>
</dependency>
```
and the repository:
```
<repositories>
  <repository>
    <id>gitlab-maven</id>
    <url>${CI_API_V4_URL}/projects/897/packages/maven</url>
  </repository>
</repositories>
```
2. Normal logging:
`LOG.info("message");`
Example output:
```
{
"timestamp": "2024-08-20T06:20:12.201Z",
"level": "INFO",
"message": "Application started",
"thread": "main",
"logger": "eu.simpl.simpl_billing.SimplBillingApplication"
}
```
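For context, a minimal self-contained sketch of such a call, following the Log4j 2 API shown in point 4 below (the package and class names are illustrative, not part of the library):
```
// Illustrative only: package and class names are hypothetical.
package eu.simpl.example;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class ExampleApplication {

    private static final Logger LOG = LogManager.getLogger(ExampleApplication.class);

    public static void main(String[] args) {
        // With the common_logging wrapper on the classpath, this call is
        // expected to be emitted as a JSON document like the one shown above.
        LOG.info("Application started");
    }
}
```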
3. Log HTTP message
```
import static eu.simpl.MessageBuilder.buildMessage;

LOG.info(buildMessage(HttpLogMessage.builder()
.msg("HTTP request")
.httpStatus("200")
.httpRequestSize("100")
.httpExecutionTime("100")
.user("user")
.build()));
```
Result:
```
{
"timestamp": "2024-08-19T10:32:54.801Z",
"level": "INFO",
"message":
{
"msg": "HTTP request",
"httpStatus": "200",
"httpRequestSize": "100",
"user": "user",
"httpExecutionTime": "100"
},
"thread": "main",
"logger": "eu.simpl.simpl_billing.SimplBillingApplication"
}
```
4. Custom log level BUSINESS
The buildMessage method cannot be used with SLF4J; it requires the Log4j 2 API. Plain logging, on the other hand, can still be done through SLF4J.
Example usage:
```
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import static eu.simpl.MessageBuilder.buildMessage;
private static final Logger LOG = LogManager.getLogger(ClassToBeChanged.class);
LOG.log(Level.getLevel("BUSINESS"), buildMessage(LogMessage.builder()
.origin("origin_name")
.destination("destination_name")
.businessOperations(List.of("operation1", "operation2"))
.messageType(MessageType.RESPONSE)
.correlationId("correlation_id")
.httpStatus("200")
.user("user_name")
.msg("Example log message")
.build()));
```
Result:
```
{
"timestamp": "2024-08-12T12:43:18.437+0200",
"level": "BUSINESS",
"message":
{
"msg": "Network",
"messageType": "RESPONSE",
"businessOperations": "[operation1, operation2]",
"origin": "origin_name",
"httpStatus": "200",
"destination": "destination_name",
"correlationId": "correlation_id",
"user": "user_name"
},
"thread": "main",
"logger": "eu.simple.simpl_billing.SimplBillingApplication",
"httpRequestSize": "null",
"httpExecutionTime": "null"
}
```
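As noted in point 4, plain (non-buildMessage) logging can also go through SLF4J. A minimal sketch of that variant; the class name is hypothetical, and it assumes the common_logging wrapper's Log4j 2 backend is on the classpath so SLF4J output is still rendered as JSON:
```
// Illustrative only: plain logging via SLF4J.
// buildMessage and the BUSINESS level require the Log4j 2 API (see point 4).
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlainSlf4jExample {

    private static final Logger LOG = LoggerFactory.getLogger(PlainSlf4jExample.class);

    public void run() {
        // Assumed to be routed through the wrapper's Log4j 2 backend and
        // emitted as a JSON log entry, like the examples above.
        LOG.info("Plain message logged via SLF4J");
    }
}
```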
# README: Kibana Dashboards User Manual
## Table of Contents
1. [Overview](#overview)
2. [Accessing the Dashboards](#accessing-the-dashboards)
3. [Using the Dashboards](#using-the-dashboards)
4. [Dashboard - ELK infrastructure syslog](#dashboard---elk-infrastructure-syslog)
5. [Dashboard - Kubernetes Cluster Nodes Overview](#dashboard---kubernetes-cluster-nodes-overview)
6. [Dashboard - Agents Usage Monitoring](#dashboard---agents-usage-monitoring)
7. [Dashboard - Heartbeats Monitoring](#dashboard---heartbeats-monitoring)
8. [Dashboard - Technical Monitoring](#dashboard---technical-monitoring)
9. [Dashboard - Business Monitoring](#dashboard---business-monitoring)
10. [Additional Resources](#additional-resources)
## Overview
This document provides an end-user guide for accessing and utilizing Kibana dashboards.
There are 6 dashboards configured:
- Dashboard - ELK infrastructure syslog
- Kubernetes Cluster Nodes Overview
- Agents Usage Monitoring
- Dashboard For Heartbeats
- Business Monitoring Dashboard
- Technical Monitoring Dashboard
![EndUserDashboards](images/End_user_dashboards_001.png)
## Accessing the Dashboards
Once imported, you can access the dashboards as follows:
1. Open Kibana.
2. Navigate to **Dashboard** from the left menu.
3. Locate the desired dashboard from the list.
4. Click on the dashboard name to open it.
## Using the Dashboards
- **Filtering Data:** Use the filter bar at the top to refine results based on time ranges, namespaces, or specific labels.
- **Drill-down Analysis:** Click on visualizations to explore detailed data.
## Dashboard - ELK infrastructure syslog
This dashboard provides an overview of log levels across ELK components (Elasticsearch, Kibana, and Logstash). It visualizes system logs to help monitor errors, warnings, and informational messages, aiding in troubleshooting and performance analysis.
Users can filter by event.dataset (e.g. elasticsearch.gc, deprecation, etc.) and event.module (e.g. kibana, elasticsearch, logstash).
![EndUserDashboards](images/End_user_dashboards_002.png)
## Dashboard - Kubernetes Cluster Nodes Overview
This dashboard provides insights into the performance and resource usage of Kubernetes cluster nodes. It includes metrics on CPU and memory consumption, network traffic, and energy efficiency per node, helping to monitor system health, optimize resource allocation, and improve overall cluster efficiency.
Users can filter by particular nodes (the agent.hostname filter) and also by individual processes such as java, metricbeat, postgres, etc.
![EndUserDashboards](images/End_user_dashboards_003.png)
## Dashboard - Agents Usage Monitoring
This dashboard tracks resource consumption across deployed agents, providing visibility into CPU and memory usage per pod. It also monitors storage volumes utilized by each pod, helping to optimize resource allocation, detect anomalies, and ensure efficient operation within the environment.
Users can filter by pod.
![EndUserDashboards](images/End_user_dashboards_004.png)
## Dashboard - Heartbeats Monitoring
This dashboard provides real-time visibility into service availability and health by displaying heartbeat data and statuses. It includes detailed heartbeat logs, helping to track service uptime, detect failures, and troubleshoot issues efficiently.
Users can filter by monitor, status, and URL.
![EndUserDashboards](images/End_user_dashboards_005.png)
## Dashboard - Technical Monitoring
This dashboard aggregates technical logs from all pods and namespaces, providing a centralized view of system activity. Logs are visualized using histograms, categorized by log level (e.g., INFO, WARN, ERROR), enabling efficient troubleshooting, anomaly detection, and performance monitoring.
Users can filter by log level and container name.
![EndUserDashboards](images/End_user_dashboards_006.png)
## Dashboard - Business Monitoring
This dashboard provides insights into business processes by visualizing business logs. It tracks key metrics such as business destinations, operations, and message types, helping to analyze workflows, detect anomalies, and optimize operational efficiency.
Users can filter by operation, destination, origin, and message type.
## Additional Resources
- Kibana Documentation: [https://www.elastic.co/guide/en/kibana/current/index.html](https://www.elastic.co/guide/en/kibana/current/index.html)
- Elasticsearch Documentation: [https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)
documents/images/End_user_dashboards_001.png (92.2 KiB)
documents/images/End_user_dashboards_002.png (120 KiB)
documents/images/End_user_dashboards_003.png (136 KiB)
documents/images/End_user_dashboards_004.png (145 KiB)
documents/images/End_user_dashboards_005.png (141 KiB)
documents/images/End_user_dashboards_006.png (209 KiB)
PROJECT_VERSION_NUMBER="0.1.11"
PROJECT_VERSION_NUMBER="0.1.12"
export LOGSTASH_HOSTS="\${LOGSTASH_HOSTS}"
export ELASTIC_ELASTICSEARCH_ES_HOSTS="\${ELASTIC_ELASTICSEARCH_ES_HOSTS}"
export LOGSTASH_USER="\${LOGSTASH_USER}"
......