Requirements:
https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-kubernetes.html#_state_and_event
Link to manual:
https://confluence.simplprogramme.eu/display/SIMPL/8650+-+Monitoring+documentation
============ Export dashboards to git repository ============
Exporting dashboards
- Login to Kibana
- Go to Stack Management → Saved Objects
- Select Type: Dashboards
- Choose the dashboards to export.
- Press "Export" to download an ndjson file with the dashboards to your local PC.
- Rename the downloaded file to dashboards.ndjson and upload it to the Git repository at path: eck-monitoring/charts/kibana/dashboards/
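The same export can also be scripted against Kibana's saved objects API (a sketch; host and credentials are placeholders, not the project's actual values):
curl -u "$KIBANA_USER:$KIBANA_PASSWORD" \
  -X POST "https://<kibana-host>:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": "dashboard"}' > dashboards.ndjson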
============ Import dashboards manually ============
Loading dashboards
- Download the dashboards.ndjson file from the repo directory eck-monitoring/kibana/dashboards/
- Login to Kibana
- Go to Stack Management → Saved Objects
- Click "Import" and choose the downloaded file
- Press "Import"
============ FileBeat agent deployment and configuration ============
Link to manual:
https://confluence.simplprogramme.eu/display/SIMPL/8650+-+Monitoring+documentation
Description:
Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that the user specifies, collects log events, and forwards them either to Elasticsearch or Logstash for indexing. Filebeat consists of two main components: inputs and harvesters. These components work together to tail files and send event data to the output that you specify.
Harvester
A harvester is responsible for reading the content of a single file. The harvester reads each file, line by line, and sends the content to the output. One harvester is started for each file. The harvester is responsible for opening and closing the file, which means that the file descriptor remains open while the harvester is running. If a file is removed or renamed while it is being harvested, Filebeat continues to read the file. This has the side effect that the space on your disk is reserved until the harvester closes. By default, Filebeat keeps the file open until the close_inactive value is reached.
Closing a harvester has the following consequences:
- The file handler is closed, freeing up the underlying resources if the file was deleted while the harvester was still reading the file.
- The harvesting of the file will only be started again after scan_frequency has elapsed.
- If the file is moved or removed while the harvester is closed, harvesting of the file will not continue.
To control when a harvester is closed, use the close_* configuration options.
Input
An input is responsible for managing the harvesters and finding all sources to read from.
We use type log - the input finds all files on the drive that match the defined glob paths and starts a harvester for each file. Each input runs in its own Go routine.
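For illustration, a minimal log input with explicit harvester-closing options (the paths and timeouts here are hypothetical, not the project's actual values):
filebeat.inputs:
  - type: log
    paths:
      - /var/log/example/*.log   # glob; one harvester per matching file
    close_inactive: 5m           # close the file handle after 5 minutes without new lines
    scan_frequency: 10s          # how often the input looks for new/removed files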
Implementation
Overview
This configuration deploys Filebeat on a Kubernetes cluster as a DaemonSet to collect and ship logs to Elasticsearch and Logstash. The setup includes custom scripts for log generation, secure connections with certificates, and monitoring.
Components
Beat Configuration (filebeat.yaml):
- Kind: Beat
- Image: Uses Filebeat Docker image defined in values.yaml.
- Type: filebeat
- Elasticsearch Reference: Connects to the Elasticsearch instance named {{ .Release.Name }}-elasticsearch.
Pod Template:
Security Context: Runs as root user with file system group set to 1000.
Container Specs:
- Executes a custom script example.sh and starts Filebeat.
- Mounts various volumes for configuration, scripts, and certificates.
Environment Variables:
- Sets Elasticsearch and Logstash hosts, and monitoring credentials from secrets.
Volumes:
Config Volume: Loads filebeat.yml from a secret.
Example Script Volume: Loads example.sh from a ConfigMap.
Certificates Volumes: Loads necessary TLS certificates from secrets.
RBAC Configuration:
ServiceAccount: filebeat is created in the release namespace.
RoleBinding: Grants access to the Issuer for managing certificates.
Certificate Management:
Uses cert-manager to create TLS certificates for secure communication.
Supporting Resources
Secret (filebeat-config): Contains the Filebeat configuration file filebeat.yml encoded in Base64.
ConfigMap (filebeat-example-script): Provides example.sh for generating sample logs.
ServiceAccount: Ensures Filebeat has the necessary permissions to access Kubernetes resources.
RoleBinding: Links the service account with a role for accessing the Issuer.
values.yaml
Image Details: Defines the Filebeat image and tag.
Log Generation:
- totalMessages: Number of log messages to generate (can be infinite).
- messagesPerMinute: Rate of log generation.
TLS Configuration:
- Sets certificate duration and renewal parameters.
Filebeat Inputs and Outputs:
- Input: Configures Filebeat to read logs from example.log using multiline patterns.
- Output: Sends logs to Logstash with TLS enabled and configures Elasticsearch for monitoring.
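A minimal sketch of the ECK Beat resource described above (the version and release name shown are assumptions for illustration, not the chart's actual values):
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.14.0                     # hypothetical version
  elasticsearchRef:
    name: my-release-elasticsearch    # {{ .Release.Name }}-elasticsearch after templating
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        containers:
          - name: filebeat
            # the real chart additionally runs example.sh and mounts config/cert volumes here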
Security
Uses secrets for sensitive data like TLS certificates and monitoring credentials.
Configures secure communication with Elasticsearch and Logstash using SSL/TLS.
============ Infrastructure metrics ============
Use of resources: CPU, RAM, Disk
Performance parameters like CPU, RAM, and disk usage will be collected by metricbeat.
All Kubernetes components (nodes, pods) will be monitored.
The Elastic cluster will also be monitored, and its stats will be available in the Kibana "Stack Monitoring" section.
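As a rough illustration of where these metrics come from, a typical metricbeat kubernetes module block looks like this (the metricsets and period here are assumptions, not the project's exact configuration):
metricbeat.modules:
  - module: kubernetes
    metricsets: ["node", "pod", "container"]   # kubelet-level resource usage
    period: 10s
    hosts: ["https://${NODE_NAME}:10250"]      # kubelet endpoint on each node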
Throughput:
Throughput will be collected by metricbeat.
The following throughput types are available:
- network
- disk
Response time and availability:
These parameters will be provided by heartbeat.
Available monitoring types:
- tcp
- icmp
- http
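For illustration, minimal heartbeat monitor definitions (the IDs, hosts, and schedules are hypothetical):
heartbeat.monitors:
  - type: http
    id: example-service-http
    urls: ["https://example-service:443"]
    schedule: '@every 30s'        # check availability and response time every 30 seconds
  - type: icmp
    id: example-node-icmp
    hosts: ["example-node"]
    schedule: '@every 30s'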
============ Log4J wrapper ============
Repository: https://code.europa.eu/simpl/simpl-open/development/contract-billing/common_logging
1. Import into the project
To import it into a project, add the Maven dependency:
<properties>
  <simpl.common.logging.version>1.0.0-SNAPSHOT.39.1a139b97</simpl.common.logging.version>
</properties>

<dependency>
  <groupId>eu.simpl</groupId>
  <artifactId>SIMPL_COMMON_LOGGING</artifactId>
  <version>${simpl.common.logging.version}</version>
</dependency>

and the repository:

<repositories>
  <repository>
    <id>gitlab-maven</id>
    <url>${CI_API_V4_URL}/projects/897/packages/maven</url>
  </repository>
</repositories>
2. Normal logging:
LOG.info("Application started");
Result:
{
"timestamp": "2024-08-20T06:20:12.201Z",
"level": "INFO",
"message": "Application started",
"thread": "main",
"logger": "eu.simpl.simpl_billing.SimplBillingApplication"
}
3. Log Http Message
import static eu.simpl.MessageBuilder.buildMessage;
LOG.info(buildMessage(HttpLogMessage.builder()
.msg("HTTP request")
.httpStatus("200")
.httpRequestSize("100")
.httpExecutionTime("100")
.user("user")
.build()));
Result:
{
"timestamp": "2024-08-19T10:32:54.801Z",
"level": "INFO",
"message":
{
"msg": "HTTP request",
"httpStatus": "200",
"httpRequestSize": "100",
"user": "user",
"httpExecutionTime": "100"
},
"thread": "main",
"logger": "eu.simpl.simpl_billing.SimplBillingApplication"
}
4. Custom log level BUSINESS
The buildMessage method with the custom BUSINESS level cannot be used through SLF4J; it requires the Log4j API directly. Normal logging (as in section 2) can still be done with SLF4J.
Example usage:
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import static eu.simpl.MessageBuilder.buildMessage;
private static final Logger LOG = LogManager.getLogger(ClassToBeChanged.class);
LOG.log(Level.getLevel("BUSINESS"), buildMessage(LogMessage.builder()
.origin("origin_name")
.destination("destination_name")
.businessOperations(List.of("operation1", "operation2"))
.messageType(MessageType.RESPONSE)
.correlationId("correlation_id")
.httpStatus("200")
.user("user_name")
.msg("Example log message")
.build()));
Result:
{
"timestamp": "2024-08-12T12:43:18.437+0200",
"level": "BUSINESS",
"message":
{
"msg": "Network",
"messageType": "RESPONSE",
"businessOperations": "[operation1, operation2]",
"origin": "origin_name",
"httpStatus": "200",
"destination": "destination_name",
"correlationId": "correlation_id",
"user": "user_name"
},
"thread": "main",
"logger": "eu.simple.simpl_billing.SimplBillingApplication",
"httpRequestSize": "null",
"httpExecutionTime": "null"
}
============ Modify ILM policy ============
On each monitoring ELK stack there are 5 ILM policies:
- business-ilm - responsible for index rotation for business logs
- technical-ilm - responsible for index rotation for technical logs
- metricbeat-ilm - responsible for index rotation for metrics collected from agents
- filebeat - responsible for index rotation for ELK stack logs
- heartbeat-ilm - responsible for index rotation for services heartbeats
Apply changes on heartbeat-ilm:
1) Modify values/dev/observability/values.yml file and set new values:
....
heartbeat:
ilm:
....
services:
heartbeat.monitors:
....
2) Restart heartbeat by command:
kubectl rollout restart deployment heartbeat-beat-heartbeat
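Optionally, the rollout can be verified with standard kubectl (the same applies to the metricbeat and logstash restarts below):
kubectl rollout status deployment heartbeat-beat-heartbeat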
Apply changes on metricbeat-ilm:
1) Modify values/dev/observability/values.yml file and set new values:
....
metricbeat:
ilm:
....
memory: 500Mi
limits:
....
2) Restart metricbeat by command:
kubectl rollout restart daemonset metricbeat-beat-metricbeat
Apply changes on filebeat:
1) Login to Kibana and go to: Stack Management -> Index Lifecycle Policies.
2) Click on the filebeat policy.
3) Modify the "Hot phase" advanced settings by disabling "Use recommended defaults", and/or modify the Delete phase if needed.
4) Press "Save policy".
Apply changes on business-ilm and technical-ilm:
1) Modify values/dev/observability/values.yml file and set new values:
....
logstash:
ilm:
....
count_beats: 1
count_syslog: 0
....
2) Restart logstash statefulsets by command:
kubectl rollout restart sts logstash-beats-ls
============ Performance parameters ============
Logstash performance parameters
The file values/dev/observability/values.yml contains the following logstash performance parameters:
- logstash.env.ls_java_opts: "-Xms3g -Xmx3g"
Set the heap memory for the logstash process inside the container.
A restart of the logstash statefulset is required.
- logstash.resources.requests.memory: 4Gi
logstash.resources.limits.memory: 4Gi
Set memory allocation (request/limit) for logstash pod.
- logstash.resources.requests.cpu: 300m
logstash.resources.limits.cpu: 300m
Set CPU allocation (request/limit) for logstash pod.
- pipelines_yml_config.pipeline.workers: 1
Set the number of workers for the logstash pipeline.
A restart of the logstash statefulset is required.
- pipelines_yml_config.pipeline.pipeline.batch.size: 125
Set the batch size for the logstash pipeline.
A restart of the logstash statefulset is required.
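A sketch of how these dotted keys plausibly nest in the values file (the nesting is inferred from the parameter paths above, not copied from the chart; the elasticsearch, kibana, and filebeat4agents keys below follow the same resources pattern):
logstash:
  env:
    ls_java_opts: "-Xms3g -Xmx3g"   # JVM heap for the logstash process
  resources:
    requests:
      memory: 4Gi
      cpu: 300m
    limits:
      memory: 4Gi
      cpu: 300m
pipelines_yml_config:
  pipeline:
    workers: 1                      # pipeline worker threads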
Elasticsearch performance parameters
- elasticsearch.diskSpace: 60Gi
Set the disk size for storing indices in the elasticsearch pods.
- elasticsearch.count: 3
Number of elasticsearch pods in the stack.
- elasticsearch.resources.requests.memory: 4Gi
elasticsearch.resources.limits.memory: 4Gi
Set memory allocation (request/limit) for elasticsearch pod.
- elasticsearch.resources.requests.cpu: 300m
elasticsearch.resources.limits.cpu: 300m
Set CPU allocation (request/limit) for elasticsearch pod.
Kibana performance parameters
- kibana.resources.requests.memory: 1Gi
kibana.resources.limits.memory: 1Gi
Set memory allocation (request/limit) for kibana pod.
- kibana.resources.requests.cpu: 300m
kibana.resources.limits.cpu: 300m
Set CPU allocation (request/limit) for kibana pod.
- kibana.count: 1
Number of kibana pods in the stack.
Filebeat performance parameters
- filebeat4agents.resources.requests.memory: 1Gi
filebeat4agents.resources.limits.memory: 1Gi
Set memory allocation (request/limit) for filebeat pod.
- filebeat4agents.resources.requests.cpu: 100m
filebeat4agents.resources.limits.cpu: 100m
Set CPU allocation (request/limit) for filebeat pod.
============ Monitoring API ============
Kibana
....