FusionReactor: Your trusted, best value, award-winning system-wide monitoring solution
https://fusion-reactor.com/blog/evangelism/fusionreactor-your-trusted-best-value-award-winning-system-wide-monitoring-solution/ (Fri, 19 Jan 2024)

Monitoring platform that exceeds our clients’ expectations

FusionReactor stands out for exceptional performance and unparalleled value in the dynamic digital technology landscape. We are excited to announce that, once again, in Winter 2024, FusionReactor has been honored with multiple prestigious accolades from G2.com. These awards include “High Performer,” “Best Support,” “Users Most Likely to Recommend,” and “Fastest Implementation.” This recognition is a testament to our dedication to providing a top-tier system-wide monitoring platform that exceeds our clients’ expectations. The awards once again demonstrate that FusionReactor continues to be your trusted, best-value, award-winning system-wide monitoring solution.

High Performer: Elevating standards

Achieving the “High Performer” accolade reaffirms our commitment at FusionReactor to elevate the standards of system-wide monitoring continually. Our integration with GenAI and OpenTelemetry is pivotal, offering comprehensive insights and analytics that empower businesses to fine-tune their systems for optimal efficiency and dependability.

Unparalleled Support: Your success is our priority

Our consistent recognition for “Best Support” reflects our deep commitment to customer service excellence. We recognize that every inquiry is crucial and represents a professional seeking solutions. Our support team is dedicated to providing responsive, insightful, and effective assistance, ensuring your success and supporting you at every step.

Endorsed by Users: The community’s preferred choice

The “Users Most Likely to Recommend” award is particularly significant as it indicates our user community’s trust and confidence in our platform. This endorsement motivates us to innovate and enhance FusionReactor continuously. Our platform is more than just a monitoring tool; it’s an integral partner in ensuring the growth and stability of your business.

Fastest Implementation: Quick setup, immediate impact

In today’s fast-moving business environment, efficiency is critical. FusionReactor’s recognition for “Fastest Implementation” highlights our commitment to providing a system-wide monitoring solution that is quick and easy to set up. Our streamlined implementation process ensures you can swiftly transition from installation to gaining actionable insights, keeping your business agile and proactive. With FusionReactor, experience minimal downtime and start focusing on what matters most – your business’s performance and growth.

Maximizing ROI: cost-effective monitoring excellence

FusionReactor delivers the best Return on Investment (ROI) in system-wide monitoring. Our platform is designed to maximize your resources efficiently, offering comprehensive monitoring capabilities at a competitive price. With FusionReactor, you’re not just investing in a monitoring tool; you’re investing in a solution that reduces downtime, optimizes system performance, and enhances productivity. Our clients see tangible benefits in reduced operational costs and improved system efficiency, translating into a higher ROI. FusionReactor ensures that every dollar spent goes further, making it an investment that pays for itself in the value it delivers.

Experience the FusionReactor advantage

Experience FusionReactor’s effectiveness firsthand. Begin your free trial today and discover why we are the preferred system-wide monitoring solution for businesses worldwide. Explore our platform’s unique features and understand why our users consistently recommend us.

Why FusionReactor stands out

FusionReactor is not just any monitoring platform; it’s an advanced system-wide monitoring solution uniquely integrated with GenAI and OpenTelemetry. This synergy offers an extensive suite of monitoring, tracing, and diagnostic tools, going beyond traditional monitoring to provide real-time, detailed insights into your system’s performance and health. Whether for troubleshooting or proactive optimization, FusionReactor offers the clarity and control necessary to maintain peak system performance.

Choosing FusionReactor: Elevate your monitoring strategy

FusionReactor is designed to be user-friendly, allowing quick and insightful access without a complex learning curve. It’s the ideal tool for developers, IT professionals, and business leaders aiming to enhance system reliability, reduce downtime, and make informed decisions based on robust data insights.

Begin your FusionReactor experience today

Don’t just read about it – experience the power of FusionReactor’s system-wide monitoring capabilities. Start your free trial now and see how our platform can transform your approach to monitoring and optimizing your systems. With our award-winning support and intuitive interface, you’ll quickly understand why FusionReactor is the go-to solution for businesses seeking a comprehensive and insightful system-wide monitoring platform.

Parse Variable Patterns using Regex
https://fusion-reactor.com/blog/evangelism/parse-variable-patterns-using-regex/ (Mon, 14 Mar 2022)

What is parsing?

Parsing is an important step when handling logs because it lets users filter logs in useful ways. Unstructured logs can be split into attribute (key/value) pairs, which helps to create better alerts and charts.

Complex log lines often contain variable patterns that simple keyword parsing cannot handle. The Parse Regex operator allows users familiar with regular expression syntax to filter and extract such data, including nested fields.

In this article, we are going to discuss the basics of regex, and how to parse variable patterns using Regex.

Basics of Regex

Regex, short for regular expression, is a pattern-matching notation used widely in programming and log management. Most importantly here, it is central to parsing variable patterns within log lines.

Regular expressions are patterns that describe combinations of characters in strings. A pattern of simple characters looks like /abc/, while patterns combining simple and special characters look like /ab*c/ or /Chapter (\d+)\.\d*/. The last example includes parentheses, which act as a memory device: the text matched by that group is remembered for later use.

The Parse Regex operator, also called the extract operator, allows users who are familiar with regex syntax to filter and extract complex data from logs, such as values held in nested fields. Extracted field names must start and end with an alphanumeric character or an underscore ("_").

| parse regex "<start_expression>(?<field_name><field_expression>)<stop_expression>"
| parse regex "<start_expression>(?<field_name><field_expression>)<stop_expression>" [nodrop]
| parse regex [field=<field_name>] "<start_expression>(?<field_name><field_expression>)<stop_expression>"

Another term to use is “extract”

| extract "<start_expression>(?<field_name><field_expression>)<stop_expression>"

Options

field=<field_name>

The field=<field_name> option lets users specify a field to parse other than the default message.

nodrop

The nodrop option instructs extracted log results to also include messages that don’t match any segment of the parsed term.

multi

Multiple values carried within a single log message can be parsed with the multi option.

Rules of Regex

To properly parse variable patterns using Regex, certain rules must be followed to ensure the process is correct.

  • All regular expressions must be enclosed in quotes and must be valid Java or RE2 expressions.
  • Matching is case-sensitive. If a text segment cannot be matched, no variable is assigned.
  • Specify a field to parse; otherwise the entire incoming message/log is used.
  • Multiple parse expressions are allowed and are processed in the order they are specified; matching begins with the first expression.
  • Multiple parse expressions can also be written in shorthand as comma-separated terms.
  • Nested named capture groups are not supported.
  • The parse regex operator only supports regular expressions that contain at least one named capturing group. Expressions with no capturing group, or with only unnamed capturing groups, cannot be parsed.

If your regular expression has no named capturing group, you can convert it into one as follows: enclose the expression in parentheses, add a "?" immediately after the opening parenthesis, and follow it with the group name enclosed in "<>".

Let’s look at this example below:

Normal regex:                      \d{3}-[\w]*
Regex with named capturing group:  (?<regex>\d{3}-[\w]*)

Remember the rules above: if your regex already contains at least one unnamed capturing group (a part enclosed in parentheses), you have two options.

You can convert it into a non-capturing group. In this case, that part of the regex will not be extracted into a FusionReactor field. The conversion is done by adding "?:" immediately after the opening parenthesis of the group.

Normal regex:                    (abc|\d{3})
Regex with non-capturing group:  (?:abc|\d{3})

Alternatively, a numbered capturing group can be converted to a named capturing group. Add a "?" immediately after the opening parenthesis, followed by the group name enclosed in "<>". FusionReactor will then generate a field with the same name as the named capturing group.

Normal regex:                      (abc|\d{3})
Regex with named capturing group:  (?<test_group>abc|\d{3})
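
As a quick illustration outside the log query language, here is how named and non-capturing groups behave in plain JavaScript (a minimal sketch; the sample string and group name are only examples):

// A named capturing group makes the matched value addressable by name
const named = /(?<test_group>abc|\d{3})/;
const match = "order 123 confirmed".match(named);
console.log(match.groups.test_group); // "123"

// A non-capturing group matches the same text but exposes no named value
const nonCapturing = /(?:abc|\d{3})/;
console.log("order 123 confirmed".match(nonCapturing)[0]); // "123"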

Parsing Examples

Parsing an IP address

Extracting an IP address from complex log lines is straightforward with a parse regex similar to:

... | parse regex "(?<ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" | ...

Parsing multiple fields in a single query

As noted in the rules, multiple fields can be parsed in a single query. For instance, to parse the username and host information from logs, the query looks like this:

... | parse regex "user=(?<user>.*?):" 
| parse regex "host=(?<msg_host>.*?):" 
| ...

When to use non-capturing groups

Sometimes you need a non-capturing group (?:regex), for example when the regular expression must match one of several alternatives and you do not want to extract a field from that part. Group syntax lets you specify alternative strings in a regular expression. Consider the log lines below:

Oct 11 18:20:49 host123.example.com 16234563: Oct 11 18:20:49: %SEC-6-IPACCESSLOGP: list 101 denied tcp 10.1.2.3(1234) -> 10.1.2.4(5678), 1 packet
Oct 11 18:20:49 host123.example.com 16234564: Oct 11 18:20:49: %SEC-6-IPACCESSLOGP: list 101 accepted tcp 10.1.2.5(4321) -> 10.1.2.6(8765), 1 packet

You might first try to extract the protocol with a query like this:

| parse regex "list 101 (accepted|denied) (?<protocol>.*?) "

Because unnamed capturing groups are not supported, this is what you would actually write:

| parse regex "list 101 (?:accepted|denied) (?<protocol>.*?) "

If  you need to capture whether it is “denied” or “accepted” into a field, then you can include this in the query:

| parse regex "list 101 (?<status>accepted|denied) (?<protocol>.*?) "

Parse Multi
Aside from parsing a single field value, there is an option to parse multiple values within a single log message. The multi keyword directs the parse regex operator to look for every match in the message, which is useful when messages carry a varying number of values. The multi keyword creates a copy of the message for each value so that each value can be counted in a field.

Let's look at an example of parse multi with Amazon VPC flow logs, finding messages where the same IP address appears more than once (for example as both source and destination):

_sourceCategory=aws/vpc 
| parse regex "(?<ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" multi
| count by ip_address, _raw
| where _count >1

The output lists each IP address that appears in more than one message, together with its count.

Parsing Case insensitive Regex syntax

The Parse Regex operator can match case-insensitively by including the regex flag (?i). For instance, consider the following log lines:

Line1: The following exception was reported: error in log
Line2: The following exception was reported: Error in log
Line3: The following exception was reported: ERROR in log

The (?i) tells the parser to ignore case for the expression that follows it. So every variant of "error" in the log can be matched with the following parse regex expression:

| parse regex "reported:\s(?<exception>(?i)error)\s"

The outcome should look like this in the following parsed fields:

Exception | Message
ERROR | Line3: The following exception was reported: ERROR in log
Error | Line2: The following exception was reported: Error in log
error | Line1: The following exception was reported: error in log

Summary: How to Parse Variable Patterns using Regex

Whether you are parsing case-insensitive variable patterns, multiple values, or alternatives with non-capturing groups, the Parse Regex operator gives you precise, repeatable control over how data is extracted from your logs, and it can simplify the entire process.

How to Parse JSON data in JavaScript
https://fusion-reactor.com/blog/evangelism/how-to-parse-json-data-in-javascript/ (Thu, 10 Mar 2022)

If you have used a web app, there is a strong possibility that it uses the JSON format to structure, store, and transmit data between its servers and connected devices. JavaScript Object Notation, popularly known as JSON, is a lightweight, text-based data format.

In this article, we'll briefly go over how to encode and decode JSON data in JavaScript. But first, let's take a look at a few differences between JSON and JavaScript.

Introducing JSON

JSON is a lightweight format for data exchange between devices and servers that is easy to parse and generate. Although JSON shares similarities with JavaScript, from whose syntax it is derived, it is a text-based format built on two main structures:

  • Object: an unordered collection of key/value pairs (i.e. key: value). An object starts with a left curly bracket { and ends with a right curly bracket }, and the pairs are separated by commas.
  • Array: an ordered list of values, which begins with a left square bracket [ and ends with a right square bracket ]. The values are separated by commas.

Looking more closely at JSON objects, the keys are always strings, while the values can be a string, number, boolean, null, or even another object or array. Strings must be enclosed in double quotes and can contain escape characters such as \n, \t, and \\. A JSON object may look like this:

{
  "book": {
    "name": "Harry Potter and the Goblet of Fire",
    "author": "J. K. Rowling",
    "year": 2000,
    "genre": "Fantasy Fiction",
    "bestseller": true
  }
}

A JSON array would look like this:

{
  "fruits": [
    "Apple",
    "Banana",
    "Strawberry",
    "Mango"
  ]
}

Parsing JSON Data in JavaScript

There are a few ways to work with JSON data in JavaScript. With the JSON.parse() method, you can quickly parse JSON data transmitted from the web server: it parses a JSON string and constructs the JavaScript value described by that string. If the string is not valid JSON, a SyntaxError is thrown.

For example, let’s assume the following JSON-encoded string was transmitted from our web server:

{"name": "Peter", "age": 22, "country": "United States"}

We can apply the JavaScript JSON.parse() method to convert this JSON string into a JavaScript object and use the dot notation (.) to access individual values. It should look like this:

// Store JSON data in a JS variable
var json = '{"name": "Peter", "age": 22, "country": "United States"}';

// Converting the JSON-encoded string to a JS object
var obj = JSON.parse(json);

// Accessing individual values from the JS object
alert(obj.name);    // Outputs: Peter
alert(obj.age);     // Outputs: 22
alert(obj.country); // Outputs: United States
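
Because JSON.parse() throws a SyntaxError on invalid input, it is a good habit to wrap it in try...catch when the data comes from an external source. A minimal sketch (the malformed string is deliberately broken for the example):

// Handling invalid JSON safely
var badJson = '{"name": "Peter", "age": }'; // missing value makes this invalid
try {
    var parsed = JSON.parse(badJson);
    alert(parsed.name);
} catch (e) {
    // e is a SyntaxError describing where parsing failed
    alert("Could not parse JSON: " + e.message);
}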

Parsing Nested JSON Data in JavaScript

JSON objects and arrays can also be nested. Since a JSON object can contain other JSON objects, arrays, nested arrays, and so on, let's use the following scenario to parse a nested JSON object and extract all of its values in JavaScript.

/* Storing a multi-line JSON string in a JS variable
   using the new ES6 template literals */
var json = `{
  "book": {
    "name": "Harry Potter and the Goblet of Fire",
    "author": "J. K. Rowling",
    "year": 2000,
    "characters": ["Harry Potter", "Hermione Granger", "Ron Weasley"],
    "genre": "Fantasy Fiction",
    "price": {
      "paperback": "$10.40", "hardcover": "$20.32", "kindle": "$4.11"
    }
  }
}`;

// Converting the JSON string to a JS object
var obj = JSON.parse(json);

// Define a recursive function to print all nested values
function printValues(obj) {
  for (var k in obj) {
    if (obj[k] instanceof Object) {
      printValues(obj[k]);
    } else {
      document.write(obj[k] + "<br>");
    }
  }
}

// Printing all the values from the resulting object
printValues(obj);
document.write("<hr>");

// Printing single values
document.write(obj["book"]["author"] + "<br>");        // Prints: J. K. Rowling
document.write(obj["book"]["characters"][0] + "<br>"); // Prints: Harry Potter
document.write(obj["book"]["price"]["hardcover"]);     // Prints: $20.32

Encoding Data as JSON in JavaScript

If you are familiar with Ajax-style communication, you will know that you often need to send JavaScript objects from your code to your server. JavaScript provides the JSON.stringify() method for this: it converts a JavaScript value into a JSON string, as shown below.

Stringify a JavaScript Object

We'll use the following scenario to convert a JavaScript object to a JSON string:

// Sample JS object
var obj = {"name": "Peter", "age": 22, "country": "United States"};

// Converting the JS object to a JSON string
var json = JSON.stringify(obj);
alert(json);

The result should look like this:

{"name":"Peter","age":22,"country":"United States"}

Stringify a JavaScript Array

This is just as easy. The following example converts a JavaScript array to a JSON string:

// Sample JS array
var arr = ["Apple", "Banana", "Mango", "Orange", "Papaya"];

// Converting the JS array to a JSON string
var json = JSON.stringify(arr);
alert(json);

The result should look like this:

["Apple","Banana","Mango","Orange","Papaya"]
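
To actually send stringified data to a server, JSON.stringify() is typically combined with an HTTP request, for example using fetch(). This is a minimal sketch; the /api/books endpoint is a hypothetical URL used only for illustration:

// Posting JSON data to a (hypothetical) endpoint
var book = { name: "Harry Potter and the Goblet of Fire", year: 2000 };

fetch("/api/books", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(book)
})
    .then(function (response) { return response.json(); })
    .then(function (data) { console.log("Server replied:", data); })
    .catch(function (error) { console.error("Request failed:", error); });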

How to Parse JSON data in JavaScript

We’ve successfully covered everything you need to know about how to encode and decode JSON data in JavaScript. Now get out there and parse or stringify JSON as you like.

New beta release – log alerting
https://fusion-reactor.com/blog/evangelism/new-beta-release-log-alerting/ (Thu, 24 Feb 2022)

Our most powerful alerting engine

Log Alerting is now in beta and all beta users will immediately have access to this powerful query-based alerting engine. Set alerts on FusionReactor logs and ingested third-party logs.

Your log data can be used to set powerful alerts, including:

  • Alerts if a particular log event happens
  • Alerts based on the rate of change within a value
  • Alerts on the count of logs of a particular type
  • Alerts based on metrics contained in your logs

Alerting when you need it and how you need it

Send alerts to a number of subscriptions including:

  • Email
  • Slack
  • PagerDuty
  • Opsgenie
  • Webhooks

For more information visit the documentation.


Join our beta program

Log alerting is now part of our beta program but will be in production shortly. To find out what's next in our pipeline, register for our next webinar.

What are logs and why do I need to monitor them?
https://fusion-reactor.com/blog/evangelism/what-are-logs-and-why-do-i-need-to-monitor-them/ (Wed, 02 Feb 2022)

What are logs, and why do I need to monitor them?

Let's start by clarifying the basics. Logs are recordings of events that have taken place while an application runs. Some are created automatically, and some are written explicitly, for example when we debug. The application, process, server, and container all create many logs.

What is logging used for?

The point of a log file is to keep track of what's happening behind the scenes. If something goes wrong within a complex system, you have access to a detailed list of events from before and during the malfunction.

Why use a log monitor?

Systems within an organization create huge volumes of log data. The purpose of a log monitor is to provide insight by ingesting that data and then giving you the tools to:

  • Store
  • Compare
  • Visualize
  • Graph
  • Analyze
  • And alert you when things go wrong.

Deep insight into what went wrong so you can fix it

  • A log monitoring tool enables you to interrogate your logs using a query language such as LogQL. The resulting query helps you establish why an event such as an error or outage occurred, and comparing logs from different sources lets you build a complete picture.
  • Visualization tools will help you make better sense of the raw data; graphs are often easier to interpret and spot trends.
  • Log monitoring can help protect you, allowing you to see things like substantial request rates, which could mean a server attack.
  • Traditional debugging uses logs to give you extra context. Tools like FusionReactor's patented production debugger create log files that allow you to find exceptions in production, and automated root cause analysis tools such as FusionReactor's Event Snapshot create logs that take the user to the precise thread an error fired on. Logs can also help you find errors you didn't know you had, errors that aren't usually picked up by your monitor, such as handled errors inside a try-catch block that hide the actual problem from the end user.

Powerful dashboards and alerting

You can explore logs manually, but dashboards allow you to create dynamic views that give you a summary of application health based on log data. You can then drill down into log content from the dashboard to find the root cause of an issue.

You can also use your logs as an alert engine, creating query-based alerts for different scenarios such as:

  • Event-based alerts that inform you when a specific event occurs
  • Rate-based alerts that look for rises or falls in a value or in the rate of events over time
  • Value-based alerts
  • Comparative alerts from log content

Your log alert engine will continually monitor your log files until a condition is met and then alert you to your preferred channel.
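
To make these scenarios concrete, here is roughly what such alert conditions can look like in a LogQL-style query language (a sketch only: FusionReactor's own query syntax may differ, and the label names and thresholds are purely illustrative):

# Event-based: fire when any log line from the service contains "OutOfMemoryError"
{app="my-service"} |= "OutOfMemoryError"

# Count-based: fire when more than 100 error lines arrive within 5 minutes
count_over_time({app="my-service"} |= "error" [5m]) > 100

# Rate-based: fire when the per-second error rate rises above 2
rate({app="my-service"} |= "error" [5m]) > 2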

With all that power comes responsibility

Once installed, FusionReactor gives you instant access to deep insight. Its out-of-the-box configurations make using it simple.

Things change slightly with log monitoring because FusionReactor will ingest any log you throw at it. Consequently, you can add logs from any system into FR: developer, SRE, DevOps, Sales, Finance, literally anything. So you get to choose which logs you send to us to monitor.

No standards

Logging is a little more tricky: there is no standard for logs, and different organizations use different formats, so making sense of them can be challenging. A pattern parser helps here, allowing you to convert log lines into more contextual data.

Distributed systems

Trying to tie logs from distributed systems together is a nightmare and could make triage times longer, the opposite of what you need. Using logical and well-structured labels allows you to create well-linked logs.

Security and privacy issues

Logs can contain confidential or financial data such as payment information, card numbers, or passwords and cause you privacy or security issues. Obfuscating logs allows you to keep everyone safe and compliant.

Using OpenTelemetry in Kubernetes
https://fusion-reactor.com/blog-post/using-opentelemetry-in-kubernetes/ (Mon, 24 Jan 2022)

Using OTel auto-instrumentation/agents – OpenTelemetry in Kubernetes

Maybe you have heard about OpenTelemetry, Kubernetes, or OpenTelemetry in Kubernetes, and you don't know what they are or want to learn more. This article discusses how a new OpenTelemetry (OTel) Operator feature simplifies workload deployment on Kubernetes.

OpenTelemetry is an open-source project hosted by the Cloud Native Computing Foundation (CNCF) that provides a standard way to generate telemetry data. OpenTelemetry bridges the gap between service providers and users when collecting data from a diverse set of systems, helping both gain a broad picture of an application's performance and the underlying reasons for its behavior.

As the adoption of Cloud-native ecosystems continues to grow, the complexities of distributed systems and microservices will also expand, resulting in the need for a simple and scalable way to deploy workloads on Kubernetes.

One of the most tedious processes when deploying an observability solution is instrumentation. Presently, there are two approaches to instrument an application: manual/explicit and automatic instrumentation.

Manual/explicit: Developers use pre-built instrumentation libraries or OpenTelemetry APIs to instrument the application's source code.

Automatic: the entire instrumentation process is automated, with no code modification or recompilation required.

OpenTelemetry changes a lot here. What used to be proprietary technology delivered by individual Application Performance Monitoring (APM) or observability vendors is now available to everyone as vendor-neutral, open-source auto-instrumentation.

Even so, deploying OpenTelemetry auto-instrumentation at scale, or validating proof of value on Kubernetes, can still be a major problem. Modern applications increasingly run in immutable containers, which means that adding instrumentation to existing images requires rebuilding them: a tedious and costly exercise. The OpenTelemetry Operator's new features on Kubernetes solve this problem.

The Instrumentation CR in the OTel Operator

The OpenTelemetry Operator 0.38.0 release introduced a powerful feature known as the Instrumentation custom resource (CR), which defines the configuration for the OpenTelemetry SDK. With an Instrumentation CR present in the cluster and an annotation on the workload or its namespace, SDK auto-instrumentation can be enabled in a very simple way.

kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
  nodejs:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:latest
  python:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:latest
EOF

Presently, the instrumentation feature is supported for Java, NodeJS, and Python. The workload or its namespace must carry the annotation for the relevant language:

  • instrumentation.opentelemetry.io/inject-java: “true” — for Java
  • instrumentation.opentelemetry.io/inject-nodejs: “true” — for NodeJS
  • instrumentation.opentelemetry.io/inject-python: “true” — for Python

From here, the operator can introduce the OpenTelemetry auto-instrumentation libraries into the application container and configure the instrumentation so that the auto-instrumentation agent exports data to the endpoint defined in the Instrumentation CR.

Java example with Spring PetClinic

Let's put this into practice by deploying the Spring PetClinic Java application. The application will be auto-instrumented, and its data sent to an OpenTelemetry Collector.

First, let’s create the following deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-petclinic
spec:
  selector:
    matchLabels:
      app: spring-petclinic
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-petclinic
      annotations:
        sidecar.opentelemetry.io/inject: "true"
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
        - name: app
          image: ghcr.io/pavolloffay/spring-petclinic:latest

and apply the instrumentation annotation:

kubectl patch deployment.apps/spring-petclinic -p '{"spec": {"template": {"metadata": {"annotations": {"instrumentation.opentelemetry.io/inject-java": "true"}}}}}'

When the instrumentation annotation is applied, the spring-petclinic pod restarts, and the newly started pod is instrumented with the OpenTelemetry Java auto-instrumentation agent. The telemetry data is reported via OTLP to the collector at http://otel-collector:4317.
The following snippet deploys an OpenTelemetry Collector instance to receive that data:

kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:

    exporters:
      logging:

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
EOF

We can now port-forward the application's HTTP port via kubectl port-forward deployment.apps/spring-petclinic 8080:8080 and use the application in the browser. The spans received by the OpenTelemetry Collector can then be inspected via kubectl logs deployment.apps/otel-collector.
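
For reference, the two commands mentioned above, written out as they would be run (assuming the resource names used in this example):

# expose the application locally and open http://localhost:8080 in a browser
kubectl port-forward deployment.apps/spring-petclinic 8080:8080

# inspect the spans printed by the collector's logging exporter
kubectl logs deployment.apps/otel-collector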

Application instrumentation can be a complicated process depending on the language. In Java, for instance, the auto-instrumentation is delivered as a Java agent that performs bytecode manipulation: it injects instrumentation points into specific code paths, and those points emit telemetry data when the code is executed.

How is the injection logic implemented?

The process might sound tricky, so let me explain it using the Java agent; a similar approach applies to other runtimes. The OpenTelemetry Operator registers a mutating admission webhook, which is invoked when the Pod object is created. The webhook modifies the Pod object to introduce the auto-instrumentation libraries into the application container.

The Java agent is introduced into the application container through an init container that copies the agent into a volume mounted by the application container. The SDK is configured by injecting environment variables into the application container.

The final step is to configure the JVM to load the auto-instrumentation agent, which is done by setting an environment variable (such as JAVA_TOOL_OPTIONS) so that the JVM picks up the -javaagent option.

Conclusion – OpenTelemetry in Kubernetes

In conclusion, we expect to see continued investment in Kubernetes workload deployment. OpenTelemetry auto-instrumentation on Kubernetes simplifies deployment through the operator pattern: there is no need to change the application container image, which makes it a scalable way to roll out telemetry. Presently, only the Java, Python, and NodeJS runtimes are supported; support for other languages will hopefully continue to gain momentum.

Understanding Metrics in OpenTelemetry (OTel) API
https://fusion-reactor.com/blog-post/understanding-metrics-in-opentelemetry-otel-api/ (Fri, 21 Jan 2022)

So you may have heard about the OpenTelemetry (OTel) Metrics API. This article explains the concept of a metric, the metric instruments and their functions, the meter provider, and gives a practical example of implementing metric instruments.

When we talk about OTel or OpenTelemetry, we are simply referring to a group of tools, SDKs, and APIs that are used to generate, instrument, collect, and export telemetry data (such as metrics, traces, and logs) for better analysis and understanding of the performance of a given software at runtime.

Given all this information, one may wonder what an OpenTelemetry Metric API is and why you should care. The Metric API has a design that supports the explicit processing of raw measurements to reveal continuous summaries that give developers the visibility they need. The OpenTelemetry metric API enables capturing measurements about the execution of a computer program. 

Before diving into the OpenTelemetry Metrics API itself, let's look at it from a developer's point of view. Most developers already know metrics in some form: they are familiar with alerts that fire when a service violates a predetermined threshold, with process memory utilization, or with error rates. Others are more familiar with event streaming strategies, such as metrics aggregated and recorded by tracing or logging systems. With this in mind, let's look at the instruments within the OpenTelemetry Metrics API.

One of the special features of the Metrics API is that it distinguishes between the metric instruments at the semantic level rather than the eventual type of value they export. We can say the word “semantic” refers to how we give meaning to metric events, as they occur at runtime. Understanding this gives us an overview of the Metric instruments in OpenTelemetry and their functions. 

OpenTelemetry Metric Instruments and their functions

The OpenTelemetry (OTel) Metrics API provides six metric instruments. These instruments are created through the Meter API, which is the user-facing entry point to the SDK, and each instrument supports a single function that matches the instrument's semantics.

Metric instruments can be synchronous or asynchronous. Synchronous instruments are called inside a request and therefore carry the distributed context. Counter and UpDownCounter are the two synchronous additive instruments, and both support an Add() function. ValueRecorder is the synchronous non-additive instrument; it supports a Record() function to capture metric event data.

Asynchronous instruments, meanwhile, are defined by a callback that is invoked once per collection interval. There are two asynchronous additive instruments, SumObserver and UpDownSumObserver, while the asynchronous non-additive instrument is ValueObserver. All three support an Observe() function, meaning they capture one value per measurement interval.

Metric events captured through any instrument will consist of:

  • value (signed integer or floating-point number)
  • resources associated with the SDK at startup
  • distributed context (for synchronous events only)
  • timestamp (implicit)
  • instrument definition (name, kind, description, unit of measure)
  • label set (keys and values)

 Here is a quick summary of the six metric instruments and their properties:

Name               Synchronous   Additive   Monotonic   Function
Counter            Yes           Yes        Yes         Add()
UpDownCounter      Yes           Yes        No          Add()
ValueRecorder      Yes           No         No          Record()
SumObserver        No            Yes        Yes         Observe()
UpDownSumObserver  No            Yes        No          Observe()
ValueObserver      No            No         No          Observe()

Metric Provider

Initializing and configuring an OpenTelemetry Metrics SDK provides a concrete MeterProvider implementation. Once configured, the application chooses which instance to use, either a global instance or one supplied via dependency injection for more control over the configuration. Check the Metrics API specification for more details about implementing a MeterProvider.

Metric and Distributed Context

Synchronous measurements are closely tied to the distributed context at runtime, which includes the span context and correlation values. Correlation values matter because OpenTelemetry supports propagating labels from one process to another in a distributed computation. The (work-in-progress) Views API can be configured to select specific correlation keys to be applied as labels.

Implementing metric instruments: example

The OpenTelemetry API is available in several languages, and the mechanism used to implement each metric instrument can vary between implementations. The general specification might therefore not match every language exactly, so consulting the documentation for your specific SDK is important. Let's begin the implementation process.

The first step is to create an instrument and give it a name. It is also a good idea to provide label keys to optimize the metric export pipeline, and to initialize a label set of keys and values that align with the attributes on your metric events.

// initialize instruments statically or in an initializer, a Counter and a ValueRecorder
meter = global.Meter("my_application")
requestBytes = meter.NewIntCounter("request.bytes", WithUnit(unit.Bytes))
requestLatency = meter.NewFloatValueRecorder("request.latency", WithUnit(unit.Second))

// then, in a request handler, define the labels that apply to the request
labels = {"path": "/api/getFoo/{id}", "host": "host.name"}

Again, the overall approach remains the same even though details vary slightly between languages. Once the instruments are defined, recording metric events is straightforward:

requestBytes.Add(req.bytes, labels)
requestLatency.Record(req.latency, labels)
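
For comparison, here is a rough JavaScript equivalent of the pseudocode above, using the OpenTelemetry JS API. This is only a sketch based on recent @opentelemetry/api versions, where the ValueRecorder-style instrument is exposed as a histogram; instrument names and availability differ between SDK versions, and the request values are placeholders:

// Rough JavaScript equivalent (recent @opentelemetry/api versions; names may differ in older SDKs)
const { metrics } = require("@opentelemetry/api");

const meter = metrics.getMeter("my_application");
const requestBytes = meter.createCounter("request.bytes", { unit: "By" });
const requestLatency = meter.createHistogram("request.latency", { unit: "s" });

// placeholder request values; in practice these come from your request handler
const req = { bytes: 512, latency: 0.42 };
const labels = { path: "/api/getFoo/{id}", host: "host.name" };

requestBytes.add(req.bytes, labels);
requestLatency.record(req.latency, labels);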

Obfuscation added to log monitoring
https://fusion-reactor.com/blog/evangelism/obfuscation-added-to-log-monitoring/ (Tue, 18 Jan 2022)

Obfuscation in logs is now in beta

Passwords, credit card information, and API keys are automatically obfuscated in logs sent from FR. However, should you have switched obfuscation in logs off in the UI, then you may need to manually obfuscate them.

You can create multiple obfuscation rules, allowing obfuscation to occur on different file groups and text patterns.

Description | Log Pattern | Regex Pattern | Replace Value
Remove any password in any log file | *.log | .*passw.*[a-z]=.* | passwordRedacted
Remove any auth values from any access log | *access.log | .*auth.*[a-z]=.* | authRedacted
Remove any IP address in Nginx logs | /opt/nginx/* | .*client_ip\:.* | ipRedacted
Remove any credit card information from FusionReactor logs | /instance/tomcat/logs/* | ^(?:4[0-9]{12}(?:[0-9]{3})? | cardInfo
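
To illustrate how a rule of this kind behaves, here is a simple regex replacement in JavaScript. This is only an illustration of the mechanism, not FusionReactor's internal implementation; the log line and pattern are made up:

// Illustration only: applying a regex obfuscation rule to a log line
var rule = { regexPattern: /passw\w*=\S+/gi, replaceValue: "passwordRedacted" };

var line = "2022-01-18 10:01:22 login ok user=alice password=hunter2";
var obfuscated = line.replace(rule.regexPattern, rule.replaceValue);

console.log(obfuscated);
// -> "2022-01-18 10:01:22 login ok user=alice passwordRedacted"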

Logs generated by the FusionReactor agent are shipped automatically to the cloud with no additional configuration required. You can prevent logs from FR from being sent to the cloud through blacklisting, which is useful if you wish to save data or have security concerns. Blacklisting is easy and uses regex pattern matching. See our docs for how to do this.

Join our beta program

Log monitoring and obfuscation in logs are currently part of our beta program but will be in production shortly. To find out what's next in our pipeline, register for our next webinar.

Why monitor a monolithic server
https://fusion-reactor.com/blog/evangelism/why-monitor-a-monolithic-server/ (Fri, 07 Jan 2022)

Why monitor a monolithic server?

It is quite challenging to improve the performance of a monolithic server application: change is difficult when things are tightly coupled, unlike microservices, where individual components are more likely to be owned by separate development teams. However, optimizing the performance of a monolithic application is not impossible, particularly if you have the right toolset.

There are many tools that can help, although many mainstream APMs have left monolithic servers behind and are focusing on microservices and distributed systems. FusionReactor has an on-premise APM that was built specifically to improve performance on monolithic servers, as well as a cloud observability platform that looks after distributed systems. Both the on-premise and cloud versions continue to be developed by the Intergral GmbH software house in Germany.

Common monolithic application issues

  • People move from one team to another or from one organization to another as time goes on. Due to the centralized nature of monolithic code-bases, with time a specific portion of code might become idle. Consequently, the piece of code may no longer belong to anyone.
  • With a team working on different components of the code, there is no clear separation of services, and there is no contract or SLA.
  • The lack of separation of services makes it difficult to find the regression. Consequently, it can be difficult to discover the root cause when a component starts to degrade.

Improve the performance of your monolith

  • APIs often accumulate data as they are used. Some of that data becomes obsolete over time, so dispose of it if it is no longer needed.
  • Running similar tasks together in parallel can be useful if their nature and performance are comparable.
  • Using a request-level cache is important in a monolithic architecture because the same call is often made many times within a single request. With a request-scope cache, the result of the first invocation is cached and reused for the rest of the request.

A good APM will enable you to improve the performance of a monolith system both quickly and with relative ease. Changing anything within a monolith is difficult as things are so tightly coupled together.

Maintaining monolithic servers is often difficult due to code decay and technical debt

According to a recent survey, 50% of developers work with both monolithic and distributed environments. Many older applications are vital to the infrastructure but suffer from technical debt and code decay. They function well enough, so rewriting them is rarely the best option, since developing new applications often provides a higher return on investment. Monitoring monoliths is just as important as monitoring distributed systems, arguably more so, as monolithic applications are often written in older code that modern developers simply no longer learn. Tools like FusionReactor APM enable these older applications to run efficiently with no need to recode or migrate.

Monitoring a Java Virtual Machine (JVM)

Java application architecture is built around the JVM (Java Virtual Machine), which in essence interprets and converts Java bytecode into host platform operations. If the JVM, which underpins all Java middleware such as Tomcat, JBoss EAP, WildFly, GlassFish, and WebSphere, experiences performance issues, the impact on the services it supports can be significant.

JVM monitoring is an essential part of any Java APM strategy. To resolve server-side bottlenecks, IT Ops and DevOps teams use JVM performance metrics, and JVM monitoring can also help developers and architects by discovering code-level problems.

Using an APM such as FusionReactor, you can identify code-level bottlenecks such as thread synchronization issues, deadlocks, memory leaks, garbage collection issues, and insufficient heap memory.

How can an APM help manage a monolithic server?

Automatic Root Cause Analysis


When a critical error or exception occurs within your monolithic server, an automatic Root Cause Analysis (RCA) capability can alert the developer immediately. In FusionReactor's Event Snapshot, developers can view the complete source code, stack trace, variables, and environment state of an error at the point it occurred. As a result, you will save hours of debugging and dramatically reduce Mean Time to Detect (MTTD).

Debugging in a production environment

QA phases, staging environments, and automation have all been used to prevent bugs from reaching production. Sometimes, however, bugs still reach production. Whenever they do, we need a strategy to deal with them in a safe and efficient manner.

Today’s debugging tools allow you to safely and reliably debug in a production environment without affecting users or causing downtime.

What makes it safe to debug monoliths in production?

There are a few things to keep in mind when debugging in production;

  • Performance is not significantly affected by debugging
  • You can still use your app and debug at the same time
  • Secure data is not accessible from the outside
  • Debugging provides you with enough information to locate and resolve the issue as soon as possible.

When you are debugging, you want as much information as possible in the shortest amount of time. Having to jump between multiple systems and retry fixes several times only adds to the stress of dealing with a critical issue.

Use continuous profiling to find performance issues

Continuous profiling is the process of collecting data on application performance over time, so that developers can analyze that data from production environments.

Continuous profiling is used to determine which components, methods, or lines of code are the most resource-intensive. This insight can be used to improve the performance of the profiled application and to understand runtime behavior.

FusionReactor Ultimate has a number of continuous profiling tools, including:

Continuous code profiler

The Code Profiler makes it easy to run code performance analysis in your production environment at low overhead. Since profiles are generated automatically, you won’t miss any issues.

FusionReactor’s Code Profiler provides instant insight into how your application is performing down to the method level.

Continuous Thread profiler

Using continuous thread profiling and stack trace analysis, you can track down performance issues on your monolith quickly and efficiently. With an APM that includes a thread profiler, like FusionReactor, you can quickly profile or stack an individual thread to identify performance, deadlock, and concurrency issues.

Continuous memory profiler

You can gain a detailed understanding of Java memory spaces and garbage collection with continuous memory profilers. Consequently, you can identify memory leaks and optimize memory usage in your production Java applications by using the FusionReactor low overhead memory profiler and getting instant insight into the heap.

What is a memory leak?

A memory leak is caused by an application defect: an object retains memory and cannot be collected because it is still referenced by another live object. Leaked objects are reachable from at least one GC root (or are themselves GC roots), so there is always a reference path that starts at a GC root and ends at the leaked object. Read our article on how to find memory leaks in your application for further details.

By analyzing heap utilization, the memory profiler detects possible memory leaks or excessive object creation in real time.

Continuous CPU profiler

With a CPU profiler in your APM, you can find and tune inefficient processes running on your monolithic application server.

The Java Profiler is a low-overhead tool that shows which code is being executed, so you can determine which running functions might be slowing a thread down.

Conclusion – so how do you improve the performance of a monolithic server?

Monolithic performance can be challenging to manage. To manage it better, you need deep insight into the metrics; sometimes the simplest things are the most time-consuming. Consider using tools such as profilers and automated root cause analysis, not only to gauge performance but also to quickly identify the reason when something goes wrong.

OTel Tracing stability milestone reached
https://fusion-reactor.com/blog/news/otel-tracing-stability-milestone-reached/ (Tue, 04 Jan 2022)

OTel Tracing stability milestone reached

The OpenTelemetry Collector has reached its tracing stability milestone and released its first GA version. With version 0.36.0, the tracing components come with stability guarantees for tracing in the OpenTelemetry Protocol (OTLP), as well as end-to-end support for collecting, processing, and exporting traces. Traces, and the components that process them, will keep a stable API and configuration even as work continues to stabilize metrics and logs. Using the latest semantic conventions, the Collector now provides a common set of attributes, and semantics for those attributes, ensuring that consistent metadata is available for all telemetry data.

Tracing can now be used with confidence in production thanks to this milestone for OpenTelemetry.

Collector release highlights include:

  • Full support for OTLP v0.9.0
  • Updated to the latest specification semantic conventions
  • Auth and config improvements
  • Improvements to the pdata API used in Collector components
  • Removal of dependencies on deprecated components

In other project updates, tracing support is also stable in the OpenTelemetry language libraries for Java, Go, .NET, Python, and C++. With the OpenTelemetry API/SDK, you can now instrument your production applications and begin collecting trace data.

What does OTel tracing stability mean?

It means OTel tracing is a stable, credible tool that you can use in production with confidence.

Metrics in OpenTelemetry

OpenTelemetry Metrics APIs allow the capture of measurements about the execution of a computer program at run time. Developers can access their service’s operational metrics by using the Metrics API, which is designed specifically to process raw measurements and produce continuous summaries of those measurements.
