The revolution will be verbosely {,b}logged

The New Papertrail Dashboard

Posted by @coryduncan on

We’ve recently rolled out several long-overdue improvements to the Papertrail™ dashboard. The old dashboard was adequate, but it didn’t do a great job of presenting larger accounts with many groups, searches, or systems. Lack of personalization was another issue. For example, it assumed every user cared equally about every group in the account. The new dashboard addresses these problems with a flexible design that can be tailored to individual preferences.

Favorite groups

As the number of groups increases, it’s difficult to find the specific groups that interest you. With favorite groups, you can now tailor your dashboard to only show those groups.

Favorite a group by toggling the star icon next to the group name:

Traffic lights

Last year we released a feature called “traffic lights” - a visual indicator of event status for each system. Now we’ve brought something similar to the dashboard, showing a traffic light for each group. A yellow or red light means the entire group has been inactive for one hour or one day, respectively. This can be used to ensure a group of systems is still logging after a deployment, or is no longer logging after being decommissioned.

Grid / List toggle

There are two ways to view groups on the dashboard - as a grid or a list. The grid view shows expanded information and is enabled by default. The list view, on the other hand, is more compact, which can be useful when comparing attributes across a large number of groups.

Log data transfer usage bar

Being aware of your account’s log data transfer usage is important because each plan includes a specific usage limit. When usage approaches or exceeds the plan limit, it may be time to add event filters or change plans. Log data transfer usage has always been available in settings, but is now easier to spot at the bottom of the dashboard.

Responsive UI

The new dashboard is designed to work well on any screen size. Use the dashboard from your desktop, mobile device, or anything in between.

We hope you enjoy the new dashboard experience. If you have any feedback, please send it our way.

Get Insights into Azure with Papertrail

Posted by Jennifer Marsh on

When your own infrastructure can’t scale to new hardware and applications without huge monetary investments, you can turn to cloud hosting. Microsoft Azure caters to businesses with mainly Windows environments, but hosted resources can become difficult to monitor as you scale up. As you add more VMs and applications to your cloud, you may struggle to keep track of logs across the entire network. Every time you create a new VM, upload a new application, develop a new website, build a new database, or add any other resource, Azure produces a variety of logs stored in different locations. That can make it difficult to find the information you need to monitor your services or troubleshoot problems.

Below, we’ll show you how Papertrail™ lets you stream your application logs directly to a central location, where its aggregated Event Viewer offers targeted monitoring, search, and live tail functionality.

The Many Types of Azure Logs

Azure creates a number of logs depending on the resource you create. When you provision a new database, for instance, Azure generates activity and diagnostic logs that record the changes you make from your Azure portal. You can read more about monitoring and logging in Azure in Microsoft’s documentation.

Additionally, every Windows installation includes Event Viewer for reviewing operating system events: failed logins, security changes, system changes, and application events. Whenever you host a website or deploy a .NET application to Azure, you also have application logs to monitor. Monitoring logs from a single application doesn’t require much effort, but once you accumulate dozens across a variety of applications, Papertrail can help you aggregate and stream them to one centralized location.

Azure Log Analysis Tools

When you’re creating an application in Azure, Azure produces activity logs and makes a diagnostic interface available for review from your Azure portal, in the Activity Log section of the resource group. The activity log interface provides a general overview of data activity and errors, including error counts.

Azure Screenshot
(© 2018 Microsoft Corporation. All rights reserved.)

Azure has a number of log reports and diagnostic tools, including a PHP log analyzer that can generate a report of errors.

Azure Screenshot
(© 2018 Microsoft Corporation. All rights reserved.)

The output for the diagnostic report looks like the following:

Azure Screenshot
(© 2018 Microsoft Corporation. All rights reserved.)

The diagnostic report displays a list of errors, along with the time each occurred and its message.

Using Kudu to Access Logs

Azure offers several ways to view, download, and migrate log files. Because of their size, these files are stored as blobs.

Azure includes an application called Kudu that lets you view files in your browser. Kudu is available in the “Advanced Tools” section of your application resource group.

Image of recent errors
(Kudu© is listed as an Advanced Tool in the application resource group)

Click the CMD menu option in the Debug Console.

Image of Kudu Advanced Tools
(Kudu© menu options)

You then get a list of your files.

Image of Kudu Advanced Tools
(Kudu© file list)

Log files are located in the LogFiles directory.

Note: For a quick shortcut to Kudu, just type the following into your browser:

http://<yoursitename>.scm.azurewebsites.net

Kudu is great for getting an idea of how many files you need to export and how large they are, but it’s clunky for file transfers. You can download log files one at a time from this interface, but you probably have several blobs to export.

Microsoft provides Azure customers with several options for managing files. Azure Storage Explorer gives you an Explorer-like interface for viewing and managing files. AzCopy is a command-line utility that lets you move files from an Azure storage location to your local drive or to another storage endpoint.
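
For example, a single AzCopy command can pull a container of log blobs down for local analysis. This is a sketch using AzCopy v10 syntax; the account, container, path, and SAS token are placeholders:

# Copy blobs from an Azure storage container to a local folder (all values are placeholders)
azcopy copy "https://<account>.blob.core.windows.net/<container>/<path>?<SAS-token>" "C:\azure-logs" --recursive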

Papertrail Logging Analytics Extends What Azure Offers

With the Papertrail solution, you can aggregate events in one location, then graph, search, monitor, and identify errors within your applications. The screenshot below shows a search based on the virtual machine name, which is useful when you want to view events from a specific Azure service. As long as the search phrase stays in the text box, only events from that VM are shown. The benefit is that if you know a certain VM is acting up, you can filter out the noise from other applications and servers and focus on the one giving you problems.

Papertrail event viewer
(The Papertrail Event Viewer search with associated graph)

Another benefit is graphing directly from the Papertrail Event Viewer. For instance, suppose you want to know the number of events logged on your VM within the last 30 minutes. Just click the “Graph” icon next to your search and choose the time window in the top-right corner of the graphing section. Papertrail shows you an interactive graph of your logged events based on your search.

Send Application and Windows Event Logs to Papertrail

Enterprise organizations usually have several Azure applications spanning multiple resource groups. With Papertrail and NXLog, you can aggregate logs across all of them: connect your Windows event logs to the Papertrail service using the free NXLog agent. NXLog monitors the Windows event log, so any operational or application events logged there are forwarded to Papertrail.

The first step is to install and configure NXLog on your virtual server, then restart the NXLog service so it starts pushing logs to Papertrail. Once the installation is finished and the service has been restarted, open the Papertrail Event Viewer and you’ll see Windows events listed as they are logged on your VM.

Papertrail event viewer
(Windows Azure events in the Papertrail Event Viewer)
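
For reference, the parts of nxlog.conf that matter for Papertrail look roughly like the following. This is a minimal sketch: the host, port, and section names are placeholders, and Papertrail’s setup instructions include a full template that also configures the TLS certificate bundle.

# Minimal sketch of the Papertrail-relevant sections of nxlog.conf
# (host, port, and section names are placeholders)
<Extension syslog>
  Module xm_syslog
</Extension>

<Input eventlog>
  Module im_msvistalog
</Input>

<Output papertrail>
  Module om_ssl
  Host logsN.papertrailapp.com
  Port 12345
  Exec to_syslog_ietf();
</Output>

<Route eventlog_to_papertrail>
  Path eventlog => papertrail
</Route>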

You can also stream application events from your own log files by pointing NXLog at “watch files.” In nxlog.conf, you’ll find this input commented out:

# Monitor application log files
<Input watchfile>
  Module im_file
  File 'C:\\path\\to\\*.log'
  Exec $Message = $raw_event;
  Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
  SavePos TRUE
  Recursive TRUE
</Input>

Change the File path to include your own custom log file for NXLog to watch and push to Papertrail. Here is an example:

# Monitor a single application log file
<Input watchfile2>
  Module im_file
  File 'C:\\Papertrail\\test.log'
  Exec $Message = $raw_event;
  Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
  SavePos TRUE
  Recursive TRUE
</Input>

With the above configuration, any time changes are made to C:\Papertrail\test.log, the new events are sent to Papertrail. Note that the backslashes are important in these path configurations; if they are missing, the process will fail.
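
One caveat, assuming your nxlog.conf routes inputs explicitly (as Papertrail’s template does): the new input also needs to appear in a Route block, or NXLog won’t forward it anywhere. Something like the following, with the input and output names adjusted to match your own config:

# Route the new watchfile input to the Papertrail output (names are placeholders)
<Route watchfile_to_papertrail>
  Path watchfile2 => papertrail
</Route>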

If you have a .NET development environment, you can also use log4net, installed from Visual Studio. Once it’s installed and configured for Papertrail, it will log your application’s errors and exceptions alongside your other events.
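
As a rough sketch, the Papertrail side of a log4net configuration is a remote syslog appender along these lines. The host and port are placeholders, and depending on your log4net version the remoteAddress may need to be an IP rather than a hostname, so treat this as a starting point rather than a drop-in config:

<!-- Hypothetical appender: forward log4net events to Papertrail via remote syslog (placeholder host/port) -->
<appender name="PapertrailAppender" type="log4net.Appender.RemoteSyslogAppender">
  <remoteAddress value="logsN.papertrailapp.com" />
  <remotePort value="12345" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%utcdate %-5level %logger - %message%newline" />
  </layout>
</appender>
<root>
  <level value="INFO" />
  <appender-ref ref="PapertrailAppender" />
</root>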

Sending Azure Diagnostic Logs

For diagnostic logs, you need an agent such as Logstash or Fluentd to pull your logs from Azure and stream them to Papertrail. Before you start, configure an Azure Event Hub to aggregate your logs, and grant the agent permission to read from it.

After you set up a hub, configure an input plugin to read from Azure; there are several listed on the Fluentd plugin page and the Logstash plugin page. On the output side, use Fluentd’s remote syslog plugin or Logstash’s syslog output plugin to send the logs to Papertrail, pointing it at the log destination shown on your account’s Log Destinations page.
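
Put together, a Logstash pipeline for this might look roughly like the following. It’s a sketch, not a tested config: the Event Hub connection string, destination host, and port are placeholders, and the exact options depend on the plugin versions you install.

# Hypothetical Logstash pipeline: read Azure diagnostic logs from an Event Hub
# and forward them to Papertrail over TLS-wrapped syslog (all values are placeholders)
input {
  azure_event_hubs {
    event_hub_connections => ["Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<name>;SharedAccessKey=<key>;EntityPath=<hub>"]
  }
}
output {
  syslog {
    host     => "logsN.papertrailapp.com"
    port     => 12345
    protocol => "ssl-tcp"
  }
}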

Conclusion

Azure can be beneficial for any organization that needs to scale fast and isn’t looking to build out its own internal infrastructure. Monitoring a vast array of portal applications, however, can introduce challenges. Papertrail helps by extending what Azure offers, aggregating event logs into a consolidated view that simplifies troubleshooting. Papertrail is designed to let you easily search, archive, filter, live tail, and graph application logs, which can make issue resolution across applications much easier. Sign up for a free trial today.

Simplicity in Dev Tools is a Lost Art

Posted by Jason Skowronski on

I remember the days when I’d develop using simple Linux command line tools. When I worked at Amazon almost 10 years ago, I used a lot of old-school tools like Vim, ssh, and grep. It took some time to get familiar with them, but I figured them out just by reading a manual page or watching my coworkers. For better or worse, developer and ops tools are getting so complex we have to read books to become proficient. Newer tools offer more features and scalability, but that comes at the cost of added complexity to learn and manage. How should we decide when to stick with simple tools and when to invest in more powerful or complex ones?

Complexity Creep

The search for development bliss has made our toolchains more complex. Instead of using a simple terminal text editor like Vim, now developers use ones like Atom or VS Code that include hundreds of modules. Package managers like NPM pull hundreds or thousands of transitive dependencies of often unknown origin. Instead of writing JavaScript directly, now we write in dozens of languages from ES6 to CoffeeScript to C++, all transpiled back to JS.

On the infrastructure side, the cloud software revolution freed us from managing hardware, but now we manage an even more complex set of microservices and orchestration tools. Microservices run inside frameworks… on app servers… which run inside containers… on Kubernetes… in the cloud… configured by Terraform. Monitoring these services is no easy task either. Instead of using simple text files for logs, solutions like Elastic Stack shard data across a cluster of log servers, and agents trace requests across distributed nodes.

This is just a small slice of the technologies developers need to learn. Take a look at this cloud native landscape from the Cloud Native Computing Foundation (CNCF).

image of cloud native landscape
© 2018 Cloud Native Computing Foundation. All rights reserved. CNCF Cloud Native Interactive Landscape

This is impressive, but can you imagine being a beginner and having to learn all these technologies? It’s enough to make your head spin! Every solution you add to your stack requires extra time to set up and maintain, extra time to learn and train employees, and extra mental space. You have to spend days comparing the pros and cons of different solutions, and when something goes wrong, you have to deal with all of that complexity to fix it. It makes one yearn for the days of simpler tools.

Innovators Should Value Agility

Too often I see startups adopting the latest buzzword technology, whether it’s a new JavaScript framework on the front end or an orchestration framework on the back end. Usually these new technologies are more complex and introduce more risk than the less exciting and mature ones.

The vast majority of new products don’t need enterprise-scale infrastructure right away. Using overly complex solutions too early can lead to technical debt as your needs change. Creating a minimum viable product (MVP) isn’t the only way to stay lean; also consider your minimum viable infrastructure. Invest in solutions that are simple initially, and swap them out as your needs change.

The most important thing startups and innovators need to be successful is AGILITY. If you are still reaching product market fit or pivoting your solution, you need to quickly validate the hypotheses for your product with minimal waste or overhead. The needs for your project and architecture will inevitably change as you adapt to market needs. To be agile, you need tools that allow for fast iteration and that can be quickly adopted and changed when needed.

Use the Right Tool for the Job

When you don’t need complexity, take advantage of simpler dev tools. If I have to make a quick text edit, I’ll use a simple and fast text editor like Vim. If I’m diving into a complex project, I’ll take the time to load the more complex VS Code or Atom.

When starting a new project, think about using a managed platform like Heroku so infrastructure is out of the way. Setting up an EC2 server and load balancer might not seem hard, but now imagine automating deployments several times per day with CI/CD, hosting different versions, and scaling it, and you see how complexity starts to add up.

Also, keep your monitoring tools simple so your ops team isn’t bogged down setting them up. Tools like Splunk and Elastic Stack are powerful, but they take more time to set up on a cluster, including the agents, the search indexers, and the front-end UI.

When you’re starting a new project, Papertrail™ is an easy logging solution to use. It handles log management for you in the cloud so you don’t need to manage a log server. You don’t need to install agents, and you’ll be done typically in 45 seconds. Its live tail mode looks like what you’d see in your Linux terminal—and it works nearly as fast!
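
On a typical Linux host, the quick setup amounts to pointing the system syslog daemon at your log destination. For rsyslog, that is roughly a one-line addition; the host and port below are placeholders from your own account:

# /etc/rsyslog.conf: forward everything to Papertrail over UDP (placeholder host and port)
*.*    @logsN.papertrailapp.com:12345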

Simplicity in your dev tools and infrastructure can clear space in your brain. Don’t waste too much time on fancy tools early on. Save your brain space for things that really matter to your business, like building a better product and helping your customers.

Modern Apps Make Log Aggregation More Important Than Ever

Posted by Jennifer Marsh on

With the popularity of microservices, cloud integration, and containers, the distribution of log files can get out of hand. If you have several dozen applications distributed across the cloud, it gets difficult to aggregate and review logs when something goes wrong. When you distribute applications in this way, log aggregation is more important than ever to quickly analyze and fix problems.

Imagine a scenario where one of those applications crashes and you need to find the cause and fix it. Operations administrators and developers have to dig across the network to find the right log that gives them the right answer. Without log aggregation, this can add hours to analysis, and every minute counts as downtime persists and damages your customer experience.

Distributed Apps and Logging Integration

Microservices

Microservices change the traditional way developers build applications. Instead of one monolithic codebase, small autonomous services are each built around a particular function. Since each microservice has its own codebase, it also has its own logs. When one microservice crashes, it can affect others, making bugs difficult to track down.

Containers

Instead of one monolithic deployment, the system is built from small modular components deployed in containers. Unlike virtual machines, containers share the host’s underlying operating system rather than running their own. A platform like Docker has built-in support for capturing logs in JSON files, but the operator is still responsible for aggregating those logs for analysis.

Serverless architecture

Your developers no longer need to focus on the infrastructure that hosts the application; they deploy code to a provider such as AWS and let it run in the cloud. This removes much of the hardware and configuration overhead, but it means logs are stored with the cloud provider hosting the architecture, for example in CloudWatch Logs.

Multi-cloud distribution

AWS, Azure, Google Cloud, Digital Ocean, the list goes on. You may even have a hybrid model with your own on-prem private cloud. After a critical production issue, your administrators must download and combine logs for a holistic view of the health of your infrastructure.

Edge computing

Edge computing allows you to run functions close to the client to reduce latency and bandwidth needs. This model is common for CDNs that use edge servers at data centers across the globe. Each server delivers content based on the user’s location, speeding up content delivery. However, each one also creates its own logs which need to be centralized for analysis.

IoT and mobile computing

Apps deployed to IoT and mobile devices have their own logs, stored (and eventually deleted) on the device itself. Without a centralized logging or crash reporting solution, your support team must ask the customer to manually send logs for troubleshooting, which is cumbersome and slows time to resolution.

Log Aggregation is More Important Than Ever

Aggregating logs is important because it’s not always obvious which system is the culprit. Administrators must comb through logs on separate platforms to find an error that points them toward a solution. Even once the root of the problem is found, a domino effect may already have corrupted data or caused other applications and services to fail. Repairing a suite of applications can take weeks when logs are fragmented across several systems.

With logs scattered across each service’s location, administrators and developers can’t get a full picture. They have to collect the logs, transfer them all to centralized storage, and only then perform their analysis. Once the logs are together, transactions across your infrastructure can be traced downstream to the service that’s the root cause.

The answer to this fragmented logging issue is to provide one pane of glass in a centralized location that lets you see your entire application environment.

Papertrail and Log Aggregation

Papertrail™ creates that single pane of glass for viewing all logs in one central environment. Now you have one place to search, review, skim, and analyze. No more SSHing into one server at a time or manually copying files from multiple locations.

By aggregating your logs in one location, you can debug faster and even analyze them interactively in real time with the live tail service. Seek by time or context, and use color coding to organize events and quickly review issues across frameworks and languages.

The Papertrail solution’s log velocity analytics answers the question “How often does this happen?” Find trends in your bugs so you can stop them before they become persistent errors.

(Papertrail log velocity analytics)

Setup is quick, and the result is a frustration-free monitoring and analysis environment for administrators who work with 2 servers or 2,000. It’s a fit for any organization with distributed apps that needs answers quickly when a mission-critical app fails.

Search history: access recent searches in Papertrail

Posted by @rpheath on

When troubleshooting, a single search can become a theme with variations. Even if it’s not worth saving, it’s worth remembering, for a while. Until today, Papertrail didn’t make getting back to recent searches especially easy, but that’s changing.

Today we’re excited to release a new feature that provides user-specific access to search history.

How it works

There are two ways to access search history in Papertrail’s Event Viewer:

By clicking

The magnifying glass in the search input now toggles the search history view:

Once the search history is opened, click on the search you want to perform.

Keyboard shortcuts

We designed search history to be an extension of the search input, so it felt natural to support a keyboard workflow.

When the search box has focus, press the down arrow to open the search history. Once it’s open, navigate the list using the up or down arrow keys. When the right search is highlighted, press Enter to execute it.

When a query is close, but requires a small tweak, it can be tedious to rewrite the entire query just for that little adjustment. With search history, any recent search can be copied down to the search box for modification.

Right now, a search can only be saved in the event viewer if it’s currently active. With search history, a recent search can be saved any time, without needing to load it first. This keeps the focus on troubleshooting, without interruptions to save searches.

What do you think?

Not all searches need to be saved, but that doesn’t mean those searches aren’t important. With search history, they’re only an arrow away.

We hope you find search history useful. Give it a try, and let us know if you have any questions or feedback. Enjoy!

Lightning Search & Log Velocity Analytics

Posted by @jshawl on

We’ve been working on a few new features that will dramatically increase the speed at which logs are searched and enhance visibility into log volume. These features are immediately available to new customers and will be rolled out to existing customers in the coming weeks.

Lightning Search changes the way Papertrail™ stores and searches logs, resulting in a dramatic increase in search speed. This new architecture also enables us to release a new feature: Log Velocity Analytics. Understanding log volume has never been easier. While viewing logs, click on the graph icon, and visually explore log data. The graph will reflect the current search, group or system.

Whether you’re trend spotting over a week, or diagnosing a spike in the last 10 minutes, Log Velocity Analytics reduces the time needed to understand the data in your logs.

Set up remote_syslog2 quickly with official Chef, Puppet, and Salt recipes

Posted by @lyspeth on

Papertrail now officially supports automated setup of the remote_syslog2 logging daemon with Chef, Puppet, and Salt:

All three support Ubuntu, Amazon Linux, CentOS/RHEL, and Debian 9. (A previous version of this post mentioned an upstream issue with Puppet and Debian 9 that has now been resolved.)

The Chef cookbook supports Chef 12.21.x and 13.x, and the Puppet module has been tested with Puppet 4.x and 5.x.

Get set up easily by providing:

  • the account’s log destination information
  • a list of files to monitor
  • any desired config options for typical remote_syslog2 operation
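
Whichever tool you use, those inputs end up rendered into remote_syslog2’s own config file (typically /etc/log_files.yml), which looks roughly like this; the host, port, and file paths below are placeholders:

# Sketch of the rendered remote_syslog2 config (values are placeholders)
files:
  - /var/log/nginx/access.log
  - /var/log/myapp/*.log
destination:
  host: logsN.papertrailapp.com
  port: 12345
  protocol: tls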

If your config strategy of choice is cookbooks, modules, or formulas, try these out. Tell us if you run into any speed bumps.

Hello cron job monitoring & alerts, goodbye silent failures

Posted by @coryduncan on

Papertrail has had the ability to alert on searches that match events for years, but what about when they don’t? When a cron job, backup, or other recurring job doesn’t run, it’s not easy to notice the absence of an expected message. But now, Papertrail can do the noticing for you. Today we’re excited to release inactivity alerts, offering the ability to alert when searches don’t match events.

Set up an inactivity alert

From the create/edit alert form, choose “Trigger when no new events match”

Inactivity alerts

Once saved, the alert will send notifications when there are no matching events within the chosen time period. Use this for:

  • cron jobs
  • background jobs which should run nearly all the time, like system monitors/pollers and database or offsite backups
  • lower-frequency scheduled jobs, like nightly billing

Try it

If you have cron jobs, backup jobs, or other recurring or scheduled jobs, they almost certainly already generate logs. Here’s how to have Papertrail tell you when they don’t run or run but don’t complete successfully:

  1. Search for the message emitted when a cron job finishes successfully (example: cron)
  2. Click “Save Search”
  3. Attach an alert, such as a notification to a Slack channel or an email.

No logs? No problem.

Very rarely, a recurring job doesn’t generate log messages on its own. For those, use the shell && operator and logger to generate a post-success log message. For example, ./run-billing-job && logger "billing succeeded" will send the message billing succeeded to syslog if and only if run-billing-job finishes with a successful exit code. Use "billing succeeded" as the Papertrail search.
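
For a scheduled job, the same pattern drops straight into a crontab entry. The path, schedule, and tag here are hypothetical:

# Hypothetical crontab entry: run nightly billing at 02:15 and log success to syslog
15 2 * * * /opt/app/run-billing-job && logger -t billing "billing succeeded"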

What do you think?

Give inactivity alerts a try and if you have questions or feedback, let us know.

Green means go(od): Spot inactive log senders at a glance

Posted by @lyspeth on

Ever wanted to quickly see which systems haven’t sent logs recently? Now it’s as easy as checking a traffic light. Visit the Dashboard and click a group name, then scan the list of systems:

Systems with activity status

  • Green-light systems are currently sending logs
  • Yellow-light systems aren’t currently sending logs, but have sent logs in the last 24 hours
  • Red-light systems haven’t sent logs in the last 24 hours

Text on the right shows when Papertrail last saw logs.

It’s an easy way to make sure critical systems are still logging after a deployment or upgrade.

Try it out, and if you see anything unusual, or just want to opine on the intervals or colors, tell us.

Advanced event viewer keyboard shortcuts

Posted by @coryduncan on

Today we’re excited to release two new keyboard shortcuts within the event viewer:

Highlight and link to multiple events

When a series of events is relevant, it can be useful to share those events with teammates. This is now possible.

Holding Shift will put the event viewer into “selection mode.” While holding down Shift:

  • Start a selection by clicking the selection button next to an event.
  • Select a range by clicking a selection button above or below an existing selected event.

Event selection

The browser URL will update to indicate which events are selected. From there it’s as easy as copying and pasting the link. Clear a selection by pressing Esc.

Retain search query when clicking

Extending the idea of flexible context to the entire event viewer, holding Alt while clicking a link will retain (instead of replace) the current search query. This works for orange and blue context links as well as click-to-search.

Retain search query

To see these and all other keyboard shortcuts, press ? while in the event viewer. Try out the new shortcuts and as always, let us know if we can do better. Enjoy!

Use Zapier to send logs anywhere

Posted by @lyspeth on

Papertrail’s search alerts are great, but what happens when you need a specialized integration, or want to grab something other than raw messages and counts – like particular fields from a message, or data to analyze later?

Now you can trigger a Zap from a Papertrail alert using a webhook, and the Zap can then perform any action Zapier supports. The example setup below sends data on printer service behavior to a Google Sheet for later analysis.

Set up the Zap

Sign up for Zapier, or log in.

Create a new Zap. Under Built-In Apps, select Webhooks by Zapier, then Catch Hook, and save the new Zap.

create_zap.gif

In the Pick off a Child Key dialog that appears:

child_key_dialog.png

enter payload.events to get to the event details, then click Continue to show the Zap’s webhook URL.
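
The payload.events key works because Papertrail’s webhook notification delivers the matched events inside a payload object. As Zapier sees it, the structure is roughly the following (fields abbreviated and values illustrative; see Papertrail’s webhook documentation for the full schema):

{
  "payload": {
    "saved_search": { "name": "<saved search name>", "query": "<saved search query>" },
    "events": [
      {
        "received_at": "2018-05-01T12:34:56Z",
        "source_name": "<sender name>",
        "program": "<program name>",
        "message": "<matched log line>"
      }
    ]
  }
}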

Set up the alert

Create a saved search to find the lines of interest, then add a Zapier alert integration. Grab the webhook URL and paste it into the alert, then save.

setup_zap_alert.gif

Process data

Create a Google spreadsheet with column names for the relevant fields: received_at, source_name, program, message.

printer status data

Then test the data-sending step by clicking Send test data on the Papertrail alert and, back in Zapier, clicking OK, I did this.

send_test_data.gif

Once the test has succeeded, set up the action. Select the Google Sheets app, then Create Spreadsheet Row. Select the spreadsheet and worksheet created earlier, and fill in the fields with selections from the payload.

set_up_sheets.gif

Voila! A spreadsheet that will dynamically update with details from the matched events when the Papertrail alert fires.

If custom alerts going right into your app of choice sounds great, give it a try and let us know your thoughts.

Improved Log and Account Access Permissions

Posted by @rpheath on

Starting today, it’s possible to grant a user access to logs from certain senders/groups (within the same Papertrail organization). Additionally, we’ve added specific permissions for managing users, changing plans, and purging logs.

Here’s an example, where an administrator is changed to have read-only access to certain groups:

Group permissions screencast

What’s possible?

Papertrail’s granular access control and permissions allow:

  • Companies to segregate access by responsible team, like granting access to logs from a staging environment or a specific product.
  • Consultants and hosting providers to provide limited access to many customers, while still managing all logs themselves.
  • Admins or accounting teams to handle less-common changes like adding users, changing plans, and purging logs.

These new permissions keep access clean within a single organization and may reduce the need for multiple organizations. Some may still benefit from having multiple organizations, or a combination of both multiple organizations and granular organization-specific permissions.

Give it a try

To change permissions, visit the Members section. And as always, we’d appreciate hearing your ideas.

Event actions: flexible context, fast troubleshooting

Posted by @coryduncan on

Today we’re excited to release new ways to act on specific events, including seeing surrounding and related context, copying a deep link URL, and transitioning to the command-line.

Event actions

With this new feature, event actions, one can:

  • Link to an event in the same context you’re viewing. An easy way to share an event with a teammate or save it for reference.

  • Transition to the Papertrail CLI at the same point the event occurred.

  • Show an event within a different context, like a specific system, program, or group. Switching contexts allows quick examination of multi-line events or multi-system incidents.

Because text selection is often an integral part of log-based troubleshooting, the event actions “+” button doesn’t change text selection behavior. Select text as you normally do, even directly over the button, and it’ll work as normal.

Flexible context

In most situations, showing surrounding context means removing the current search query. It’s a bit like zooming out: show me a specific event in the context of all events from a given log sender, program, sender and program, or group. This makes it possible to transition between any set of related logs without losing the specific message you’re interested in.

When you’re certain that all relevant events match your current search, the context links can show surrounding context while retaining the current search query.

Imagine I’m looking at events matching the search format=html. I want to see a specific event in the context of a different set of events that also contain format=html. In this case, I’m interested in seeing matching events from a specific program (app/web.1):

Event actions context

The next time you need more context around an event, try out event actions and let us know if you have feedback. Enjoy!

Click-to-Search: Teaching an Old Log New Tricks

Posted by @rpheath on

Wouldn’t it be great if you could drill down into your logs just by clicking? Today, we’re excited to release a feature that lets you do exactly that. We call it click-to-search.

Here are a few examples of how this will make life easier:

  • Web access logs: Each line in your access log contains an IP address. Enable “IP Addresses” to click an address and see other lines containing the same address.
  • Custom app logs: Your custom application logs contain User IDs with the format “user_id=1234”. Creating a custom clickable element with the regular expression user_id=\d+ would make the string “user_id=1234” clickable.

Papertrail offers the following clickable elements out of the box:

  • IP addresses (enabled by default)
  • Email addresses
  • GUID / UUID
  • Period-separated words (domains, file names, etc.)

For these items, just click a checkbox to activate the elements that apply to your logs.

Since all logs are different, we’ve made it simple to create your own custom clickable elements:

Click-to-search will improve troubleshooting flow and make log filtering easier. We hope you find it useful, and we always welcome feedback on how to make it better. Enjoy!

Log Destination IP will change December 20

Posted by @papertrailapp on

Summary

Update: The new DNS records are now active.

The DNS records for Papertrail’s first four log destinations (logs.papertrailapp.com, logs2.papertrailapp.com, logs3.papertrailapp.com, and logs4.papertrailapp.com) will change on Tuesday, December 20, 2016. The new IP addresses will be in the CIDR block 169.46.82.160/27.

Important: Papertrail will continue accepting log messages sent to the old IPs.

Does this affect me?

Probably not. Loggers will continue logging to the old IPs until they’re restarted, at which time a new DNS lookup will take place. However, if your network uses IP-based egress filtering, the egress filters will need to include the new addresses by Tuesday. (Very few networks filter outbound traffic in this way.)
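
If you want to confirm which address your systems currently resolve, a quick DNS lookup will show it (any lookup tool works; dig is shown here):

# Check the current IP for a log destination
dig +short logs.papertrailapp.com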

What if we aren’t using DNS?

For any systems sending logs directly to an IP address, no action is needed. The log destinations will continue listening on the old IPs until further notice.

Questions

Please email support@papertrailapp.com if there’s anything we’ve missed.