Getting the most out of your Website Monitoring Service
Many basic website monitoring services cache DNS – and that’s a big problem. Monitoring that caches the domain name system (DNS) resolution process will not detect many DNS propagation issues. DNS issues are relatively common and on the rise (note the AT&T and GoDaddy DNS outages in 2012).
Slow monitoring – checking only once every ten minutes, or even less often – misses errors and many intermittent performance issues that carry a long-term cost. Good website monitoring services check at least every five minutes, and ideally faster, such as one-minute monitoring.
Basic website monitoring services don’t capture long-term data and don’t capture detailed data. In the short term, a monitoring service providing minimal data reduces its own start-up costs, but its clients suffer. In the long term, good historical monitoring data is the difference between suffering the same downtime event over and over again and using that data to constantly improve.
It’s embarrassing when the calculation behind the “100% uptime” you report to your boss, or to your clients, turns out to be inaccurate. The calculation used by many basic website monitoring services doesn’t take into account common factors that affect uptime and downtime, including planned server maintenance, a system’s “working hours,” the start of a new 24-hour period, and other custom situations. Dotcom-Monitor automatically includes these factors in the calculation, so your uptime/downtime reports fit your business needs and don’t lead to embarrassing situations.
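As a rough illustration of why planned maintenance matters to the math (this sketch is hypothetical and not Dotcom-Monitor’s actual formula), an uptime calculation that excludes maintenance windows from both the denominator and the downtime count might look like:

```python
# Sketch: uptime % that excludes planned maintenance windows.
# All names and values here are illustrative, not a real product's formula.

def uptime_percent(total_minutes, down_minutes, maintenance_minutes):
    """Uptime over the monitored period. Planned maintenance is removed
    from the monitored time, and any downtime recorded during those
    maintenance minutes is not counted against uptime."""
    monitored = total_minutes - maintenance_minutes
    if monitored <= 0:
        raise ValueError("maintenance covers the whole period")
    unplanned_down = max(0, down_minutes - maintenance_minutes)
    return 100.0 * (monitored - unplanned_down) / monitored

# A 30-day month (43,200 min) with 120 min recorded down, 60 of it planned:
print(round(uptime_percent(43_200, 120, 60), 3))  # -> 99.861
```

Without the maintenance adjustment, the same month would naively report 100 × (43,200 − 120) / 43,200 ≈ 99.722% – a different number for the same service quality.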
Downtime Threshold Alerts and Adjustments
The effectiveness of monitoring is based, in large part, on whether your team trusts it and uses it. Your monitoring will quickly become useless if it doesn’t account for network “hiccups,” your team’s work schedule, and how your organization defines “downtime.” For example, if your website is being checked from 10 locations and two locations cannot get to your site because of some backbone provider routing issues, then eight locations are detecting uptime without errors. In that situation, do you want to receive an alert? Do you want to receive an alert at 2 am on a Sunday? How about if you just did a major project on your website?
Your answer probably depends on how you’re defining “downtime” at that moment, or what kind of service level agreements (SLAs) you have with your customers. A good monitoring provider will allow you to define your own downtime monitoring alerting processes and thresholds.
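The questions above can be encoded as a simple alerting policy. The sketch below is one hypothetical way to do it – the 50% location threshold and the 7 am–11 pm alerting window are made-up defaults, not any vendor’s actual settings:

```python
# Sketch: only alert when enough monitoring locations fail at once,
# and only inside the hours your team treats as alert-worthy.
# The threshold and hours are illustrative defaults, not a product's.

def should_alert(failed_locations, total_locations, hour,
                 min_failed_ratio=0.5, alert_hours=range(7, 23)):
    """Alert if at least half the locations see an error, unless the
    failure happens outside the configured alerting hours."""
    if hour not in alert_hours:
        return False
    return failed_locations / total_locations >= min_failed_ratio

print(should_alert(2, 10, hour=14))  # 2 of 10: likely a routing blip -> False
print(should_alert(8, 10, hour=14))  # 8 of 10: likely a real outage -> True
print(should_alert(8, 10, hour=2))   # real outage, but 2 am is muted -> False
```

Whether muting alerts overnight is acceptable depends entirely on your SLAs – which is exactly why the thresholds should be yours to configure.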
Some basic, as well as name-brand, website monitoring services offer “staggered monitoring.” With staggered monitoring, checks to a website occur based on the capacity of the monitoring service’s monitoring locations – not consistently at the frequency (for example, once every three minutes) set by the user. A monitoring service using the staggered method will promise 20 checks per hour, but this does NOT mean your website gets checked consistently, every three minutes. Your website may be checked once at the start of the hour, followed by a 40-minute gap with no monitoring because the monitoring locations lack capacity, and then all remaining 19 checks crammed into the final 19 minutes. You’ll receive data showing the “average” response time across the hour, but no specific information on exactly when the individual checks occurred. In this example, your system goes unmonitored – and is therefore exposed to undetected downtime – for 40 minutes. A good website monitoring service ensures its monitoring locations have the capacity to provide consistent monitoring every three minutes, rather than using staggered monitoring. The staggered method leaves holes in monitoring, produces less accurate data, and relies too heavily on algorithmic averages.
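The difference is easy to see if you look at the largest gap between checks rather than the per-hour average. The sketch below (the timestamps are made up to mirror the example above) compares 20 evenly spaced checks against 20 staggered ones:

```python
# Sketch: the same "20 checks per hour" can leave very different blind spots.
# Timestamps are minutes past the hour; both series are invented examples.

def max_gap(check_times, period=60):
    """Largest unmonitored stretch, including the wrap into the next period."""
    times = sorted(check_times)
    gaps = [later - earlier for earlier, later in zip(times, times[1:])]
    gaps.append(period - times[-1] + times[0])  # tail of hour + head of next
    return max(gaps)

consistent = list(range(0, 60, 3))        # 20 checks: 0, 3, 6, ..., 57
staggered = [0] + list(range(41, 60))     # 1 check, a long gap, then 19 checks

print(max_gap(consistent))  # 3  -> never blind for more than 3 minutes
print(max_gap(staggered))   # 41 -> a 41-minute hole, despite 20 checks/hour
```

Both series average one check every three minutes; only the consistent one actually delivers three-minute coverage.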
Without good diagnostic tools – an automatic trace-route, error code capture, page screenshot, video capture – the extra time it takes to track down errors is a major cost for IT departments. Many low-level website monitoring services will tell you there is an error – and that’s it. You won’t know whether the “downtime” is due to the network, a web page, a server, a load balancer, third-party elements, a network glitch, and so on. You’re on your own to figure it out. The time needed to diagnose and firefight these errors is one of the most costly factors in IT departments, according to a recent TRAC Research report. A good monitoring service provides robust diagnostics and technical support.
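A first-pass diagnosis can be as simple as classifying what kind of failure the check saw. The sketch below is a hypothetical triage function – the category names and rules are invented for illustration, not a description of any product’s diagnostics:

```python
# Sketch: turn a raw check result into a first-pass diagnosis category,
# so an alert says more than "there is an error".
# Categories and rules are illustrative only.

def diagnose(status_code=None, dns_failed=False, timed_out=False):
    """Map the observable symptoms of a failed check to a rough cause."""
    if dns_failed:
        return "dns"                        # name never resolved
    if timed_out:
        return "network-or-server-timeout"  # reached out, no timely answer
    if status_code is None:
        return "connection"                 # TCP/TLS never completed
    if 500 <= status_code < 600:
        return "server"                     # server answered with an error
    if 400 <= status_code < 500:
        return "web-page"                   # bad URL, auth, missing page
    return "ok"

print(diagnose(status_code=503))  # server
print(diagnose(dns_failed=True))  # dns
```

Even this crude split – DNS vs. network vs. server vs. page – is the difference between knowing where to start and guessing.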
You receive an alert at 2 am indicating your website is down. You get out of bed, load your site in a browser, and everything works… fine? Alerts keep coming in telling you there is an issue. Is there anyone at the monitoring company with the expertise to help you? Good monitoring means good people providing expert support, answering questions, and helping you get back up and running ASAP. The “cheap monitoring” business model does not support a human tech support team. You’re on your own in your slippers at 2 am, guessing at what is happening.
Many users of monitoring services “graduate” from low-level monitoring companies because of false alerts. Many false alerts occur because low-quality monitoring companies have only rudimentary false-alert verification processes in place. As a result, teams stop trusting the alerts – and the website monitoring service behind them. Dotcom-Monitor employs the industry’s most robust triple-redundancy false-alert verification process, involving instantaneous error verification, network tests, target re-testing, and filters that specifically ensure the alerts you receive are “true” alerts.
IPv6 and IPv4
Many monitoring companies have not put in place IPv6 monitoring capabilities. On the other hand, Dotcom-Monitor has had IPv6-enabled monitoring capabilities in place for several years and has substantial expertise in that area. These additional capabilities ensure companies currently deploying IPv6 – or those considering IPv6 – can successfully monitor IPv4 and IPv6 scenarios with one solution.
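One small building block of dual-stack monitoring is simply telling which protocol family each monitored address belongs to. The sketch below uses Python’s standard-library `ipaddress` module; a real monitor would also resolve both A (IPv4) and AAAA (IPv6) records for each target, which is omitted here to keep the example self-contained and offline:

```python
# Sketch: classify monitored addresses as IPv4 or IPv6 using the
# standard-library ipaddress module. A real dual-stack monitor would
# also resolve both A and AAAA records per target (not shown here).
import ipaddress

def ip_version(addr):
    """Return 4 or 6 for a literal IP address string."""
    return ipaddress.ip_address(addr).version

# Addresses below are from the documentation/test ranges:
print(ip_version("192.0.2.1"))     # 4
print(ip_version("2001:db8::1"))   # 6
```

A single service that checks a target over both families can catch the case where the IPv4 path is healthy while the IPv6 path is broken – invisible to an IPv4-only monitor.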
Monitoring providers that offer only one, or only a few, monitoring services lack the ability to correlate different types of monitoring results. Moreover, employing many one-off monitoring vendors adds complexity and cost to business processes: multiple types of alerts, data, interfaces, contracts, payments, and so on. Dotcom-Monitor offers a wide variety of monitoring capabilities, all simplified within a single unified user interface.
Bottom line: don’t rely on a website monitoring service focused on reducing its own IT costs. Ask questions. Find a monitoring service that reduces your IT costs. A good monitoring service goes the extra mile (for example, this list has 11 tips, not 10!) to ensure quality. To provide that quality, monitoring companies have to start with what is best for the user of the monitoring service. Ultimately, you’re relying on your monitoring company to help your organization constantly improve by avoiding downtime, improving user experiences, and reducing your IT costs. And, in fact, “constant improvement” is the Dotcom-Monitor motto.
To learn more about the Dotcom-Monitor website monitoring service, visit: www.dotcom-monitor.com