The changing value of value

Free-Value

The old adage “Free has no value” just doesn’t seem to hold up anymore.
When I was running my startup SMAK, I was plunged head first into the world of Freemium SaaS services. Running a social media analytics and management technology, I knew my product and my competitors would have to have some sort of Freemium-to-Premium business model, since the tools we supported (Facebook, Twitter, LinkedIn, email) were all free to begin with.

Now that I’m back in Enterprise ITSM, I’m shocked by the number of vendors selling Freemium-to-Premium offerings. SysAid, Spiceworks… the list goes on.

I guess on one hand I shouldn’t be. Small IT shops need support, probably more so than larger shops, so giving away the base product and recouping fees on support and up-sell makes sense.

However, I’m talking about Enterprise ITSM: large, distributed IT support groups with complex infrastructure and critical SLAs.
At lunch this past Friday, just such an organization told me they are looking at ManageEngine. Why? “Because it’s free.”

What? When did ManageEngine become free? Apparently March 5th (according to TechCrunch: http://techcrunch.com/2014/03/05/zoho-seeks-to-disrupt-it-helpdesk-market-by-offering-servicedesk-for-free/).

This got me thinking… is Free of value? Did we all of a sudden accept that any software is going to have an implementation and configuration cost, so why not save on the licenses and spend on the services?

It’s an interesting strategy, but is it sustainable?
Well, I guess that depends on ManageEngine’s strategy.

As the TechCrunch article related, the ServiceDesk product is one of Zoho Inc.’s most profitable products. So why would they start giving it away for free?

I can only imagine two reasons.

Possible Reason #1) Their ServiceDesk product is only marginally competing in the Request and Incident queue management space, yet their plethora of other tools are finding tactical footholds in the areas of Event, Configuration, Problem, and Change. Providing the product for free keeps them entrenched in their customer base, well positioned to capitalize on those other areas.

If this is the case, I sort of like this strategy. Let’s face it, Incident and Request have little value in and of themselves. So if you stop spending on areas that are not strategic and are low value, and re-allocate your funds to higher-value areas like configuration and automation, then it is actually a very sustainable strategy.

Possible Reason #2) Their ServiceDesk product is keeping customers engaged while their other products are waning. Support is not free from ManageEngine, so having more customers on support contracts can increase product value, and the residuals can be invested in functionality expansion, eventually allowing them to transition point-product customers to a more robust platform.

I’m not sure I’m buying this second strategy. It would assume that the current ServiceDesk platform is able to compete with the likes of Cherwell and ServiceNow. While I’m not familiar with ManageEngine as a user, I’ve certainly come to know a lot of customers who have moved on to those platforms to achieve more robust integration.

So how about the IT companies that buy these Free products?

The question really comes down to: what are you buying?
One could argue ServiceNow is free and you are simply paying for support (hosting, backup, security, etc.).

This cloudy, SaaS-y world has made it very difficult to measure value exactly.

There are three things to remember, though, no matter which option you go forward with:

1) Are you leveraging this platform for queue management or for business agility?
Any dollar spent should have a return on investment, and remember: nothing is ever free, you are just choosing to spend somewhere else.

2) Are you deploying this solution to support IT or to support the business?
Yeah, yeah, we are all the business, but is this a department tool, or a product to help support service delivery?

3) Will the product deliver the capabilities you want one to two years from now?
Your ITSM product’s capabilities should far exceed where you are today, maturity-wise, in your processes and in your capabilities.
A fool with a tool is still a fool. A fool with a foolish tool is a foolish fool who hurts themselves and others.

What say you?   Do you think free tools are a great idea or a bad idea?
Hit me up on Twitter: @vigilantguy

Re-Post: September 2008 Service Outage Avoidance – The mother of all metrics


In my role at Vigilant (my former ITSM consulting firm) as both a consultant and an executive, I had the opportunity to interview hundreds of operational IT managers and directors. In most cases the number one metric they were managed by was “Availability” or “System up-time”.
What turns into a very interesting dialogue is when you talk to them about how they collect, report, and respond to those metrics. Here are the shortfalls of taking this approach:

  • Up-time is very rarely measured from the end-user’s standpoint. So you immediately put IT on the defensive when you state the system was available on the network while the end-user was not able to execute business on the system.
  • This reported metric only gives credibility to how quickly IT personnel were able to find and fix the outages. Outages are typically caused by poor release practices or change management, which are IT functions anyway.

A new approach that should be considered is how I measured my operation as an IT director, and what Vigilant consultants call “Service Outage Avoidance” (not to be abbreviated SOA, or real confusion sets in).
This metric is the marrying of component availability to end-user availability. You accomplish this by monitoring a system’s network and server components for availability along with the end-user’s behavior. When an outage occurs at the component level, yet the service stays up for the end user due to your superior availability design of the system, you have avoided a service outage.
Availability metrics should then be broken into the following six categories:

  • Network      (Link status, utilization, drop/error rates)
  • Server         (OS stats, CPU, HD, Mem)
  • Application  (DB, J2EE, .Net, etc.)
  • Business Logic    (Code interfaces, connectors, ETL, etc.)
  • Business Process  (Transactions, order counts, etc.)
  • End-User      (Real-time screen-to-screen, refresh, errors, etc.)

The “Service Outage Avoidance” metric shows the percentage of time a component was down while the end-user service remained available (e.g., 4 months of aggregate SAN downtime on the Email system against 12 months of end-user availability).
Your next management report will then show something like this:
Email Services – Service Outage Avoidance: 25%
What this metric means is that we had an impact at the component level 25% of the time, but due to proper design and management we avoided having a business impact.
In other words, it quantifies and validates the spend on fail-over, redundancy, and proper architecture.

If you can equate the up-time value against this, you can calculate the ROI.
e.g., up-time value of Email for 1 month = $1 million.
Cost of redundancy = $1M.
1-year ROI is 300%.
(4 months × $1M = $4M return; $4M – $1M investment = $3M net; $3M (net return) / $1M (investment) = 300%)
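For readers who want the arithmetic spelled out, here is a minimal sketch of how these two figures might be computed. The function names are mine, and the 16-month measurement window is an assumption I chose so that 4 months of component downtime reproduces the 25% in the example above; treat it as an illustration, not Vigilant’s actual formula.

```python
# A rough illustration (not Vigilant's actual formula) of the two figures above.

def service_outage_avoidance(component_downtime_months: float,
                             measurement_period_months: float) -> float:
    """Fraction of the measurement period during which a component was down
    but the end-user service stayed available."""
    return component_downtime_months / measurement_period_months


def redundancy_roi(avoided_outage_months: float,
                   uptime_value_per_month: float,
                   redundancy_cost: float) -> float:
    """Return on the redundancy investment: net return divided by cost."""
    gross_return = avoided_outage_months * uptime_value_per_month
    net_return = gross_return - redundancy_cost
    return net_return / redundancy_cost


# Assumed 16-month window so that 4 months of SAN downtime gives the 25% quoted above.
print(f"SOA: {service_outage_avoidance(4, 16):.0%}")          # SOA: 25%
print(f"ROI: {redundancy_roi(4, 1_000_000, 1_000_000):.0%}")  # ROI: 300%
```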

Re-Post August 2008: Customers – Are they always right?

So you finally got a chance to look at your customer surveys. Disappointing, aren’t they? After all the coaching, training, and process for your support desk, your customers are still complaining.

“What’s the deal?” you say to yourself. “I thought if we put this Service Management process stuff in place, customers would be happy; at least that is what Matt told me. Last time I listen to that idiot.”

While avoiding my advice is many times a good thing, it’s not that the processes failed to deliver. The failure may have been that your implementation did not recognize that this is Customer Service Management, not Process Service Management. Realizing this means you have to take the customer’s unique circumstances into consideration.

So while customers are not always “right”, they can be made to feel like they are getting the service they are paying for by listening to them and acknowledging their intelligence and frustration.

Too often organizations slap in Incident and Request models that are purely focused on the activity workflow, and not the communication process. Customers need to feel like they are getting individual attention while a process helps resolve the issue or fulfill the request. Let me share an example with you.

Recently I purchased a VHS-to-DVD copier. Upon making my third or fourth copy successfully, the device stopped reading the VHS tapes. The picture was snowy on the screen, but the voice was clear. I called tech support, whereupon the technician, following his troubleshooting steps, told me that I needed to plug in the “Yellow” cable from the back of the device to my television.

I explained that I was using the component cables, which were Red, Blue, and Green, and that my TV did not have a place for a “Yellow” cable to plug in. The technician insisted that the device could not work unless the “Yellow” cable was plugged in. Mind you, I had told him several times that I had successfully made copies, and that nothing had changed with my physical connections. Needless to say, this was a horrible experience, and I ended up returning the product altogether. This support rep failed to listen to the customer.

In a different call, to my Internet provider Comcast, I had the completely opposite outcome. After troubleshooting why my connection was not working, and plugging my laptop directly into the cable modem, I realized that the problem was with their cable modem. So I called Comcast tech support and explained my situation and the steps I had taken. The Comcast rep told me, “Can I put you on hold one moment? You clearly have taken some steps to isolate this; let me see if I can pick up where you left off.” Literally within two minutes the modem was up and running and my Internet was back. He apologized for the inconvenience and then explained to me that they would add my device to their monitoring solution so that they would be notified should this happen again.

Now that is Customer Service.

Re-Post: June 2008 SLAs – Why are they needed?


Service Level Agreements – SLAs. For those who have been able to develop them with metrics that are meaningful and achievable, they love them. For everyone else, they are a nightmare. What makes a good SLA? In my experience, it takes very little to make a good SLA. First, it needs to be understandable. If you don’t understand the commitment of a service that you need to perform, then the actions you need to take to improve will be a mystery.
Let me demonstrate: what if a person at a fast food restaurant were measured on the quality of their hamburger? That is a good thought, but what does it mean? It’s not like they can change the type of beef or bread or other things like that; those decisions are made above them. So rather than just telling them to improve the quality, the manager needs to put in place measures that the employee can affect. For example: time on the shelf less than 10 minutes, bread no older than 5 days, etc.

Those are factors that the employee can watch and adjust, ultimately improving the quality and meeting the SLA. This brings me to the second factor: it has to be measurable. If you cannot measure it, you cannot manage it. If you cannot measure the hamburger’s time on the shelf or the age of the bread, you cannot use them as indicators of quality.

If you look at the standard SLAs in place for most organizations, they are things like 99.999% up-time (aka “five nines”). If you asked most IT folks what that meant, you would get different answers. Some would say the server is up 99.999% of the time over the course of a year. Others might say 99.999% means that application services are unavailable to users for no more than 15 minutes over the course of a year.

The first is very measurable, but not of high value. The second is extremely valuable, but difficult to measure.
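As a sanity check on what a given number of nines literally buys you, here is a minimal sketch (my own back-of-the-envelope helper, independent of either interpretation above) that converts an availability target into an annual downtime budget.

```python
# Translate an availability target into the downtime it allows per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for simplicity


def allowed_downtime_minutes(availability_pct: float) -> float:
    """Annual downtime budget, in minutes, for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)


for target in (99.9, 99.99, 99.999):
    budget = allowed_downtime_minutes(target)
    print(f"{target}% availability allows about {budget:.1f} minutes of downtime per year")
# 99.9%   -> ~525.6 minutes
# 99.99%  -> ~52.6 minutes
# 99.999% -> ~5.3 minutes
```

Whichever interpretation you choose, server-level or end-user-level, the budget at five nines is only a handful of minutes per year, which is why how you measure it matters so much.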

So when establishing your agreements on the level of service required, it is important to determine two things:

A) What you can provide in terms of measurement and control.

B) What the business needs to operate profitably.

Once you have those two factors, you can negotiate the middle ground. The more the business needs, the more IT will need to deliver, and the higher the cost. Over-promising on an SLA that the IT department cannot hit does not help anyone, so it is crucial for IT to establish what their capabilities and resources look like. My next blog will cover what a Service Catalog is and why it is needed for true SLA management.


Re-Post June 2008: The Art of Triage

To many, troubleshooting seems to be a gift that either you have or you don’t. For example: my father was a mechanic. When he owned his own repair shop, he would hire young guys who would spend hours troubleshooting problems. Within minutes of my old man getting involved, he would quickly diagnose the problem. No matter what it was, the timing belt, carburetor issues, etc., he was quick to pinpoint the root cause. Inevitably, once the problem was found, there was this “oh, of course” response from the young, inexperienced grease monkey.

I was too clumsy to be a mechanic, so my dad fired me and forced me into Computers. However, I didn’t forget what I had learned about troubleshooting.

First, troubleshooting is not something you are born with. It is a skill that is honed based on three common factors:
1) What you know
2) What you don’t know
3) What you are learning

When you piece these three factors together, you create the framework for discovery. By recording a negative or positive response for the artifacts you discover, your approach will then lead you down a path that good troubleshooters simply call “the process of elimination”.

Do you know what is working? Do you know what is not working?
What don’t you know is working? What don’t you know is failing?
What have you proved with this step? What have you disproved with this step?

So when it comes to troubleshooting complex systems, the same principle applies. You just need to analyze them in layers. Here are the layers that VIGILANT (2013.07.18 Update: Vigilant was my old ITSM solutions company)  has documented as the logical points to eliminate.

Infrastructure: Hardware, Networking, Operating Systems
Application: 3rd party application services
System Interfaces: Connectivity between dependent systems
Business logic: Business rules that cause transactions to operate differently
Business Process: The way the end-user is executing the transaction
Business Service: Dependency on data or other elements for success

For really complex issues, take each of these tiers and apply the three principles of discovery to them, and you will find the problem is not as much of a black hole as you thought it was.
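To make this concrete, here is a small sketch of how you might track the process of elimination across these tiers. The layer names come from the list above; the data structure, status values, and example findings are my own hypothetical illustration, not a documented Vigilant tool.

```python
# Track the process of elimination across the layers listed above.
# For each layer, record whether it is known to be working, known to be
# failing, or still unknown, then narrow the search to the remaining suspects.

LAYERS = [
    "Infrastructure", "Application", "System Interfaces",
    "Business Logic", "Business Process", "Business Service",
]


def triage(findings: dict) -> None:
    """Print which layers are eliminated and which remain suspects.

    findings maps a layer name to "working", "failing", or "unknown".
    """
    for layer in LAYERS:
        status = findings.get(layer, "unknown")
        if status == "working":
            print(f"eliminated: {layer}")
        else:
            print(f"suspect ({status}): {layer}")


# Hypothetical example: infrastructure and application checks passed,
# the interface check failed, and the remaining tiers are untested.
triage({
    "Infrastructure": "working",
    "Application": "working",
    "System Interfaces": "failing",
})
```

Each pass through the checks is simply one more application of the three questions: what you know, what you don’t know, and what you are learning.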

Re-Post from May-2008: Why are we still fighting fires?

“We have spent so much money on monitoring tools and consulting, why are we still fighting fires?” I heard this from a recent prospect. The simple answer is because we live in a complex and high-demand world. As IT professionals we are trying to do a lot with a little: little training, little vision, little strategy, little focus. Organizational process alone is not going to cut it. If we want to stop fighting fires, then we have to have a campaign like Smokey the Bear’s: “Only you can prevent forest fires!” was his motto. What is yours?
How about these? “Only you can prevent outages from unplanned changes!” – by implementing more rigorous controls around change and release management.
“Only you can prevent unnecessary downtime from distracting priorities!” – by implementing better incident management procedures, you can avoid the “SWAT call” mentality that takes the engineers’ attention off of restoring the service and diverts it into CYA for management.
“Only you can prevent business impact from IT service failure!” – by properly planning and validating your infrastructure through capacity and load testing, infrastructure validation for fail-over, and application profiling, you can ensure your build and release management processes are catching issues before your users do.

Why are you still fighting fires? Probably because your processes are like disconnected piles of twigs and your culture is overly reactive.

This is a re-post from my old blog, May 2008: http://getvigilant.blogspot.com