So, we've talked about models and we've talked about price targets from the vendor's perspective; now let's get into the buyer's point of view.
Software has a supposed value. Your business accomplishes X today through some combination of labor, fixed costs, and recurring costs. Take a reporting system, for example. Maybe you just use Excel, some databases, and some degree of home-brewed automation. Your fixed costs cover software and hardware, your recurring costs cover maintenance, and your labor covers the manual effort required to design and deliver the product.
A solution to replace your system should effectively pay for itself by reducing long-term costs and/or saving effort. Pretty obvious, right? But how can you be sure? The system you built in house might have low one-off costs, albeit offset by higher labor demands. You could quite possibly spend the same money on a new system or on adding more people. Adding people, while not easy, is in some ways much less risky than taking on a new system.
You could bring in consultants to try to assess the value of buying another system, but this too is fraught. Consultants like change. Change and uncertainty are what keep consultants in business. The consultant might even expect that his or her company will get a piece of the action when it comes time to deploy the new system. Similarly, an in-house assessment is tricky because of vested interests in the current solution. Your purchase of a new system potentially means eliminating jobs, just like putting a more advanced machine into an assembly line.
All you want is the cost-benefit analysis. The decision to buy a software solution just needs to add to the bottom line over a reasonable amount of time, without distracting your resources from higher-value activity. But you can barely trust anybody to give you a straight answer on cost-benefit.
To get to the root of the matter, you can attempt some thought exercises. Would you buy at one million dollars? Ten million? Would you take it for free, knowing only that there will be effort to implement the new system? Try plotting the cumulative costs of your options as functions of time and, if the curves intersect, see how many years it takes for your purchase to start adding value.
For example, your current solution can be modeled as a function of time. The cumulative cost through year X is something like:
f(X) = (1 + X) * ((human labor + annualized hardware costs) * uncertainty)
Human labor is, of course, the cost of the people to build and maintain the solution. For something built in house, it's likely to be quite high. You might have several fairly expensive developers plus maybe an offshore QA or integration consultant. You will need more hardware over time, but you can cap-ex that. Uncertainty is a coefficient that recognizes the non-scalability of your solution: developers leaving, new requirements that are out of scope, and so on. You can experiment with different values, but try something like 1.2, since +20% is a fairly standard fudge factor.
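To make that concrete, plug in some purely hypothetical numbers: say $450,000 a year in labor, $50,000 a year in annualized hardware, and an uncertainty coefficient of 1.2. Then:

f(X) = (1 + X) * (($450,000 + $50,000) * 1.2) = (1 + X) * $600,000

So the in-house solution costs $600,000 in year 0 and a cumulative $3,000,000 by the end of year 4.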
What about buying? There are a few more inputs. Something like this:
f(X) = software + hardware + implementation + (1 + X)*(maintenance + human labor)
Ideally, there is no uncertainty coefficient, because your vendor is competent and the solution is a good fit. Your human labor expenditure should be substantially lower. So this function crosses the y-axis at a much higher point but has a lower slope (in theory). The question is: how many years does it take for this function to save you money?
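Here's a minimal sketch of that comparison in Python. Every dollar figure below is an invented placeholder, not a benchmark; swap in your own estimates for labor, maintenance, and the rest:

def build_cost(x, labor=450_000, hardware=50_000, uncertainty=1.2):
    # Cumulative cost of the in-house solution through year x
    return (1 + x) * (labor + hardware) * uncertainty

def buy_cost(x, software=800_000, hardware=100_000, implementation=300_000,
             maintenance=120_000, labor=150_000):
    # Cumulative cost of the purchased solution through year x
    return software + hardware + implementation + (1 + x) * (maintenance + labor)

# Find the first year, if any, in which buying becomes the cheaper path
for year in range(15):
    if buy_cost(year) < build_cost(year):
        print(f"Buying pulls ahead in year {year}")
        break
else:
    print("Buying never catches up within 15 years")

With these particular numbers, buying pulls ahead in year 3. Change the inputs and the crossover moves, or disappears entirely.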
That's something I'd like to know more about. Is there an accepted number of years within which a business decision needs to add value? This could apply to many kinds of major expenditures: equipment, office space, marketing, and so on. Cap-ex rules inform this somewhat, but I'm inclined to think there are less codified rules out there that experienced executives use to make this type of decision.