
Buyers’ Guide to Software Trials, Bake-Offs & POCs


When it comes to trying out new enterprise software or toolsets, there’s much confusion between the terms Free Trial, Sandbox, Bake-Off, Proof of Concept and Proof of Value – and over whether you should pay for any of them, or none. We put software evaluation under the spotlight and define the parameters.


Choosing the best enterprise software for your organisation is never easy. It helps to go behind the glossy brochures and slick presentations to get your hands on real products. You wouldn’t buy a new car without a test-drive, would you? The same goes for enterprise software, but it’s harder to test-drive than most cars, because it’s more like Formula 1. Complex software technology can be tweaked to extract maximum performance for a given test, much like Formula 1 cars can be set up for specific circuits. You wouldn’t expect the sales guy to do this before you test-drive a new Merc, but you should with software.

This short guide shows how to get the most from popular software test types.


Software trials are popular for consumer software, where it’s common to offer free product downloads for 14 to 30 days’ use. Free trials of enterprise software are becoming more common, particularly as more vendors adopt SaaS platforms. Dell and Atlassian are a couple of vendors who offer trials, and Zendesk is another: it offers a 30-day trial of its on-demand help desk and customer support portal. These companies offer off-the-shelf, SaaS-based modules that perform common tasks in IT environments, with easy download and no questions asked, other than the triallist’s name, company and email address.

It’s a different scenario when you’re looking for more comprehensive or specialised enterprise software. It’s almost certain that you’ll have seen a few demos, spoken to or met with a few vendors, and discussed your specific needs before asking for a trial.


Let’s take the case of enterprise software – for instance, off-the-shelf accounting application software. If the vendor were to give your engineers free trial access for a few weeks, what would be the likely outcomes?

You might ask them ‘how did it go?’ or ‘what do you think?’ And you’ll get answers like ‘yeah, it went well,’ or ‘it does the job, from what I can see.’ And just as often, you’ll get answers like: ‘Sorry, I didn’t really have enough time to give it a thorough work-out. Work got in the way.’ So, what is the outcome for both parties? Zero.

If you really want to put software through its paces, a guided trial will give you far more control over the process and the outcome, and will take far less of your engineers’ time. Here are a few parameters you should nail down:

  • Limit the trials to 3 or 4 shortlisted vendors
  • Select one of your team to run each trial and agree on a timeframe
  • Agree on a formal test process
  • Agree on the format of the test results report
  • Shortlist the key functions of the product or service to be tested
  • Agree on the datasets to be used for the trials
  • Define the outcomes you’re looking for and how to measure them
  • Agree on how much contact your team may have with the vendor during the trial.

Once you’ve defined these parameters, you’ll get a much clearer and more valuable result. If a stand-out product emerges from the trials, it will help you choose the vendor for a single Proof of Concept exercise (below).
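The parameters above amount to a simple trial-plan template you can agree with each vendor before the clock starts. A minimal sketch in Python (all field names and values here are illustrative, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class TrialPlan:
    """Guided-trial parameters for one shortlisted vendor (illustrative fields)."""
    vendor: str
    owner: str                   # the team member running this trial
    timeframe_days: int          # agreed duration of the trial
    key_functions: list          # shortlisted functions to test, not everything
    datasets: list               # agreed test datasets
    outcomes: dict               # desired outcome -> how it will be measured
    vendor_contact_policy: str = "scheduled check-ins only"

plan = TrialPlan(
    vendor="Vendor A",
    owner="J. Smith",
    timeframe_days=14,
    key_functions=["invoice import", "multi-currency ledger"],
    datasets=["anonymised FY23 transactions"],
    outcomes={"import 10k invoices": "elapsed time and error rate"},
)
print(plan.vendor, plan.timeframe_days)
```

Writing the plan down in a structured form like this makes it easy to hand the same template to each of the 3 or 4 shortlisted vendors, so the resulting test reports are directly comparable.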


On the other hand, if you’re checking out software tools for your development team, a more common scenario is for the vendor to provide a ‘sandbox’ where your engineers can ‘play’ with the technology to their hearts’ content. If it’s a cloud-based product or service, it’s even easier for both parties. Whether on-premises or in the cloud, you want a technical environment with well-defined rules and secure boundaries: a working environment where your developers can build prototypes and test them with real data, without affecting the production side of your business. While the environment is quite different from the guided trial above, exactly the same rules apply. You need to agree with the vendor beforehand:

  • The timeframe for the evaluation (and allocate your internal team members)
  • The test process and format of test report
  • The key functions to test (a shortlist, not all of them)
  • Datasets to be used, desired outcomes and how to measure them
  • Amount of contact between your team and the vendor.

Unless you’ve defined the essential parameters and process for a sandbox, you’ll have the same lack of control and uncertain payback for your time as with the free trial above.


A ‘bake-off’ is an evaluation involving 2 or 3 vendors in one product test with defined rules. The bake-off has become popular with enterprise IT buyers for comparing competing technologies and choosing between the vendors on their shortlist. A bake-off for enterprise-level software can be a useful evaluation and decision tool, if you avoid these common pitfalls:

  • They demand serious technical resources from the vendor, which tends to favour the bigger global players over local ones – and are favoured by them for that reason
  • Global vendors may fly out a product specialist for the bake-off – but that won’t help you evaluate the competence or responsiveness of their local teams
  • The nature of the bake-off usually focuses on functions and performance, which may make it tough to judge practical considerations – like ease of deployment and integration with your IT environment
  • To run successfully, bake-offs will demand serious resources from you too – so you need to consider this before you decide
  • Some vendors will refuse to participate in bake-offs.

The last point can be a sticking point. If vendors won’t participate in bake-offs, you may think they have something to hide; maybe their product or service is hard to install or use, or perhaps they have too few technical resources to devote to it. But that may not be the reason.


If you rule out a vendor who won’t participate in a bake-off, there’s a chance you could miss out on the best solution for your situation.

That’s because some vendors have been burnt too often in bake-offs, thanks to poorly defined processes and outcomes that mean the best technology doesn’t win the bake-off. A common example is where a C-level executive overrides the technical decision for corporate reasons.

Cathy McKnight at the Digital Clarity Group says bake-offs are ‘inherently flawed’: they’re often used to disguise an existing preference for a vendor, they focus on features and functions instead of outcomes, and ‘Bake-Offs devalue the finalists by putting them in a cage-match of sorts.’ If you decide to run a bake-off, there are some key questions you should ask yourself first:

  • Can you clearly define the problem(s) you need to solve, and the desired outcome?
  • Has this outcome been defined in measurable terms such as speed, cost, effort or time?
  • Are you clear on how you’ll share results with vendors (individually or in a group)?
  • In what form will you present the results (in a report or face-to-face presentation)?
  • Are the evaluation process and key criteria clear and acceptable to the vendors?
  • Can you ensure a level playing field in terms of performance?

Ideally the bake-off should be limited to 2 shortlisted vendors. If more vendors are included, it will take more time and resources on your side, and be harder to evaluate at the end.
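Once the outcomes are defined in measurable terms (speed, cost, effort or time), comparing the two finalists can reduce to a simple weighted score agreed with both vendors up front. A minimal sketch, where the criteria, weights and scores are all illustrative placeholders:

```python
# Weighted scoring for a two-vendor bake-off (criteria and weights illustrative).
# Weights must sum to 1.0 and be agreed with both vendors before testing starts.
weights = {"speed": 0.4, "cost": 0.3, "deployment_effort": 0.3}

# Normalised 0-10 scores recorded by the evaluation team for each criterion.
scores = {
    "Vendor A": {"speed": 8, "cost": 6, "deployment_effort": 7},
    "Vendor B": {"speed": 7, "cost": 8, "deployment_effort": 6},
}

def weighted_total(vendor_scores):
    """Combine per-criterion scores into one comparable number."""
    return sum(weights[c] * s for c, s in vendor_scores.items())

for vendor, s in scores.items():
    print(vendor, round(weighted_total(s), 2))
```

Publishing the weights and criteria to both vendors in advance is one practical way to answer the ‘level playing field’ question above.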


‘A Proof of Concept (PoC) and “Bake-Off” are not the same thing,’ says Cathy McKnight at the Digital Clarity Group, ‘although many people treat them as such.’

Sure, the term Proof of Concept (PoC) is used rather loosely these days. Originally, inventors and developers used a PoC to prove to stakeholders that a new technology delivered on its promise. Now, it’s become a process for assessing IT vendor performance, and is often confused with the bake-off.

In Putting the ‘Proof’ in Proof of Concept, Cathy McKnight says: ‘a PoC is an execution of project-related tasks that provide evidence that what has been promised by the (singular) preferred technology vendor and/or service provider can be delivered.’

Note that she argues that a PoC should focus on a single vendor’s technology. This raises a key question: do you really need to run exhaustive comparison tests, given the wealth of information now available on the internet – such as Gartner’s Magic Quadrant, analysts’ assessments of new products, product reviews and tests on trade websites, case studies, success stories and more?

In practice, that means a PoC should involve the one vendor that has passed the demo, trial or sandbox stage and is verified for use in your vertical or general situation. Then you focus on the specific tasks the software must perform in your precise environment: agreed, specific, project-based criteria, outcomes and success factors that you can measure and record in a PoC report, and that you have agreed with the vendor.


This is a point Andrew Brockfield from AppDynamics makes when he writes ‘… a PoC proves the proposed solution works, a PoV proves it will work for the customer, and that the expected value to be realised is real and can be justified and measured.’

In other words, a PoC should do more than test that a vendor’s product delivers the functionality it claims. It’s an opportunity for you to test how their technology integrates with your IT environment, and how compatible it is with other IT products and services in use. This applies even more so if the proposed solution is a cloud service.

A Proof of Value (PoV) exercise has additional benefits that you won’t get from multi-vendor tests. PoV is a:

  • Great way to get to know your preferred product in-depth
  • Best way to ensure that you can work well with the vendor and their people
  • Better environment for effective user and management participation.

The last point is important, as you’ll get some practical feedback on the product and the vendor you’re about to select. This is almost impossible to do if you’re going through a multi-vendor comparison test. If it turns out that the product falls short of your expectations, or the vendor’s people aren’t as smart as they look in the demonstration, it’s better to find that out before you commit.

It follows that a PoV must have similar but extended criteria for evaluation and reporting as for the PoC.
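One way to picture those ‘similar but extended’ criteria: a PoV report reuses the PoC checks and layers value measures on top. A hypothetical sketch, with all criteria names invented for illustration:

```python
# PoC criteria prove the solution works (illustrative examples).
poc_criteria = {
    "functional fit": "all shortlisted functions pass the agreed tests",
    "integration": "connects to the existing IT environment without custom code",
}

# PoV criteria extend, rather than replace, the PoC criteria with value measures.
pov_criteria = {
    **poc_criteria,
    "time saved": "hours per month versus the current process",
    "cost to value": "projected 12-month saving versus licence cost",
}

print(len(poc_criteria), len(pov_criteria))
```

The point of the structure is that every PoC check survives into the PoV report, so the value claims sit on top of proven functionality rather than replacing it.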


That leaves the final question: should you pay? Cathy McKnight argues that you should, especially for service providers, and Jeff Goldberg at Celent is in favour of ‘paying a fair price for the proof-of-concept phase.’

Goldberg argues that ‘an unpaid PoC is like playing poker without having to pay the ante. You might start off interested in the game but you won’t care as much about your cards. The reality is that a proof-of-concept won’t be successful without both the carrier and vendor working together, regardless of how strong a solution the vendor offers.’ That means that both sides have to have some skin in the game.

Think of the single-vendor PoC as a mini pilot, where you can run the software in a particular environment and test specific issues: compatibility with other IT platforms, or performance and stability under peak loads. You’ve already put the software and the vendor through their paces, and a relationship is already established, based on mutual expectations and trust. Getting over the final hurdle shouldn’t entail a lot of risk, if you’ve followed the earlier stages with rigour and objectivity.

A paid PoC also puts you in a very strong position to justify your decision to the C-suite or the board. After all, a successful PoC will have established the value of your choice to the organisation and proved that the selected product does ‘what it says on the box’.
