One of the subtexts at Innovate 2014 was the need to balance speed with trust if you want to employ Agile practices at scale. Speakers often added "trust" to the "innovate@speed" mantra and, at the associated technology exhibition, I met practitioners of SAFe, the Scaled Agile Framework for Agile at Scale.
Of course, as usual, the key issues around trust are social and cultural as much as anything to do with technology. People working in safety-critical areas, and on multi-location cross-governmental programs expected to last for years, distrust the anarchy espoused by some developers who claim to be 'agile'. At the same time, real Agile practitioners worry about "the heavy commercialised thrust and the recurring failure patterns such as the blind usage of heavyweight case tools" (a quote from the book SDLC 3.0, which I'll mention later) often associated with attempts to impose rigour, design and architecture on Agile practices.
In fact, the best Agile practitioners are more disciplined and focussed on business outcomes than most conventional developers; and following a heavyweight CASE tool blindly doesn't guarantee business outcomes anyway. On the other hand, no development approach is proof against poor developers who follow it without thinking; and the best developers can probably make just about anything (no matter how badly thought through) work.
So, in a development context, what is this thing called "trust"? Well, I think it is all about confidence in the practical operation of the automated business systems being developed, and in the business outcomes they deliver. This implies the involvement of all relevant stakeholders in the development process, and the existence of trusted feedback loops from business operation back to system design and maintenance, as a necessary part of engendering trust.
That sounds like DevOps at Scale to me; and it provides a useful yardstick for balancing agility (speed) with controls. If any controls don't materially increase the practical trust placed by (informed) business stakeholders in the business systems being developed, then they are 'waste', and you shouldn't bother with them.
Agile evangelists often dismiss analysis and design processes, such as building models of what will become coded systems, as 'waste', because models and requirements and so on aren't the end code that will be delivered. But since most business users can't read the code (and wouldn't want to), a model that they can understand, and which has strong links to the code, increases the trust they can place in the system being delivered. If you have half a dozen programmers co-located with a dozen users and delivering useful code the users can comment on in real time, then you might not need models in order to engender trust; if you are an enterprise running Agile@Scale, then you probably do. Even then, you definitely want Agile modelling processes that discourage building models for their own sake: remember that anything that doesn't materially contribute to the stakeholders' trust in what they'll get should be got rid of.
Another aspect of trust is trust in process delivery itself: knowing what resources are needed for development, whether the business will get its new business system (or a minimally viable, useful, subset of the system) when it needs it, and, if delivery is going wrong, whether anyone will notice in time to do anything about it. Much of what we talk about as Agile today is about attaining this trust in delivery. Nevertheless, as Mark Kennaley points out in his book "SDLC 3.0: Beyond a Tacit Understanding of Agile" (which, by the by, really needs a good subeditor's attention, although that doesn't affect the content), which I was given at the Innovate Management Summit, a lot of this is based on a quasi-religious belief in the Agile Manifesto, or some interpretation of it. Quite reasonably, "this has sometimes caused frustration among IT executives for whom 'trust me' rings hollow" (op. cit.); and such executives are required, by company law, to exercise due diligence over the spending of shareholders' or taxpayers' funds on big software-based developments.
I'm not going to summarise SDLC 3.0 here but, in part, it uses Systems Theory and Lean to provide an intellectual basis for trust in agile processes, and a trust rationale that can, if necessary, be communicated to sceptical third parties in a rational discussion; although, for most of us, that implies putting some trust in Kennaley's maths. Of course, all stakeholders, from the lowliest programmers through the highest executive to the lowliest business clerk, have to trust the parts of the systems development lifecycle (SDLC) that affect them specifically, and this involves, in general, sensitive and skilled people-management.
You don't attain trust by proving that something works, using fairly abstruse maths, and telling people: "follow this—or you'll no longer have a job". You need to persuade and mentor people. I find it interesting that when Kristof Kloeckner (GM of Rational Software) is talking about balancing speed with trust, he is also talking about 'steering' the development towards desired outcomes. Many practitioners are wary of analysis and design (some with good cause, having experienced 'analysis paralysis' rather than useful analysis and design, in the past)—so, if they are happier thinking of steering the development towards the desired business outcomes (which will involve simulation, modelling, prototype designs and most of the usual design and analysis tool-kit) that is fine by me. The use of the term "steering" emphasises, for me, that a design is not an end product but a stage in the journey to a useful business experience.
Is this all there is to trust? No: trust in the development process is necessary, but not sufficient. You also need to be able to trust the identity and accountability of the people accessing and maintaining your systems and data, for example, so good security and identity and access management are cornerstones of trust too.
However, once you can trust the development of automated systems and the accountability of the people involved, then (in conjunction with trusted 'actionable insight' from big data analytics) an organisation can achieve freedom. In this context, freedom is the freedom for employees—people—to work the way they want to, using the tools they are happiest with, as long as this contributes to the business outcomes the organisation wants. Freedom is also the freedom of the organisation to exploit new and changing business opportunities, confident that this won't bring unwelcome surprises…
In essence, an automated business system is a "complex adaptive system" (to use Kennaley's terminology, from his SDLC 3.0 book). The freedom to steer this complex adaptive system so that it delivers desired business outcomes, as the business environment changes or expands, involves: visualising an evolving system scope (in effect, a logical model of the system); applying trusted knowledge and trusted feedback (actionable insight from the people involved with the system, and from the external behaviour of the system, represented by the "big data" it generates); and eventually delivering a new "realised scope" (in effect, new software and business processes).
And the bottom line? Well, at Innovate 2014 we heard practitioners from the aerospace industry, and even the nuclear industry, reporting the successful use of Agile processes in the development of highly engineered, safety-critical systems. Actionable insight and trust, working together, give them the freedom to "innovate@speed".