To hear Taleb tell it, the state of being antifragile implies an ability not just to absorb shocks, but to learn from them. One example he used was air traffic control. The explanation goes: when a plane crash occurs, the lessons from that crash are woven into ATC procedures in such a way as to gird the system against further crashes. This ability to learn from mistakes without falling apart can be thought of as antifragility. A counterexample would be the banking system, where the dissolution of one bank reverberates through the system in a nonlinear manner. Before any lessons can be gleaned, the system has failed, and in so doing demonstrated its fragility.
But if these examples are representative, then surely a more intuitive (and clearer) way to distinguish between ‘fragile’ and ‘anti-fragile’ systems would be to describe their internal interdependencies. For instance, nodes within the banking system are generally interdependent—the failure of one will resonate in some way through many other nodes. Within the ATC system, however, the nodes are more or less independent. The failure of one controller generally doesn’t affect or imply the failure of any other controller. To express the point even more simply, the difference between fragility and anti-fragility is the difference between nonlinear (or even complex) systems and linear ones.
Oddly, complex systems, while exhibiting incredible robustness and dynamism in the face of shocks, also exhibit intrinsic fragility in the form of cascading failure. A small action can (and often does) lead to an unexpectedly large reaction thanks to linkages beyond our comprehension or control. But yet another set of unseen linkages—perhaps even linkages created in response to the original shock—will generally act to “catch” the system and bring it back to equilibrium. In this light, the distinction between fragility and anti-fragility becomes more a question of timescale than of definition. Is the banking system fragile? In the short term, yes, but in the medium term we’ve managed to bring it back into order, and in the long term we’ve even managed to improve it. Calling something ‘anti-fragile,’ then, is just a fancy way of saying two things: first, that a system is complex; and second, that its observer has a long time horizon.
There’s also a definitional issue at play. Taleb steadfastly argues that fragility is an absolute state; otherwise, it could have no opposite. If you were to put the concept of ‘fragility’ on a line, it would sit at −1, ‘robustness’ would sit at 0, and ‘anti-fragility’ would come in at +1. The problem with this formulation is that it forces ‘fragile’ into an unnatural box. Fragility is by nature a relative concept—things are fragile only in relation to other things. Glass may be fragile, but in a world of crystal, we’d probably consider it robust. Put differently, fragility isn’t a state of being so much as a value judgment, and the same goes for robustness. In that frame, anti-fragility is little more than a synonym for robustness.
I realize that Taleb attempts to distinguish things that simply react to shocks from things that actively grow stronger, but I think the distinction fails once you get more than a step or two into the weeds. Whether a system grows stronger or weaker from a shock can’t be known until the nature of the next shock is understood. Stretching a rubber band nearly to its breaking point makes it more robust with respect to subsequent smaller stretches, but more fragile with respect to subsequent larger stretches. Richard Reid made the TSA more robust to shoe bombers, but more fragile to novel bombing methods (the search for a specific threat often blinds us to unspecified threats). My point is: whether a shock strengthens or weakens a system is less about whether the system adapts than about how it adapts, and one person’s antifragile adaptation is another person’s fragile adaptation. To Richard Reid, the TSA is antifragile. To me, it’s the opposite. So it goes.