
Complexity is a pharmakon - Conversation with Jabe Bloom - part 1

Jul 13, 2022

We talk with Jabe Bloom about how building software in an enterprise is a philosophical argument and how knowing some philosophy can help. What is a pharmakon, and what is the Goldilocks space of complexity? Technical debt as an example of how deterministic systems can have agency within a socio-technical system. The origin story of Jabe's model of the three economies, how it helps with budgeting complexity, and how it is a key to building successful platforms. Jabe currently works as a Senior Director in the Global Transformation Office at RedHat. He has been transforming and researching the organizational dynamics and interactions of management, design, development, and operational excellence for over twenty years as an executive, academic and consultant. Jabe is also in the final stages of writing his PhD dissertation at Carnegie Mellon in transition design.

Hosts are Kristof Van Tomme (CEO & Co-founder, Pronovix), and Marc Burgauer (Principal Consultant and Co-founder at Contextualise).

Also available on Anchor and Google

 

Transcript

Marc:
So Jabe, how did you come across complexity?
Jabe:
I think I actually, in high school, I started reading chaos theory primarily, which is an interesting kind of sub-variant of complexity theory, right? Chaos theory is actually deterministic complexity theory. Right? So my favorite version of it is to talk about double hinged pendulums versus, for instance, a single hinged pendulum. It's an interesting way to start talking about phase spaces, right? A single hinge pendulum has a phase space that's a curved line, and the phase space is just all the possible places you could find the end of the pendulum, and they're in a curved line. And if you know the release of the pendulum, you can probably determine roughly where on that line, at any point in time in the future, the pendulum will be, right? But if you have a double pendulum, which means you have a pendulum hanging off of a pendulum, it becomes a chaotic system. And it's because the phase space is then defined by all the possible places the second part of the pendulum could be in, and because of the double articulation it's significantly broader in space. And also because it can move in a retrograde manner. In other words, it doesn't just swing back and forth; the double pendulum can go backwards while the first pendulum's going forward. So you get retrograde motions. So it means that you can't predict where the pendulum will be, but it's deterministic in that you can predict that it will be within the phase space, right? So it won't ever leave the phase space, but you won't necessarily be able to predict where it is in the phase space. So I think those are interesting starting ideas for this stuff. And then from there, a lot of the stuff that I did has more to do with philosophy and the implications of complexity within philosophy and ontologies and epistemologies.
Then eventually I think, like a lot of people in our community, I ran into Dave Snowden and worked with him reasonably early at Cognitive Edge. I'm not the earliest by any stretch of the imagination, but I've been working with their ideas for an extended period of time. And then of course from there I quickly kind of... Dave used to have on his website a list of books. And I guess one of the reasons I really ended up super engaged in complexity theory to the level that I am right now is because Dave did this double thing for me. He made me think about applied philosophy. So like, 'Oh, the stuff that I'd been reading, I could actually use at work as opposed to just reading philosophy.' And then he also had this amazing collection of books, which included Alicia Juarrero's book, Dynamics in Action, which I think I read maybe... once a month? No, that's once every year or two. I read that book, and every time I go through it, I find more there. And that's been just an incredible experience, to be able to work with her ideas and try to understand the implications in organizational theory and in design theory. So I dunno, does that answer the question, kind of?
Kristof:
So you mentioned Alicia Juarrero. I had a similar experience, I think. The experience I had when I first met this material, it was kind of like this 'Turn the world inside out.' Like you have these cartoons where... well, no, that's kind of gruesome, but you know, where people are just, 'Oh my God, this just flipped around.' Did you have that the first time you read it? How did it evolve?
Jabe:
So I think there’s a couple things with Alicia’s work for me. So I, you know, a lot of the work that I did in college was, was around Heidegger. And so there’s several different kind of things that end up showing up in relationship to Alicia’s work where she starts trying to tie together things like hermeneutics and you know, the loops and self potentiation and things like that, into more kind of scientific theories at the same time. So for me, part of the mind blowing part of working with her stuff is that she is able to look at kind of like continental philosophy and science epistemologies and, you know, more rigorous analytic philosophy and kind of pick and choose among them and still tell this kind of incredible story. So her description of complexity is complex, right? It is about the relationships between bunches of different ideas, I think. And I think to me reading her work partially you know… I have problems with rabbit holes that–Marc might know what I’m talking about–but so for her, for reading her has been a very engaged activity for me just because, I mean, she goes into Aristotle and then she goes, you know, she just, it’s such a broad palette that she works from. And I tend to want to go and read it in the original sometimes and try to come back and say, ‘Okay, so let me read it again and see if I get it a little bit better this time.’ So the mind blowing part to me has always been like the way that she’s managed to work all those things together. It’s really interesting to me.
Kristof:
I think this also is what makes it a little bit harder. Because you're a philosopher, you have that background. For me, a lot of the words, like epistemology, I had to go and look up; I did not know what that was. Do you get that same kind of... or was that easy for you?

Jabe:
No. One of the things I would say about philosophy, for people who come to it new, is this: take epistemology. There's not an agreement in philosophy on exactly what that term means. There's not an agreement on what ontology means, right? You can go read Heidegger's ontology, and people will tell you that his ontology is a metaphysic. And Heidegger's entire project was disassembling metaphysics. So, you know, that part, I think, is something that people should be less afraid of when they approach work like Alicia's. One of the things that I love about Alicia, too, is that Alicia taught in a community college; she taught normal, everyday people, and the language in the book actually flips back and forth between those very technical things and lovely little examples that are very easy to play with in your head. And so that's another thing I really like about reading her work: it's a nice invitation into some very difficult things to understand, and I think she does just a lovely job of opening up to you the possibility that you could understand those things. And I think that's great.
Kristof:
You are also working in software, you work at RedHat. How does this play into your daily life? You said earlier that it was really exciting that you could start using philosophy in your work life. What should I imagine with that? Like, how does that go exactly?
Jabe:
I think, well, I think again, you know, there's different parts of philosophy that I use regularly. So, an understanding of rhetoric and an understanding of argumentation and understanding how to build arguments, I think that's just a hugely important aspect of consulting, right? So there's that side of it, but the other side of it tends to be more what I consider to be model building or pattern finding, and working with others to do that. So, I read philosophy while I was a CTO and a chief architect. A lot of the work that I was doing was around trying to understand the world and then write down what the world was like in a very particular language, like maybe Java. And to me, this is not far from a lot of the philosophical work that you might be doing. Like, you need to understand the world, you have to write it down in a particular kind of argument. And it has to compile correctly; it's gotta fit through these logic gates that you need it to fit through in order to be logical or reasonable or whatever. And so I think there's a play back-and-forth there, where in software it's about that model building and understanding the world. And then from there, talking to executives, it's about helping them understand how to create models about the world, how to create narratives and stories that help people to engage in the type of work that they want to get done, right. To build, you know, what I might call a 'commons', or a common ground. And so, something like the three economies that I talk about: to me, it's a philosophical argument about how to build software in an enterprise. And it was built for that purpose. The purpose of talking about those things is to help people understand the relationships that they see inside of an organization in a way that they may not have seen them before.
And I think, you know, when I work on that type of stuff, especially inside organizations, I always... Eliyahu Goldratt has this idea where he says: the best models, the best versions of these kinds of philosophical arguments, when you say them to someone, they should go, 'Oh, that's obvious. Why didn't I think about it that way before?' And so, you know, that's the work of philosophy to me. Sometimes I talk about the idea of just waking up to what's already there. It's just that you haven't seen it before. It's right in front of your face. And then someone comes and reveals it to you and you go, 'Oh, right.' So that's the type of work that I really like to be able to do with people. And, you know, I try to do it as a consultant and as a software engineer or architect with people all over the world. So I'm blessed that people are willing to listen to me rant about these ideas.
Kristof:
One of the big things for me was realizing that software systems... Well... I'm feeling around in the dark because I'm still building up my vocabulary and my idea space, and I'm probably less rigorous than you and Marc are in the models that you're creating and how you think about things. But I'll do my best. So one of the things that was a huge a-ha moment for me was to realize that the software we are trying to create consists of deterministic systems, that we can actually predict what's gonna happen. And then, with the same mindset of trying to program the system and to make it highly efficient and predictable, we've been taking this into organizations, and there it just completely blows up. And for me, this was one of the key points that I got out of this constraint thinking. Is this something that is today already... How do you deal with that in your work life? Because business owners, C-level, often think in ways of, you know, 'We need to control this and we need to make it predictable.' And embracing the complexity is not something that comes easy to a lot of people. How do you deal with that?

Jabe:
I have like a couple different thoughts when I hear you talk about that. So the first one is: how do you deal with complexity, or how do you deal with people's desire for determinism? And there's two things that come to mind when I think about that. The first one is, I love this phrase, it is from my friend Shafer. He says, 'People want the assurance of mediocrity over probabilistic excellence.' They'd rather have assured mediocrity than probabilistic excellence. And they choose it all the time. And you know, this causes lots of problems. And there's actually, I think, some reasonably good psychological versions of this that you can get, of ambiguity aversion and decisions under uncertainty. And the idea here is that people will pretty regularly choose a bad plan over no plan that has potential. So like, bring somebody something, you say, 'Here's a plan and it might make us a hundred thousand dollars. We're not sure it will work, but it might make a hundred thousand dollars. But we have an idea that could make $10 million, no plan.' People will choose the planned hundred-thousand-dollar thing that might fail, because there's a plan. And there's less ambiguity, and they don't like the ambiguity and they don't like the uncertainty. So I think there's something there, first of all. The second version of it would be just this idea that... in philosophy it's called a pharmakon. In 'pharmakon' you should hear the word 'pharmacy', like medicine. A pharmakon in philosophy is the idea that there are certain things that if you don't have them, you get sick. Like if you don't have the right medicine, you get sick, but if you take too much of it, it will kill you. Right? And complexity is a pharmakon. No complexity, like the inability to work with any form of complexity, assures you stagnation and, in any marketplace, death, right? Like you don't have any options.
So like, complexity is where novelty comes from and innovation comes from; it's where new things happen. Because that's complexity. You know, sometimes I try to say that complexity is about the future more than about the present. Your ability to engage in it is about engaging in potentiality. And so if you don't have any complexity, you end up dead. If you have the right amount of complexity, if you take the right amount of medicine, it can make you healthy. It can make you strong. And if you take too much of it, you get drunk on it, and eventually you black out on it and die. And I think part of the problem here is that in a lot of situations, you get the problem of that missing middle idea. People only see the extremes. They want to either be highly deterministic or they want to avoid complexity altogether. So both of them push you towards simplifying systems in a lot of ways, without ever recognizing that kind of missing middle: there's a Goldilocks amount of complexity to have in an organization. And how do you get the Goldilocks level in there? Some of the stuff I talk about with the three economies is about that: investing your complexity across the organization in the right places, because in some places the complexity will pay off more than in others. So have you thought through that, and can you think about how that works? And then the last thing I'll just really quickly say is around software being deterministic. Sometimes I like to say software is a text that's being read by three agents. It's a body of writing that's being consumed by three agents. The first agent is the software engineers, the team that's actually writing the text and reading the text and modifying the text. And so there's a lot of work inside of what we call software architecture about improving the legibility of the text.
Like, you know, there’s pattern languages and there’s different ways of thinking about writing things and even different, you know, forms of language, like the difference between object orientation and function-based languages. And all of those are about the making the text, the code accessible to a human agent that’s consuming. The second consumer of the text is your end user, the person who actually ends up using the system. Now they’re not reading the text directly, but the text is materializing itself into this software system that they’re interacting with. And so you know, that ends up being tested and thought through, by different kind of, you know, ways of testing. And we end up having unit testing and systems testing and integrations testings and all of these things, because those are about making sure that the text aligns with the expectations of a consumer, the end user. Yeah. And then last agent is the CPU. The CPU reads the text, right. And this is the deterministic version that you’re talking about. So the determinism is that the CPU has to be able to understand it. Yeah. Now that means that you have two social systems interacting with a deterministic technical system. It’s a socio-technical system. It’s a socio-technical system that’s mediated through text. And the text is the way that the social system, which, you know, have a tendency be a lot more complex in ways other than you would think of a deterministic system or the CPU. So, you know, at the end of the day, the CPU doesn’t see the text and the interactions as being complex, it’s just doing math. So in this way, like you get this really interesting idea, I think, about the way in which I think that often complexity is. I think there’s examples of complexity in nature. But let’s put that aside for two seconds. In artificial systems like software, the complexity we see there is because of the entanglement of human cognition in the deterministic system. 
The deterministic system outputs something that has no meaning to the CPU. It only has meaning to the engineers and the end users, and the engineers and end users are not deterministic systems. They're non-deterministic systems; they're complex systems. So anyway, I think when we talk about socio-technical systems, part of what we're trying to say is that the activity of software engineering is the joint optimization of these three spheres of agency: the agency of the CPU, the agency of the software engineer and the agency of the consumer. And it needs to be jointly optimized. You can't optimize simply for the CPU. You can't optimize simply for the engineer. You can't optimize simply for the end user. And I think in a lot of organizations, the kind of design-led organizations tend to over-emphasize the customer and underestimate the engineer. Engineers can frequently over-emphasize the deterministic parts of it and underestimate the, you know, consumer parts of it. So there are these weird lacks of joint optimization that happen. And that's, to me, part of what we need to do better inside of organizations. When I refer to this as socio-technical architecture, what I mean is that the architects and people trying to manage the whole system need to understand the multiplicity of agencies that are engaged in this mediated text that they're trying to manage.
Kristof:
Which is fascinating, because at least for me, this is kind of like opening up this image of thinking about software as just this deterministic thing. Like, it's just code, you execute it. And then you've basically just inverted the definition, which is now, 'Well, actually this is an interface between complex systems.' And it's all about the complex systems, rather than just about, you know, the numbers that are being crunched in the middle.
Jabe:
Or, you know, the argument ends up being, from a socio-technical systems perspective, that if you optimize one of those subsystems without considering the others, if you don't jointly optimize it, you get a suboptimal result, you get a negative result. And so, you know, the other version of this that I try to tell people... like, I was using this kind of language about the CPU having agency, and people are like, 'Ooh, that's weird. What are you talking about? That sounds like neo-animism. Like you think that the CPU thinks, or has ideas or will or whatever?' Well, I think technical debt is a perfectly good example of the agency of a deterministic system, right? The agency is that the software engineer wants to do something, but the code is like, 'That's not the easy way to do it. The easy way to do it is the way I want you to do it. I have an opinion about how this should happen.' Of course, in some way someone stuck those ideas in the computer, but that doesn't mean that the mediated deterministic system isn't now pushing back on the human agency. And software engineers, I think, generally have a sense for this, because they use all sorts of weird language to talk about code. They talk about code smells, right? Or having bad taste. They use non-logical, non-rational descriptions of the way that code is. They talk about a sense of the code. And to me, all those things are trying to describe, again, the fact that the code, or the material conditions that are established in the system, have an impact on the decisions that the coders can make. And that means that, again, we have to recognize that it's a jointly optimized system. You can't do just one. You can't do whatever you want in software. And I think, again, there was a huge amount of this when we talked a lot in the early two thousands about knowledge work and these ideas.
We talked a lot about the intangibleness of the product that we were making, right? We talked about the idea that you couldn't touch it. And I think that that misled a lot of people, because there's a difference between intangible and infinitely flexible. Just because you can't touch it doesn't mean that it's not solid, that it doesn't push back against you and have its own kind of weight, and that the interactions don't cause some sort of stability or cohesion to happen. So anyway, I think that's some of the stuff that we have to think through when we're trying to think about software engineering through a complexity lens.
Kristof:
In a previous interview with Matthew Reinbold, he said something really interesting, which is that he is aware of complexity, but he never talks about complexity, which speaks to what you said earlier about people being uncomfortable with it. How do you manage that with your customers and your engagements? Do you talk about complexity with your customers, or do you also use it as the secret sauce? Because I think that it becomes kind of like your secret weapon, because you're okay to go into the madness and come back with the wisdom, and other people are not comfortable going into it. Is that also what you do, or have you found a way of talking about complexity that resonates with customers?
Jabe:
So I like to lean into that idea when I'm talking to software engineers and things like that. So, when I talked about people using terms like code smells, I usually refer to that idea as a phenomenological understanding. It's not a rational or logical understanding. It's like, this is what it feels like. And so when I talk to people about complexity or complicatedness, you know, different ideas, I tend to try to talk to them in those phenomenological terms, so that they have a sense of how their body reacts to complexity, as opposed to trying to help them understand the rational versions of it. Because then, I think, if I can help them to recognize when they're experiencing complexity, then we can talk about what tools and ways of engaging with the complexity work, right? So, you know, sometimes I say complexity is almost the experience of complexity. And I think that, to me, this is an important aspect of it. The experience of complexity is doing a crossword puzzle and the word is on the tip of your tongue, as we say it in the United States. It's, 'I know if I keep thinking about this, it's gonna come to me.' And so that experience of expectation that you will... not that you know the answer, but that you know you can arrive at an answer, that is the experience of complexity. That's the way people experience complexity when they work with a complex system. And noticing that, stopping and noticing that. It sometimes feels like a tightness in your chest, or again, like that on-the-tip-of-your-tongue type experience. Noticing that and then going, 'Oh, I'm having an experience that is about emergence, right?' So I both need to, like... I can't simply expect that it's gonna come to me right now.
I have to let it emerge, but also I should get other people involved in this and try to help them understand the way that I'm thinking about it, because talking about it helps me to emerge the ideas, and sketching... so there are certain tools that end up being useful when you're in that state. And that's the way that I try to talk to people, I think, about engaging in complexity. Otherwise, I use complexity theory as first principles for building models that are closer to what people would already be thinking about. So again, the three economies concepts are complexity-informed, but I don't directly ever say, unless you want me to talk to you about it, the complexity versions of it. I don't explain how I got there. I only explain the implications of the model that I've built. So I think it's like blending the veggies into a smoothie to trick the kids into eating the vegetables. And then the other thing I would say is this: I've taught, I don't know, 200, 300 enterprise architects all over the planet over the last eight years. And some of them want to know, 'The way that you arrived here is weird. How did you get here?' And then you can unwind the complexity and help them get back to the first principles and build the model back up from there. And some people really appreciate that, and that helps 'em understand how to build other models. Right. So yeah, I guess it's the question of, when you're consulting, are you trying to help them understand how to build models? Or are you trying to help them understand models so that they can use the models more effectively?
Kristof:
I really want to hear this background story now, but I don’t know if you have time for that.
Jabe:
The background story of the three economies?
Kristof:
Yes!
Jabe:
I was working in a Fortune 20 company and we were trying to install a product line model. So we wanted to install a flow model, and at the same time they wanted to increase efficiencies. And the product lines themselves were highly decentralized, as in they had their own CTOs; it was big, big scale decentralization. And so I had to figure out a way to talk about concepts of federation, centralization and decentralization. These are the three big things that they needed to talk about, and how they were related. And I brought a lot of the work that comes from reading, again, about socio-technical theory, but also reading a lot of Ashby's conceptions of the way that complexity grows in systems. And then also, obviously because it's called an economic model, I have these weird ideas about economies, and I wanted to have a way of describing investing and what they were investing in. And eventually I came up with this insight about part of the problem with Ashby. So Ashby's law is basically this argument that says: for every agent in a system, the more complex the environment that it wants to engage in, the more complex its internal model of that environment needs to be. Right? So you basically get internal mirroring of the external system. So, really quickly: an old-fashioned thermostat has two registers, the current temperature and the ideal temperature, and it compares those two. And then it turns the furnace on and off based on the comparison between those two things. So it's got two registers, a very simple system. I have a Nest; it keeps track of the external temperature, the humidity... it's got dozens, if not hundreds, of variables. And the result of that is that it's much more efficient, because it can curve the energy utilization closer to the environment, because it knows more about the environment, right? So that's kind of the way that complexity increases.
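The thermostat comparison can be made concrete with a toy sketch. To be clear, this is invented for illustration: the registers, thresholds, and the occupancy rule are made-up stand-ins, not how a real Nest works.

```python
def simple_thermostat(current, target):
    """Two registers, one comparison: the old-fashioned thermostat.
    Returns True when the furnace should be on."""
    return current < target

def richer_thermostat(current, target, outside, occupied):
    """More registers mirroring more of the environment, so the controller
    can curve energy use closer to actual conditions (Ashby's idea)."""
    if not occupied:
        target -= 3.0   # nobody home: tolerate a cooler house
    if outside >= target:
        return False    # warm enough outside; don't burn fuel
    return current < target
```

Both controllers keep the house warm; the richer one simply declines to run the furnace in situations the two-register model can't even represent, which is the point about internal models mirroring the environment.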
And so my insight one night was roughly that most people think that that variability is just in the system. And what I thought about was, 'Wait, what if there was a way of saying that the variability doesn't have to be smeared across the system, that it instead could be invested intentionally across the system? Would there be places where variability would be good, and would there be places where variability would be bad?' And the really interesting sub-version of this is that inside of Toyota there's a weird thing called 4VL. And in there, the Vs refer to the difference between variation and variability. And so variation is good and variability is bad. So what's the difference between these two things? Well, variation is the intentional creation of difference. We want different variations of our car models. That's good. It allows us to tune our system to the marketplace. Variability is the unintentional creation of difference inside the system. Bad. We don't like this, because it increases costs and lowers efficiencies and causes defects and things like that. Right. So it is noticing that the difference inside of a system is valued inversely, right? In some parts of the system it's good, in some parts of the system it's bad. So now you get a model that says: okay, what we're gonna do is squeeze the difference out of certain parts of the system, and basically allow it to exist in other parts. And then my argument ends up eventually being that, because again, this is, I think, a common conception of complexity, a system can only be so complex before it collapses. Like, the people can't manage it anymore because it's too complex. Well, this means that there's a budget, there's a limit to the amount of complexity that your system can have, which means that it's an economic decision, because the limit is what makes it an economic decision. It means that you have to budget it.
And so instead of budgeting it across the whole system, you intentionally invest it across the system in appropriate ways, and then you get benefits from it, instead of just smearing it across the system like most people do. And I think there are some really interesting implications, when we talk about APIs or platforms, which we should do at some point, in what happens when you don't invest correctly.
Kristof:
Wow. Okay, Marc. I've just gotta sit here for a while.
Marc:
Okay. So I'm gonna add the link to the Three Economies to the show notes so people can actually pick it up. But underneath the three economies, there's also... you talked about it just quickly earlier, but it might be worth going a bit deeper on your understanding of the commons. It's also probably quite important in understanding the current political climate in the world, et cetera. Could you maybe explain a little bit how complexity and the commons hang together?
Jabe:
Sure. So, like, if there are three economies, which in the model there are, because that's why I call it the three economies. The three economies that we talk about are called differentiation, scope and scale. And the really quick version of this is that in most organizations there's a dichotomy. They only have two ways of managing the system. They only understand two different economic theories. Differentiation is often called innovation in organizations; any activity around product ownership or product management or design-led work or innovation, all those things are differentiating-economy ideas. The idea is that you create value in this economy by creating something that's unique in a marketplace, that's novel or different. It basically thinks about the world as: a customer comes and compares two things and picks one of them. And if they can't differentiate between the two, you're not doing a good job in that marketplace; you should differentiate your product from the other products. That's differentiation. Scale is the other common one that everybody knows about. This is when your boss comes and says, 'We need to be more efficient.' 'We need to be able to do more with less' is another common phrase. And you'll see this often in operations, and there are specific reasons for that in operations, which we can get into a little bit at some point, but it's highly repeatable tasks, where the repetition allows for perfection. By which I don't mean literally getting to perfect, but the refinement of the process so that it squeezes waste out. The naive understanding of lean, I think, would be what we're talking about here, right? Like Six Sigma-style lean. And then scope is a different argument. It requires understanding a really interesting idea, I think, which is this idea called the tragedy of the commons.
So the tragedy of the commons basically argues: imagine you’re in a town where everybody has cows and there is a common pasture. And for years you’ve all agreed, no rules; it’s not like there’s some town council that did this, you just agree amongst yourselves: everybody can have one cow, we’ll all share the pasture, and it’s worked out for years. Then at some point some guy comes along who’s more sophisticated in capitalistic frames of how to leverage marketplaces and goes, ‘Let’s see here. If I put three cows on the field, the field might collapse, or people might abandon the field; it’s hard to tell what’s gonna happen. Either way, I’ll have three cows and everybody else has one cow. And even if I destroy the field, I’m still up two cows on everybody else. So screw it, I’m taking over the field; nobody’s there to stop me.’ And it causes the field to collapse, right? Then all the cows starve and nobody has cows, but the guy who started this activity temporarily had the advantage. And this is a problem that we see in a lot of places. There are some really interesting aspects to the whole story. First of all, the person who came up with the tragedy of the commons was a blatant racist. He was actually trying to argue that people should not be allowed to immigrate into the United States, because the United States was a commons that would be overrun by people who were not responsible. So in its original form it was an entirely racist argument, which is interesting. But the second thing is this: when we think about it, one of the qualities that ends up being really interesting is that it’s talking about a system where use causes consumption. Use is doing a thing, and consumption is that when you use the thing, it goes away, right? And so the tragedy of the commons is limited to systems where the resource under management is consumed in use, right?
So in IT, the stuff we all do, or at least the stuff the three of us do (I don’t know what the people listening do), those are specific resources: CPUs, networks, disk drives. You can overwhelm your CPU capacity, you can overwhelm your network capacity, you can overuse your disk capacity. Those are all exposed to the tragedy of the commons. If you don’t regulate their use with some sort of centralized, scale-based economy, they will be over-consumed. And of course there’s the opposite of that, right? If you just go out and willy-nilly buy up all the disk drives on the planet, then you’re likely to have over-invested in capacity and under-used the capacity, which is also bad, right? So things that can be consumed are exposed to those types of market risks. Now, there’s a whole other set of things that don’t work like that. They don’t get consumed in use. And my argument in the three economies is that these end up being things that look like software patterns, APIs, common functions, data, cloud-native patterns, right? Why? Well, if you have a centralized data store, and bunches of teams all start talking to and referring to the same customer data, does that make the data less valuable? Does it go away if all the teams are using it? No, it doesn’t go away at all; it actually becomes more valuable. So you have a system in your organization where use is not consumptive and increases value. Some parts of the system degrade in value when they’re overused and can be over-consumed; some things cannot be over-consumed and actually increase in value as they’re consumed. The scope economy is based on that. And the idea of the commons is that the easy way to build a commons inside of an organization is to focus on these non-consumables, these things that increase in value in use.
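The distinction Jabe draws here, rivalrous resources that are consumed in use versus non-consumable resources that gain value in use, can be sketched in a few lines of Python. All names and the toy value function below are illustrative, not from any real system:

```python
from dataclasses import dataclass

@dataclass
class ConsumableResource:
    """A rivalrous resource (CPU, network, disk): use depletes capacity."""
    capacity: float

    def use(self, amount: float) -> bool:
        if amount > self.capacity:
            return False  # over-consumed: the tragedy-of-the-commons failure mode
        self.capacity -= amount
        return True

@dataclass
class NonConsumableResource:
    """A non-rivalrous resource (shared data, an API, a pattern):
    use does not deplete it, and each new consumer adds value."""
    consumers: int = 0

    def use(self) -> None:
        self.consumers += 1

    @property
    def value(self) -> int:
        # toy network effect: value grows with the number of teams using it
        return self.consumers * self.consumers

disk = ConsumableResource(capacity=100.0)
customer_data = NonConsumableResource()

for _team in range(5):
    disk.use(30.0)       # each team grabs 30 units of disk
    customer_data.use()  # each team also reads the shared customer data

print(disk.capacity)        # depleted: later requests started failing
print(customer_data.value)  # grew with every additional consumer
```

The asymmetry is the whole point: governing the first kind of resource is about rationing, while governing the second is about maximizing adoption.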
And the argument for the kind of transitionary phase inside of the three economies is to say this: if you only had the first two economies in your organization, only differentiation and scale, those are the only ways you would think about things; and if you acknowledge that there are some things that are non-consumable and gain value in use, it means that you’ve been using the wrong economic paradigm and undervaluing some part of your system. You should find those things and move them into the commons of the scope economy, and that, in most organizations, is a platform. You should make a platform and put those things on the platform. You should make them easily accessible, lower the barriers to entry, accelerate use. All those things end up being ways of unlocking a huge amount of value in assets and resources that your organization already has; it’s just using the wrong economic methods to manage them. So there are two ways you see this being wrong in most organizations. Either you’re managing with central IT, managing as if it’s a consumable resource, so you’re restricting access to it: there are all sorts of policies and change review boards and nonsense that prevent people from getting access to the things, or slow the access down, that make it a pain in the a** to get. So the naive version of this is that self-service access is the first step towards a lot of it: just remove the barriers to getting access to these non-consumable resources, right? That’s one thing. And the other one is shadow IT. These are resources that are being managed in a differentiation economy because the teams inside the differentiation economy cannot access the things inside the scale economy fast enough. So they build their own, but then you get lots of parallel versions of the same thing owned by lots of different teams.
And you get variation inside the organization in a non-valuable way, because that variation isn’t targeted towards the market; it’s targeted towards the infrastructure of your company. And that’s bad variation, as we defined it at the beginning. So what we do, in theory, is we go through the organization and try to identify all those shadow IT pieces, those things that should be put over onto a platform. We try to find all the things that are being managed by central IT that are not consumable, and we try to expose those through some sort of self-service access. Once we get there, then we start doing things like applying patterns, so that we’re minimizing the variation a little bit further. It’s not only de-duplicating, which is frequently what people hear this as; it’s not just de-duplicating the resources themselves. It’s actually limiting the number of configurations so that you get less cyclomatic and polymorphic complexity, so you don’t have an explosion of the different ways that the components get put together. So, you know, one of the things I say to people sometimes is that lots of organizations go out and think that Amazon AWS is a platform, and Amazon is not a platform. Amazon is infrastructure as a service; they give you access to primitives. It’s up to you in your organization to determine how those primitives fit together. That’s what a platform is. A platform is how the primitives fit together in a commonly understood way, so that people don’t create all sorts of permutations of the ways they interact but already have a predefined set. And, you know, again, I think there are market forces at play. That’s why Amazon exposes primitives: they don’t want to declare themselves, like, a banking platform. They make lots of money by not declaring themselves a banking platform and just giving access to primitives, composable bits.
Now you get, ‘Okay, well, how do we compose those things?’ That’s your platform. The opinionated composition of primitives is the platform, to a certain extent. So, that’s the commons. And the challenge here, I think, for most organizations, and why commons and commons-ing is challenging, is that most organizations, even organizations that do agile and all these more advanced or in-vogue ways of working, are still team-based. And they’re often highly focused on the differentiation economy. And so the result is that teams are very rarely, if ever, asked to consider how to share the creation and maintenance of a resource. They don’t know how to do it very well; most organizations have very few ideas about how to do it, right? So to me, for instance (and again, Amazon from a business perspective, at least profits-wise, seems to be perfectly successful, so this just proves there are lots of ways to organize things), the kind of cell-based structure of Amazon, the pods that only interact through APIs, means there isn’t really a commons there. There are just direct interactions of cells at that point. A commons would be that they have a way of contributing to a central shared resource that they’re managing all together. And I do think, for instance, that places like Google clearly have more of a commons-based theory of operations. They build a platform that everyone shares. You can see this in their cookie-cutter architecture concepts, but also in the emergence of Kubernetes, which was originally Borg inside of Google, and which is, to me, again, a response to complexity. So the thing that I often say is that the reason why platforms emerge naturally inside of an organization is that they reach a tipping point of complexity, where they can no longer manage the complexity that’s being generated by the differentiation teams.
And they need a choke point, a way of down-stepping; I usually call it a gearbox. The platform is a way of down-stepping the complexity a little bit so that the infrastructure has an opportunity to do anything scale-based, because otherwise you get just too much differentiation all the way back into the scale economy. And I think the reason why you see a lot of organizations struggling right now with platforms is the same reason that Borg emerged at Google: a certain level of complexity causes platforms to be required for the system to be operable. Otherwise the complexity gets smeared, like I was talking about. And so what we’re having right now in a lot of organizations is… You know, I think that in the early internet, the first version of this kind of commons-ing was open source. The internet reaches a level of complexity of interactions where you get an emergent property of that network being open source projects, right? Google reaches that same tipping point and has a different answer to it, which is Borg, which is a similar way of dealing with the complexity and downgrading it. And now what you’re seeing in organizations all over the world is that they are arriving at that tipping point of complexity. Their systems are getting so complex that they cannot continue to manage them without adopting some sort of platform. And that’s what’s happening. They’re not doing it because it’s a cool idea; they’re doing it because there’s no other way to manage their systems.
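The ‘opinionated composition of primitives’ Jabe describes can be sketched as a toy platform layer: raw primitives stay with the platform team, while product teams can only request a short menu of sanctioned patterns. Every function, pattern name, and parameter here is invented for illustration and not taken from any real cloud API:

```python
# Toy primitives, as an infrastructure-as-a-service layer might expose them.
def make_vm(cpu: int, ram_gb: int) -> dict:
    return {"kind": "vm", "cpu": cpu, "ram_gb": ram_gb}

def make_db(engine: str, storage_gb: int) -> dict:
    return {"kind": "db", "engine": engine, "storage_gb": storage_gb}

# The platform's opinionated compositions: the only sanctioned ways the
# primitives fit together. Anything else needs an explicit exception.
PATTERNS = {
    "web_service": lambda: [make_vm(2, 4), make_db("postgres", 50)],
    "batch_job":   lambda: [make_vm(8, 32)],
}

def provision(pattern: str) -> list:
    """Product teams call this instead of the primitives directly."""
    if pattern not in PATTERNS:
        raise ValueError(
            f"unsupported pattern {pattern!r}; "
            "request an exception with an explicit business case"
        )
    return PATTERNS[pattern]()

stack = provision("web_service")
print(stack)
```

The design choice is the point: the platform does not hide the primitives, it pre-decides their composition, so the permutations that reach infrastructure stay bounded.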
Marc:
You used this concept you named “downgrading the complexity”. I haven’t talked to you about this before, but I know that this is something very different from what most organizations actually try to do when they face complexity. What tech leadership often advocates is actually ways of reducing it.
Jabe:
That’s right.
Marc:
What they go after, obviously, is often the artificially human-induced complexity that we don’t need: management overheads and other stuff that we don’t need to talk about. But I think it would be interesting to understand what you mean by “downgrading”. How is that different from reducing complexity?
Jabe:
Yeah. So I wanna try to paint a little picture and see if I can get you guys to think about it, just kind of verbally. So when I think about this, think about the platform… There are many teams, and the many teams are all addressing different niches inside of a market, right? Like in a bank: there’s someone doing high net worth clients, retail banking, you know, whatever. They have all sorts of different niches they’re trying to deal with. And in theory they need access to computers and networks and databases, and then they could build their own system from scratch. Each of them could build their own system from scratch. Obviously this seems like, eventually, maybe not the best idea, but you know, you could do it. At some point there’s a certain level of complexity that gets involved, where, like, ‘I want to transfer my retail account to the high net worth account, because I won the lottery’ or something like that. And this starts happening more and more frequently, and the interactions between the systems become more and more aggressive and entangled, and then people start going, ‘Okay, we can’t do this thing where each product line has access just to primitives. What we need is some way of choking, or downgrading, the complexity, where we share some of the resources. So for instance, let’s try to centralize, or common, the user accounts, so that at least Jabe isn’t spewed across ten different databases in different shapes and stuff like this.’ Right? So then we go off and we start building a platform to share data or something like that. Now I want you to think about the idea that what happens in organizations when the platform isn’t easily accessible and easy to use is that the differentiation economies, because they’re so tightly bound to market cycle times, if they can’t get what they want within a market cycle time, will go around the platform and build against the primitives anyway.
And instead of having a downgrading of the complexity, where the complexity, or the variation, in the organization gets smaller and smaller as we move more and more towards infrastructure, what ends up happening is the complexity literally skips the platform and leaks into the infrastructure groups, right? And the result of this, to me, is interesting. And again, it’s the thing that I don’t think people think about well enough: there are two cycle times in your organization, and the platform is generally isolated from them. It’s one of the reasons why a platform is an interesting kind of place inside of an organization. Platforms often don’t directly touch the market. They’re isolated from the purchasing side of the market by infrastructure, right? In other words, the people who buy from the market are in your ops department and the people who sell to the market are in your differentiation economy, so the platform itself ends up being kind of isolated from the market. There’s of course the idea that your platform might be directly consumed by a marketplace, but let’s leave that to the side, because I think there are different ways to deal with that. But anyway, the cycle times for buying hardware are really big loops compared to the differentiation economy, which runs like two weeks, daily, whatever; it spins real fast, right? Your infrastructure department wants to replace their routers every three years and replace the CPU pool every two years. They want to know how much you’re gonna consume so they don’t over-purchase, but they also don’t want to under-purchase, because they get penalized for over- and under-purchasing. So these cycle times are important, right? And if you entangle the two, what ends up happening is you tie the hands of the differentiation economy by saying, ‘You need to upgrade Oracle right now. You’ve run out of chances.
You gotta upgrade Oracle right now.’ And the product owner comes back like, ‘I’m about to miss the Christmas marketing window!’ And they’re like, ‘I don’t care. You gotta do it.’ Right? So that’s one way it can happen. The other way is what happens a lot: the differentiation economy overrides the operations infrastructure. ‘You need to go to eBay and buy every 386 processor on the marketplace, because we can’t find any more and we’re not gonna upgrade the software that runs on them. So you absolutely have to go purchase this wacko stuff that we’re over-committed to.’ Because there’s a direct tie between the applications and the infrastructure itself, right? Or, ‘I’m in operations, and we’re gonna move from spindle disks to RAM-based, you know, or silicon disks,’ and it’s gonna radically change the way you should be using your database indexing. Well, what if you’re so stuck inside of your market timings that you’re not able to do that? The whole idea of the platform is that that stuff happens at the platform level, because the platform level is to some extent isolated from the market itself. Yeah. So there’s a team living there that’s negotiating things like upgrading your Oracle databases, re-indexing your databases to use the new disk drives, blah-blah-blah. And so you get this idea that that part of the organization ends up being the part that enables these other two to operate in their native or natural cycle times. And in theory, you stop paying the cycle-time or market-timing costs that happen when these two things get entangled. Right? What we’re trying to do is create a natural way in which the variation that we have in the differentiation economy consumes slightly less variability in the platform.
And what I mean by that is: again, let’s consume the primitives from our ops department in predefined patterns, where the differentiation teams have to give me a very explicit business reason why they should be able to configure the primitives in different ways or to request different primitives, right? So we then maintain this isolation between the two, and we can upgrade the primitives in a different cycle time. If we don’t get that to happen, and we don’t get the adoption of the platform to happen, then these teams will go around the platform and start directly consuming the primitives again, and you’ll end up with the cycle times tied up, because they’ll get temporary advantage out of it but not long-term advantage. I don’t know if that answered the question, but that’s the idea that I have in my head when I talk about the way you want to step down the complexity.
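A back-of-the-envelope way to see the ‘gearbox’ effect Jabe describes is to count the distinct configurations that can reach the infrastructure layer with and without a platform. All the numbers below are made up purely for illustration:

```python
# Toy arithmetic for stepping down complexity with predefined patterns.
teams = 20
primitives = 10   # VM sizes, disk types, network options, ...
options_each = 4  # ways each primitive can be configured

# Without a platform, every team composes primitives directly, so the
# infrastructure may face up to this many distinct configurations:
without_platform = teams * options_each ** primitives

# With a platform, teams pick from a short menu of sanctioned patterns,
# so infrastructure only ever sees the patterns themselves:
patterns = 5
with_platform = patterns

print(without_platform)  # 20 * 4**10 = 20,971,520 possible configurations
print(with_platform)     # 5 configurations
```

However rough the numbers, the shape of the result is the argument: the platform does not reduce what differentiation teams can ship, it reduces the variety that the scale economy has to absorb.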

End of Part 1 of the episode we recorded with Jabe Bloom. You may also wish to visit Part 2.
