
A superior technology doesn’t mean success - Interview with Dawn Ahukanna, part 2

Oct 25, 2022

We continue our talk with Dawn Ahukanna (Design Principal and Front-End Architect at IBM). She points out the importance of variability and sheds light on what makes an API successful. Dawn also makes an exciting analogy between REST and SOAP APIs and the VHS and Betamax cassette formats. In this conversation, we also deep dive into how one can define complexity - if possible - and whether complexity can be helpful.

Hosts are Kristof Van Tomme (CEO & Co-founder, Pronovix), and Marc Burgauer (Principal Consultant and Co-founder at Contextualise).

Also available on Anchor and Google

 

Transcript

Marc:

You said something earlier that sparked a question I'd like to explore with you, because you'll understand what I mean by it. You said, and I don't know where it comes from, but I've experienced this, it's articulated in all sorts of fields, you hear it in agile, you hear it in service design, in lean: start at the end and work backwards. But actually, and I'm going back to the biological metaphors you used in your talk, and Alicia Juarrero says "we are now entering an age where we should move away from physical metaphors to biological metaphors." When you say “the end,” what you actually mean is the whole. When we build an API as a whole, we want to start by thinking about what it does in the environment. How does it sit in the environment? What does it correspond to? Who benefits? All before we think about how we build it. So when you say the end, is that what you mean?

Dawn:

When I say “the end,” I mean: what impact is it going to have on the person that's using it? Remember what I said about that stacked Venn diagram, right? If you only look at the containing system, which is a systems-thinking approach, that just gives you a higher-order abstraction, and there's an infinite number of possibilities in that higher-order abstraction. So start with what it is going to be used for. The way I think about this is like you have a dartboard with a bullseye on it. You can hit the edge of the board; that's not what you want. You can hit the first ring; that's not what you want. You can hit the second ring; that's still not the bullseye. Knowing whether you are in the vicinity of that bullseye is a way of course correcting. You already know, “okay, somehow I missed a trick, I deviated too far, I need to come back and course correct.” It literally acts like a guide rail. So, by making sure that you can always achieve your outcome... What happens a lot of the time is that the pin you dropped and said “that's where your bullseye is” moves, because you have a better understanding and you realize you were pointing at the wrong bullseye. So now you need to adjust the location of your bullseye and then start that whole thing again. But at least you are always oriented in this abstract space, trying to figure out what it is that you're doing. So that's what I mean by “the end”: I literally do mean the end. What is the purpose of this thing? And work backwards. First, yes, look at the surrounding or containing system and make sure that in your Venn diagram the thing you're working on is not hanging half out; it's fully contained within that system, within that circle.

Kristof:

In that context, how do you deal with unexpected affordances that the system offers? This is one of the things that I'm super excited about with APIs, and I've told this to customers: "if you are not being surprised by what people are doing with your APIs, you've been doing it wrong, because it means that you are not providing sufficiently abstract affordances that people can do new things with, and that's the whole point." But from that viewpoint, you don't even know yet where the bullseye is. Or there are hidden bullseyes, a multidimensional set of bullseyes.

Dawn:

You pick one. You pick one and you start with that. And then even when you get there, you figure out, "okay, actually, I needed to go a little further left," but you always have a focus, that one. And then, something that you said, Kristof, about not necessarily being sure, or having a space: then, to borrow from DDD, draw the bounding box. As long as you are inside that box, you'll have a pretty good chance of being able to create a set of APIs that make sense for that context. But if you are outside of it, then the chances are pretty low. So it's really about somehow defining the scope of your focus, and it can't be too large. I prefer the bullseye version, because then anybody can answer: that is the point X, Y, Z, is that where we are? Are you there? No. So let's keep going. And it saves some of the bikeshedding conversation, "but what if we went left?" No, stop talking. Just do, let's get there. When we get there, then we'll figure out if we're right.

Kristof:

In the preparation you said something that really made me think, or that I'm curious whether you have some really deep meaning behind. One of the questions we had in the preparation was: what's the difference between REST APIs and SOAP APIs for you? And you said VHS and Betamax. What's the story there?

Dawn:

My preference is SOAP.

Kristof:

Oh, interesting.

Dawn:

Because it's specific. If you have a SOAP API, the SOAP API is self-describing and says what attributes it's going to send. There's nothing to guess. If you make a mistake, you get a correction. With an HTTP REST API, you get a bunch of JSON and then you still have to inspect and interpret and figure it out, other than when you have an error condition, like a 500, so the server's blown up, or a 403, access denied. That's a clear server response. But once you get a normal response, you've got a whole bunch of introspection to do, to figure out “what did I just get? Is it valid? Is it not? Why do I have to do all of that when I ask for something?” Either give it to me or don't. So I think SOAP is technically superior, and that's why the analogy with VHS and Betamax came in. Now, the story behind that, for people who are much younger, and I was talking to my niece the other day: we used to have a recording format for video, which was magnetized tape. And there were two formats that were battling it out. Betamax was technically superior: it was a smaller cassette, it was higher quality. So that's SOAP. VHS won out because it was more accessible, so people have never even heard of Betamax. Have you heard of video tapes? I can't even remember what VHS stands for, but that became the standard format for video recording and rentals. So just because you have superior technology doesn't mean that it's going to be successful and it's the right answer at the time for what it needs to-...
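To make the contrast Dawn describes concrete, here is a minimal Python sketch of the introspection a plain JSON response demands. The endpoint and field names are hypothetical; the point is that nothing in the response itself guarantees its shape, whereas a SOAP service would have declared it up front.

```python
import requests

# A sketch of the "whole bunch of introspection" a plain JSON response demands.
# The endpoint and field names below are hypothetical, purely for illustration.
def fetch_order(order_id: str) -> dict:
    response = requests.get(f"https://api.example.com/orders/{order_id}")

    # The clear-cut cases Dawn mentions: 5xx means the server blew up, 403 means denied.
    if response.status_code >= 500:
        raise RuntimeError("server error")
    if response.status_code == 403:
        raise PermissionError("access denied")
    response.raise_for_status()

    payload = response.json()

    # Nothing in the JSON itself guarantees these attributes exist or have the
    # right types, so the client has to check by hand on every call.
    expected = {"id": str, "amount": (int, float), "currency": str}
    for field, expected_type in expected.items():
        if field not in payload or not isinstance(payload[field], expected_type):
            raise ValueError(f"unexpected response shape: {field!r}")

    return payload
```

With SOAP, those guarantees typically live in the service's WSDL and XML schema, so a mismatched message can be rejected up front instead of being puzzled out by the consumer.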

Kristof:

We once had a conversation with Robin from TUI. And he said that for him SOAP, because it was so well defined, coupled the different parts of the system too tightly together. Well, this is my interpretation, I'm paraphrasing very much. By having it less well defined, so by having an actually slightly crappier system, or, well, maybe not crappier, but, and I'm paraphrasing, so these are not his words, these are my words: it enables new, unexpected uses. It separates different parts of the system and creates that boundary again, that semi-permeable boundary, where the same message can mean different things on the two sides. But yeah, you're right. It's cleaner if you can define the domain, if you can define all the different attributes and have it.

Dawn:

Well, it comes back again to that conversation where I was thinking about space. My personal opinion is that where an API really thrives is the commodity space. You don't keep reconfiguring your washing machine: the interface is standard, it's the same. But as a designer, I know things don't just turn up in the commodity space. They start from genesis and go all the way through a whole bunch of evolutions and interactions with people in order to navigate the path from an idea to "this is the configured use." And that's because we're human. That happens because we are human. And if you don't allow for that, then any system that comes prepackaged, preconfigured, pre-everything will only target the people who prefer that. And everybody else will say, "don't care, no value to me, I'll go off and do something else." This comes up again and again and again, and that's why that story about VHS and Betamax literally jumped into my head when I saw that question. Because personally, for the purposes that I use it for, I'd go with SOAP, but every day I am using HTTP REST APIs.

Kristof:

I think there's even a deeper layer to that comparison, because I read the story of the business side of why VHS won. As far as I remember, it was because they had a bigger consortium. So they had a bigger network, a bigger community of people that were...

Dawn:

Accessibility. VHS was made more widely available, and it didn't matter how fantastic Betamax was.

Marc:

A very crucial point was that VHS was good enough. So yes, Betamax was technically superior, but the user experience... I'm probably one of the few people who actually ever saw a Betamax kit at home: my rich friend had a Betamax next to a VHS. And unless you actually had an expensive television, you couldn't tell the difference. You needed a high-resolution monitor, which at the time was sinfully expensive, to actually see it. VHS was good enough.

Dawn:

That's an excellent example, Marc, because the API is fantastic, but it needed another thing in order to exercise the full value. And you are still plugging into some crappy TV. So why would I spend $500 on a system when I could get the same experience for a hundred dollars?

Kristof:

I think it's also slightly more lossy... Well, because it's less defined, it also enables the network better, because it can be different things for different people or for different organizations. So it's a very interesting analogy, VHS and Betamax.

Dawn:

You've got me thinking, Kristof, because that, I don't know what it is: it's random noise. There has to be some of that in every experience with people. I know that as a designer, but I never thought about it until we were talking about it now, in terms of these definitions. And yes, the business side could have forced it, Marc, but it's not just that it was good enough. It was expressive enough.

Kristof:

And because it's lossy, because it's less defined, it also can dance a bit. And I think that was the thing with SOAP: you had to understand the full system in its entirety and predict what you were going to do with it in all the different parts of the business. And you couldn't just switch what one part of the business was doing, because you had to change your whole API, because the model had to change if you changed it in one location, which meant that the whole organization had to change. And that made it really expensive and difficult. I've talked with people that are doing these "we're gonna create the one model for the whole company" things.

Dawn:

As soon as anybody says that, I just close my laptop and leave. Because, good luck. Clearly define the ones that you will cater for and leave the space again. The noise. This has definitely got me thinking. Maybe it's not noise, but, another factoid: 80% of an atom is empty space, 80% of humans are made up of water. This thing keeps coming up again and again. There's this allowance for space and flexibility, for something to be created. I don't know if it even has a name, but if you create an experience that doesn't have that... Audiophiles have gone back to records, because you can get the crispness and an experience that is unique. It's not just a snapshot, repeatable and the same. There is variability. Without that variability, I can almost guarantee you are in for an uphill climb. So every time I hear somebody say, "oh, I'm going to create this once": good luck. It's fine. Call me when you're serious about making something that people want. Because that is not the human experience. I don't have a definition for it, but at least in my experience as a designer, there is something about that. It's not a void, but space. That allows for other things to happen.

Kristof:

It's the variability, that's the point. And the constraints are enabling variability. It's not prescriptive, it's enabling.

Dawn:

Yes. It's not prescriptive, it's definitive. And then knock yourself out and do whatever you want with the rest. As soon as you try to be prescriptive, you might have some success, but you are basically trying to go all the way from idea to commodity in one jump. That's never gonna work. So, that's what literally popped into my head when I saw that question, HTTP REST or SOAP.

Marc:

I'm still a bit curious about this argument from your friend and want to explore that, because my understanding of these two frameworks might be wrong, but I believe you could recreate everything an HTTP REST interface looks like to the outside within a SOAP framework. So they are technically different, but SOAP is flexible to the extent that you could probably even mimic REST if you wanted to.

Dawn:

But I think that Kristof has a good way of describing it. SOAP was prescriptive, whereas...

Marc:

Yeah, but you were saying that the prescriptiveness is something valuable, so you could get the precise information that you wanted. It effectively had value, but you could just ignore it at the consumer end, I would guess.

Dawn:

No, no, it's a contract. It's a contract. It was prescriptive and it was a contract.

Marc:

It might weaken the point I want to make a little bit. If we look at why VHS won over Betamax, it was not technical. It was social. It was an outcome of social factors.

Dawn:

I'd call it "not technical." It could've been social, it could've been economic, it could've been a bunch of things, but basically...

Marc:

We agree that if it had been decided on technical merit, Betamax would have won.

Dawn:

Yes.

Marc:

So it was social factors that made VHS more competitive, more viable, or whatever. It just had to be good enough. It's the same when we have competing technical standards, that's my experience. Okay, I haven't been doing tech recently, but I did it for a long time before that, and it was always my experience: what would win had nothing to do with the technical merits. What would win would always be determined by social components, social aspects.

Dawn:

Because the technology is no longer the limiting factor, Marc. Something else, like Kristof was saying, some other constraint is now the limiting factor in the world we live in today. We basically have computers in our hands. So the tech is not the limiting factor.

Marc:

So what are the limiting factors for making an API successful, if it's not the technical ones?

Dawn:

It's the application of those APIs. It's the social, it's the economic, it's the environmental. Let's take, and I'm being facetious now, blockchain and crypto as an API. That thing is just ridiculous: yes, it might be technically superior, but given the impact it's having on our already strained environment, it shouldn't even be allowed to operate. But because of other factors and incentives, like greed, we're having the conversation about its existence. It shouldn't exist.

Marc:

So in this particular case, I blame capitalism. These things get a lifeline, or they actually get a lot of life, because there are ways of siphoning money out of other people's pockets. So there are promises, "hey, you need a blockchain for this," where any techy person could say, "every database gives you that, you don't need a blockchain. You could just go and use MySQL and you will get most of those benefits. You could make it transparent if you want, you don't need a blockchain for that either." Same with crypto. Almost all the benefits they claim are available by other means. It's just, "hey, this is the new shiny thing." By making it look new and shiny, you get people to invest in it. And if we didn't have such a focus on the constant flow of money, where it has to go from somewhere to somewhere else, money can't rest, it has to be working, et cetera, a lot of these dysfunctions would not exist. I think most environmentalists, when they seriously look at what the causes are and what needs to change, arrive at capitalism as the engine that makes it so hard to make things sustainable. So crypto, blockchain, the way they are advocated. I'm not saying they're entirely useless, but the way they are advocated, this is just a means of people getting rich. We see this now in the crypto implosions: the people who were in early made the money and got out. The people making the losses were the people who were promised, "hey, everybody's gonna win here."

Dawn:

And the visual in my head is literally that they were telling everybody to point to one bullseye, but the real bullseye was somewhere else. And the technology by itself, is there anything wrong with it? No, but it was so misapplied, so misaligned, that you have to ask what the value is. The value to our environment is adding to the pressure that we already have. And the value to people? I saw an article the other day, I can't remember which system it was, where they showed the JavaScript and I just had to shake my head. They basically had logic in there such that if you wanted to do a transfer, your balance got set to one. So your money was still in the system, but because the ledger got set to one, it's like you go to your online bank account and it says you have one euro, or one pound in my case, or one dollar in your account when actually you don't. And there's no way then for you to mitigate and correct that and fix the balance in order to recover your money. How can somebody make a system like that? My head is exploding. How is that even legitimate?
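A purely hypothetical Python reconstruction of the anti-pattern Dawn describes (not the actual code she saw): the transfer routine overwrites the sender's ledger entry with a constant instead of debiting it, so the recorded balance stops reflecting reality and there is no record left to recover from.

```python
# Hypothetical illustration of the flaw, not the real system's code.
ledger = {"alice": 250.0, "bob": 40.0}

def broken_transfer(sender: str, receiver: str, amount: float) -> None:
    # Bug: the sender's balance is overwritten with a constant instead of
    # being reduced by the amount, so the ledger no longer matches reality
    # and there is nothing left to reconstruct the correct balance from.
    ledger[sender] = 1.0
    ledger[receiver] += amount

def sane_transfer(sender: str, receiver: str, amount: float) -> None:
    # What the update should look like: check funds, then debit and credit.
    if ledger[sender] < amount:
        raise ValueError("insufficient funds")
    ledger[sender] -= amount
    ledger[receiver] += amount
```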

Kristof:

I've been thinking a lot about transactional versus non-transactional relationships. The multidimensionality of value is... It's a scary thing, because we're trying to turn value experiences into something that's measured against the same standard. And that is collapsing a lot of the surplus value that was inside the system, and it is removing a lot of the experience that people have. It feels to me that crypto, or at least cryptocurrency, I don't know about all its applications, is trying to do this tight-coupling thing that we were talking about with SOAP across the organization. But now what they're trying to do is tightly couple all of humanity, trying to standardize the experience of value for everybody in society into a single, one-dimensional money dimension. And that's just scary, because it doesn't allow for that freedom, that dance, that normally gives the possibility to discover new things and to develop new things. It freezes experiences.

Dawn:

Well, I agree with you about transactionality, in the sense that it goes from one point to another and in one direction. But in the example that I explained, at the very least the balance that was in that person's account should have flowed in the direction that they wanted it to. Not get reset effectively to zero because of a built-in error in that system. I mean, if that happened in the banking system, there'd be riots. People were told that it had the robustness of the existing physical finance system, but that it was just electronic. And it turns out it's mostly electronic and not much else.

Kristof:

Without proper constraints to make sure that nothing goes wrong, or that somebody's gonna have to pay for it if it does. I guess that's what regulation does.

Dawn:

That's true. But back to a question that you asked me about trying to describe complexity... My working theory is that you have to experience it. There is no one description of complexity. So I need to give people enough experiences so they can build their personal mental model of what complexity means to them. Me trying to define it for them is like me saying, "I prefer, I don't know, lattes, so everybody has to drink lattes": back to the prescriptive. So it's about creating experiences for them to build their own reference. We say the same word, but it means completely different things to different people, because what you're doing is invoking your own internal mental model and memories. But that's just a facet of complexity, because it's multidimensional. I'm okay with that. It took me a long time to get to that point, and I was very frustrated that there wasn't one thing. But you can either allow for that variability, or you can try and prescribe everything. I'm not in the school of trying to prescribe everything.

Marc:

Going even a little bit further: first of all, you mostly can't talk about complexity to people who haven't experienced it, and you actually also can't talk to them if that experience is in the past; they have to be experiencing it in the moment. Then you have a chance to explain maybe some of the things they're experiencing through ideas informed by complexity. The only people you can actually talk with about complexity in theory are people who are aware that they have experienced it more than once, so they have more than one context. Dave Snowden taps into that when he talks about the children's party, because a lot of people are parents, and almost every parent experiences complexity when bringing up children. As soon as you have two contexts in which people have actually experienced complexity and you start to talk about some of these models that we're using to explain complexity, they can go, "I can see how that works in this context and in this context." And then suddenly it's "now I see why this has value." But I was really pleased you said that, because sometimes, you know, when we realize "hey, they haven't experienced it, so that's why we can't explain it," we still ask ourselves, "or am I just crap at explaining it?"

Dawn:

No, people have to, but you made a really good point, Marc: context, and multiple contexts. So I find that people who are bilingual understand it, people who have had multiple careers, multiple versions of an aspect. Back to my example about the three-dimensional cube: if you flatten it, it's a T, or it can be a number of configurations, and then you go, "okay, so this is this side of the cube, and this is this side. Okay, so I need to figure out what X, Y, Z coordinates I'm at at the moment inside this 3D space." But that comes, I think, with experience. And then once somebody, like you said, has experienced something multiple times, you give it a name: "oh, that's what that thing is." It's a little bit like falling in love. No two people fall in love exactly the same way. Yes, they can tell you that it's oxytocin and blah, blah, blah and a whole bunch of chemicals. That's not it. When you experience it, then you're like, "okay, so that's what all that conversation, the songs and all the rest... so that's what it is." But until you experience it yourself, it's very hard to describe.

Kristof:

But I think a lot of people choose to forget, because it's a scary experience. It's just like falling in love, actually, it's a really good analogy. Because when you're falling in love, it can be a very scary experience. Is it, or is it not? Does the other one love me or not?

Marc:

That's what I said: you actually often need to be there when they experience the complexity. Three months later they have rationalized away everything that would allow you to talk about the complexity aspect of it. It becomes a complicated thing.

Kristof:

It wasn't love anyway, because, you know, "they rejected me."

Marc:

Was just infatuation.

Dawn:

I think it's important to have language and vocabulary to talk about it. Because if you don't have a name for something, then it's very difficult to have the conversation and do the exploration and everything else. I'm trying to think of a time when the word complexity actually came out of my mouth; I'm trying to recall, maybe in the last six months. Also because my environment is applied - I'm not a lecturer, I'm not somebody teaching this - I'm usually solving problems or exploring certain spaces or trying to understand what just happened and what's going on. Another word that I think is overused is emergence. People use emergence like something just appeared out of nothing. It has been happening. Emergence means that we've finally become aware, and can acknowledge and see that thing, and maybe even give it a name. But it has always been there. So it's about being a bit more humble and a bit more open, curious and interested in what else is going on around me that I don't notice, that I don't see, that I'm not observing, but that is happening.

Kristof:

It's this quantum state, where you're experiencing multiple things at the same time.

Dawn:

I'm laughing, Kristof, because seriously, I could spend weeks talking to you and Marc. I have these conversations, and I can count on one hand the people I can have them with. If I said, "oh, it's a quantum state," people would be like, "OK, I think I'm leaving and going home now."

Kristof:

I'm worried now. I guess I use that a lot.

Dawn:

I use a lot of those things in jest, because yes, it is funny. In a talk that I gave two weeks ago to some of my AI colleagues, I mean, these are people diving into the world of data science, talking about algorithms and statistics and everything, I decided to be provocative, so I led with George E. P. Box's quote, "all models are wrong, but some of them are useful." Now, if you start from the premise that all models are wrong, what are you talking about? You're saying "the model is wrong, but okay, let's see how we can apply it and use it." It helps, I think, to balance the whole "this is the truth, this is the prescription, this is the..." No. Even the quantum state is a model for something that we've observed and are trying to understand. It is not accurate, because our understanding is going to move the bullseye. We're going to learn more and we'll call it something else. When I was in school, we had the nucleus, the atom, electrons. That was it. Now we have quarks and quanta and all the rest of it.

Kristof:

I want to go back to your AI colleagues. Are they familiar with this expression that all models are wrong, but some are useful? Is that something that's known to people in the AI space?

Dawn:

I used that particular quote because the person who stated it wrote it into several papers between 1976 and 1982. And he was talking about some really novel things at the time, which were amazing, and he was just trying to tell everybody, "look, this might be novel and it's great, but it's just a model." Essentially, this is the representation, it's helping us to get a better understanding. And he was a statistician - I can never say that word - and a professor, so he was respected. I think it's really important to start from a place of humility. I don't know everything. I know a bunch of stuff, useless information, good information, I've worked and been around for a while, but there's also a bunch of things that I don't know. So I should still be open to learning about those and pursuing those and finding out about those. Again, keeping the space. Once you're prescriptive and you're like, "I know everything"...

Kristof:

I wonder whether you almost have to be a utopian to be doing AI. Because to be able to believe in the model that you are creating, or to be able to work in AI, I think you almost have to believe in the model that you're creating, or in its usefulness. Is this something that people are aware of in that space? And I'm not talking about your colleagues specifically.

Dawn:

Yeah. So we're talking about machine learning, because, okay, let's be specific: AI is made up of a bunch of technologies. We're talking about cognition and intellect, but synthetic. Some are focused around language, so natural language processing, speech to text, text to speech, all of that. So you get those massive large language models, everything from GPT-3 to the ones that are popular right now, which are DALL-E and... what is it? Something Diffusion, Stable Diffusion or whatever it's called. Then you have all the mathematics around what machine learning is, but basically it's probabilistic evaluation. That's what that is. Then you have what I know as Visual Recognition, which is visual image processing, which is all about matrices: being able to take an image, break it down, and that's why you can go from one to the other. So something like DALL-E and the like combine all three of those with something that is called a transformer, in order to be able to take a text, join it to an image, and then transform that image, take it through a matrix, and do that. And then you have things that people are familiar with, like the agents and chatbots trying to provide information and knowledge in the form of customer service. And there's a bunch of other aspects as well. I think, from my personal observation of my colleagues who are working in AI, you have people who are really trying, as scientists, to understand and replicate human brain intelligence, evolving that understanding and doing it by creating those models. Those models are not reference models, they're exploratory models. Some of those exploratory models are then good enough, just like Marc said, for a particular application or a particular use to be leveraged. The problem, which most people haven't factored in with the whole sociotechnical interface back to people that we're talking about, is the way that those models are applied. There is no capability for the person who is experiencing them to intervene. I can't walk into an airport and say, "I don't want you to use my face for any facial recognition anywhere." I saw a nightmare advert the other day where, if you look at a board, it would put all your details up on a board that is I don't know how many meters in the air and how wide. I don't want to announce that I'm at the airport and I'm going to this particular flight. And they thought that was a good idea. Somebody thought that was a good idea. So it's not necessarily just the technology; what was not built into it was the possibility for me to opt out, coming into that and saying, "if you see me, just miss me, don't even acknowledge my existence. Just behave like I'm not there."

Kristof:

But isn't that the result of a reductionist history of looking at artificial intelligence? There's this expectation that there's a single truth, a single bullseye that's not moving. People believe that there's just one bullseye and we're just gonna hit that bullseye, but actually there are multiple bullseyes at the same time, and different people are experiencing them differently.

Dawn:

Yeah. But what you have to remember is that a lot of the things that we talk about as AI and models, that's the engine. Someone is driving the car. And the person who's driving the car is the one that's saying "this is going to be the prescribed use," which doesn't reflect reality.

Marc:

A better way of looking at it is not that there are multiple bullseyes, but that in a lot of AI or ML efforts, people are self-selecting what the bullseye should be without actually looking at the environment and where this is gonna sit. So they self-determine the bullseye without any regard to the environment, and that's why it ends up being destructive. Whereas if from the beginning they actually asked, "okay, so what's the environment in which this should live, where do we want it to live, and who are the people who will interact with it?", if they would start there, then they would actually have a decent bullseye. The issue is: they're not. So it's technologists and investors just going, you know, into their ivory tower, literally, and saying, "huh, let's cook this thing up."

Dawn:

I don't know if you have seen it, but there's a Princeton professor - I forget his surname - who has a PhD student. They had noticed that in the last couple of years there were a lot of papers being released in the political space. And one in particular said that it was 90% more accurate than any other current statistical method at predicting - not forecasting, predicting - civil war, based on looking at GDP and unemployment. And so they went and ran the numbers, and it was not just worse than the current tools, it was completely fictitious. So basically, what had happened in that case, and to your point, Kristof, is that those researchers are trying to live within a system where they only get credit if they publish certain papers in certain periodicals in order to advance; it's a race to the bottom. He called it the 'replication crisis'. But to me, it's not a replication crisis, it's a misapplication. I think Marc said it really well: they chose a problem that they should never have applied that technology to in the first place. Because the thing that caught my eye - I don't even know what the model is - the thing that caught my eye was that it started off by saying that civil war is one of the most traumatic experiences that any human being would ever go through. And then you flippantly go and take some erroneous model, put it on top of that, and say, "we can predict this." If they had talked to social scientists, talked to everybody else... It turned out that, even as data scientists, their model was completely inaccurate. And then he held a workshop, and they thought that maybe 300 people would show up, and 1500 data scientists showed up. That's how concerned they are about this race to the bottom. Because again, it's another form of induced greed. Capitalism by itself is not bad, but when you induce greed and excess, then things get twisted. That was something that I mentioned, and then one of my colleagues said, "oh, but they didn't do a gut check." And it's like, why do you think that is? What was the motivation for not stopping to think, "okay, this model that we created..."? There's a term in AI for it. It's called 'overfitting.' I'm sure you've heard of the example where, way back, in a military application, they were trying to determine whether a tank was present or not. What happened was that the day they took the pictures with the tank was a sunny day, and the day they took the pictures of the other vehicles was a cloudy day. What the model imprinted on was basically whether it was cloudy or not, because under supervised training that's the thing that mostly determined whether there was a tank in the picture or not. So obviously they put it out in the field and they noticed, "okay, this thing is crazy, because it's calling things it shouldn't," and it changed depending on the time of day. The same thing was a tank one moment and wasn't the next, and then somebody finally figured it out.
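A minimal, purely illustrative Python sketch of the overfitting Dawn describes, using synthetic data and scikit-learn: in the training set the label is confounded with image brightness (tanks were only photographed on sunny days), so the model scores almost perfectly in training and collapses once lighting no longer carries the label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, tanks_on_sunny_days):
    """Two features per sample: overall brightness and a weak 'tank shape' signal."""
    has_tank = rng.integers(0, 2, size=n)
    if tanks_on_sunny_days:
        # Flawed training set: every tank photo was taken on a bright, sunny day.
        brightness = has_tank + rng.normal(0, 0.1, size=n)
    else:
        # In the field, lighting has nothing to do with whether a tank is present.
        brightness = rng.normal(0.5, 0.5, size=n)
    tank_signal = 0.3 * has_tank + rng.normal(0, 1.0, size=n)  # weak genuine signal
    return np.column_stack([brightness, tank_signal]), has_tank

X_train, y_train = make_data(500, tanks_on_sunny_days=True)
X_field, y_field = make_data(500, tanks_on_sunny_days=False)

model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))  # looks excellent
print("field accuracy:   ", model.score(X_field, y_field))  # barely better than chance
```

The numbers themselves don't matter; the mechanism does: the model latches onto the easiest correlated feature, the weather, rather than onto anything about tanks.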

Kristof:

It was lovely talking. I like the VHS versus Betamax analogy because it has a lot of different layers. I'm gonna start using that, and I'll attribute it to you. A very, very interesting conversation, Dawn. Thank you very much for joining.

Dawn:

Thank you. I had a really good time talking to you, Kristof and Marc. I enjoyed this conversation. It's not one that I get to have that often before people start to look at me like, "okay..."

Marc:

Maybe we can do something about that.

Dawn:

I'd love to, like you said, Marc, continue. How can this get expanded? How can we engage people as they're experiencing this and let them know, "okay, that is what complexity is"? And "that over there is complexity, and these are the tools that you can use to think about it." Because my personal journey was trial and error, having wonderful conversations with people like you, serendipitously. I'm sure there must be a way for us to be able to do that without being prescriptive. Maybe in a more deliberate way. One thing I wanted to ask both of you: how did you come to this topic, this conversation, this realization?

Marc:

Different journeys. I studied biology and I came across complexity through some of the things that interested me in the biology realm. They talked more about chaos theory and strange attractors at the time, but then also philosophy of science. And then I got frustrated because I had no way of working in any of it... I come from Switzerland, and basically all the jobs were pharmaceutical, so there was nothing for me to work in. I actually just kept it as an interest until I bumped into Alicia at a UX conference and realized, "oh, I'm not alone in my interest in this. Oh, there's a whole field." I read some of the things that came out of it and I got an awareness of it, but I didn't know how it had progressed over the years from what I studied to when I bumped into Alicia. So for me it has sort of always been around. For Kristof it was more... well, you speak for yourself.

Kristof:

It has also always been around for me. At university I had a course which was... It was one of those courses that everybody was failing, and I was sitting in it thinking, "this is really, really interesting and it's amazing. I want to understand this, because if I can understand this, I think I can understand the world," or something of that feeling. But it was so bonkers, because it was just diagrams and strange attractors, and how many times you circled a dot determined whether it was going to be a stable system or not. I was like, "what?" But then that went away. I think I didn't fail that course, I don't remember. But then all these books about chaos theory and so on piqued my interest a lot: emergence, those kinds of topics. And then I think I got a new boost when I learned about Cynefin. That was one boost. Then I got another boost when I read 'Neither Ghost Nor Machine', which I afterwards learned was based on Alicia Juarrero's work. That book, when I read it, was this flipping of the world, from just looking at the world as it is to looking at the constraints. It was like I put on new glasses and was seeing the world in a different way.

Marc:

But the start for this was also that we had this conversation, Kristof and I, where we said: what we experience in the business world is that usually, when people are confronted with complexity, they look for something reductionist. They're looking for a recipe. And most people, when they even talk about complexity in the engineering world, tell you how to reduce it. And as biologists, our experience is actually, "no, no, complexity is what enables novelty. That's what gives diversity, what creates all these lovely variations in the world." So that's deliberate complexity; I think that was Kristof's moniker, you came up with that one. But that's why we said, "can we explore that specifically?" Yes, sometimes it's correct that you need to reduce, because sometimes you also create unnecessary complexity through misorganization and things like that. But what nobody was talking about was, "what are the good bits? Where should you embrace it? How do you get it to flourish? How do you get to the novelty? How do you get it to enable things?" Complexity theory gives you some theory, but we want to talk to people and see what the experience is. Also because, as you say, if people are experiencing these things, hearing somebody talk about their experience allows them to connect: "Oh yeah. I didn't know this was complex. And now that I hear you talk about your experience of it, I can connect it."

Dawn:

That's really like the last point I made. As you were talking, Marc, I literally looked back at my career, and I think we know it's there, but because it doesn't have a name and we can't see it directly, we kind of go looking for it. I started out as a scientist, where we know everything. Then computing: discovering, exploring. And now I'm a designer, literally in the novel area, where it's "okay, let's go and see what's happening." So I would love to understand what a job in the chaos space would look like.

Kristof:

I think that we need to go beyond... We come from this world where we're trying to turn everything into machines, where we're trying to take people and turn them into machines. And if we want to go beyond the machine world, we need to become deliberate about complexity. We can't try to treat people like parts of a machine and try to determine what they're gonna do. We have to give them space to be complex and to fall in love. We didn't ask you: what was your story, how did you end up with complexity?

Dawn:

I think I saw things that didn't have a name. And, to Marc's point, I was a little frustrated, because I felt like I would say something and people would go, "what the hell are you talking about?" And I'd be like, "it's right there. Can you not see it?" And so, okay, I'll go out and check it out myself and figure it out. And then along the way I met people who I think were also on that journey. So I didn't have a name for it, but I had seen the evidence of it. Now I realize why I couldn't describe it very well: if other people are not experiencing it, aware of it and seeing it, then I might as well be talking to myself. I have always been looking for answers to "why?" So if you ask my friends and my colleagues, I'm an annoying "why" asker: "why, why, why, why, why?" There's a reason, but I'm not satisfied with the prescriptive "it's because." Where's the evidence? "Because I say so" is not evidence. Why is it this way? So, yeah, I think that's my professional journey in a nutshell, and why I wasn't necessarily attached to titles. I would do anything that allowed me to look at that more deeply: okay, let's go for it, I don't care what it's called. And I have a lot of conversations now where people say, "but, you know, you could have been a scientist." It's like, "yes, but it didn't have the answers I was looking for." So I tried something else. And "you could have been a CTO." Yes, but it didn't have the answers. Where I'm at now, I'm able to look at those things, have these conversations and get those answers. Those experiences have informed how I look at things and given me that multidimensionality, which is one of the ways you can think about complexity: it's like a crystal lattice, in 3D. It's intertwined, as opposed to just aligned like magnetic filings. So I think I've been exploring, and I've had the opportunity to explore it. It hasn't always been easy, but yeah, exploring.

Kristof:

But there was also a multiplicity in your career, like this switch-over from science to design. It feels like it's hard to explore complexity if you're sticking to just one area. You need to be combining different things together.

Marc:

You need to experience it differently before you can see. “Ah, okay. So here's this context and here's that context and yes, there is a commonality in the experience.”

Dawn:

Yeah, and then there's also a collective aspect to it. It's not something you can do on your own, I think. In a nutshell, that's been my journey into complexity. I like the phrasing that you and Marc have, it is deliberate. There's nothing accidental or random about it, but it's hard to understand how it all intertwines and comes together.

Kristof:

Yeah. And we need to embrace it and become deliberate about it to make the next step in our evolution as a species.

Dawn:

I have been listening and going through your back catalogue, and thank you for making this space. I know it cannot be easy. It's not necessarily popular, but it's very much appreciated.

Marc:

Very welcome.

Kristof:

Thank you for being part of it, for helping to make this space and for growing it together.
