
Keynote: CTRL+SHIFT+(BUILD) PAUSE

About this video

This session was presented at NDC London 2025.

Copilots are everywhere these days, and… rightfully so! Let’s face it: these tools are incredible at getting things done. They have the potential to turn any one of us into a 20x developer. Need a new feature? Bam, there you have it! Refactor that function? Sure, it’ll be done before you grind the coffee for your next cup. These tools do a very good job of generating well-designed, tested, and performant code. Before you know it, you’re not just building a feature—you’re building 17 slightly different features simultaneously because why not? After all, the code writes itself!

But guess what? Just because you can build it doesn’t mean you should. Without a clear vision, our solutions risk becoming soulless Franken-software, a mishmash of best intentions and uncontrolled enthusiasm that don’t make the user’s life any easier. That’s why we need to remind ourselves that the true art of building great systems is more about what you choose not to do. More than ever, our mission needs to remain crystal clear: crafting lasting, impactful solutions that our users love.

Transcription

00:07 Laila Bougria
Welcome to NDC Oslo. It's amazing to come together here and learn about everything that's been going on in the software industry because, gosh, there has been a lot going on. And we're kind of used to this, right? Things have always moved pretty quickly in the software industry. But since AI has entered the scene, it feels like things have been moving more quickly than ever before, from new models being released every other week, maybe even every week, to completely new tooling being built by players like Microsoft, GitHub, and entirely new ones as well.
00:47 Laila Bougria
And to be honest, as a solutions architect who's focused on facilitating building distributed systems, keeping up with all of these advances has felt quite overwhelming, to say the least. And I've adopted AI tooling like ChatGPT and other models and Copilot into my workflow for quite some time now, and I've seen this tooling get better and better and better with each release.
01:14 Laila Bougria
Now, more recently, this tooling is now adopting what is called agentic AI, where basically we can just use an agent, send it a couple of prompts, and it will just generate a bunch of code for us. So we can just sit back, relax, review all of that code, maybe even ask Copilot to write the commit for us, open the pull request, and take it from there.
01:39 Laila Bougria
But that's not even all because more and more tooling is now supporting what is called vibe coding. Who's heard of that before? Yeah. It's been pretty hard to miss, right?
01:51 Laila Bougria
Well, for those of you who may have missed it, vibe coding is a trend in which we just give in to the vibes, fully embrace the exponentials, and forget that the code even exists. So if something goes wrong, no problem, we'll just ask AI to fix it for us.
02:13 Laila Bougria
And this is gaining massive traction, especially with tools like Lovable and Bolt and Windsurf, and even our trusted VS Code fully embracing this model. So now, we can basically generate entirely new applications and entire new feature sets just based on a couple of user prompts, making AI the pinnacle of productivity in terms of building applications.
02:41 Laila Bougria
AI can significantly boost the amount of code that we produce. And isn't that amazing? Finally, we can close all of those issues that we had waiting in our backlog and that we didn't really have the time to get to. Finally, all of those ideas that we had and wanted to incorporate into our systems, they don't have to wait anymore. And whenever a stakeholder has a new idea, we can just come in and immediately incorporate that because now that we're vibe coding it, the code basically writes itself.
03:15 Laila Bougria
But that begs the question: is code productivity our main goal? I'm a little bit curious, is there any one of you here paid per line of code you produce? No? Okay. Maybe per commit? No. So maybe per PR that you merge? No hands. So you are telling me that you're not even paid per release that you push to production? I can hardly believe it.
03:48 Laila Bougria
Many have tried and failed to measure engineering productivity based on code artifacts alone. And there's a very simple reason for that: there's a very big difference between the activity of coding and the craft of building production-grade software. I'm getting excited. So let me turn this off.
04:09 Laila Bougria
Because a programmer can basically write some code, make that run, and move on. However, a software professional, like all of you, far exceeds that task because we design our software for simplicity, for evolvability, and then we go and architect our systems for reliability, for scale, and for consistency, or we also decide when that would be absolute overkill to do.
04:38 Laila Bougria
Then we go and test our systems for resilience, for performance and for accuracy. And we will also make sure that our system adheres to any applicable compliance rules and that we protect it against breaches. And when we have a new version ready, then we will push that to production without introducing any downtime and while seamlessly upgrading all of the backing infrastructure without anyone even noticing.
05:08 Laila Bougria
And when that new version is live, then we will keep monitoring what's happening in the production environment, gather some feedback, and continuously improve our system. And these are things that we have learned over years and years and years of insights, over 24-hour outages and bugs that we've had to fix over long weekends with nothing but a pot of coffee and some leftover pizza.
05:33 Laila Bougria
But we can also start to see that AI is becoming better and better at these pieces of the puzzle of building complex systems. It's just not particularly great at the entire process, and not to the levels of depth that are required to build the complex systems that we are building today.
05:52 Laila Bougria
But as AI continues to improve at more and more of these puzzle pieces, we become exposed to some very large but not immediately visible risks. And one of those risks is over-reliance. Now, I'm not telling you this based on some anecdotal evidence. There's plenty of studies out there that have already proven this again and again. And if you're curious to read them, look out for my resources at the end.
06:20 Laila Bougria
But the thing is that over-reliance leads to disengagement. And disengagement is in direct conflict with how we learn. So let's take a step back and actually go through how humans learn, because if we want to learn a completely new concept, that basically starts by paying attention. Because think about it, we are exposed to a massive amount of information on a daily basis.
06:47 Laila Bougria
So the only way for us to deal with that is to focus on what we believe matters most and then just ignore all of the rest. And that basically makes attention the prerequisite to learning. But, of course, just paying attention alone isn't enough. It's important also how we engage with that information, because when we are just paying attention, we are in what is called receptive mode in which we assume that all of the information that we're exposed to is valid.
07:16 Laila Bougria
And in that mode, our learning remains limited because to truly fully grasp a completely new concept, we also need to engage with that information more and go and validate whether that information is actually accurate.
07:30 Laila Bougria
So let's say that you're here at the conference and you're now in a session being introduced to a completely new framework that you hadn't heard of before. Active engagement is about the questions that you ask yourself about this, "Oh, what problem does this solve? And do we have that problem in our system? What type of friction are we experiencing today?"
07:54 Laila Bougria
And if we would introduce this thing, would all of that friction be addressed or would we be introducing any additional side effects? And then also when the conference is done, going back to your keyboard and actually testing out all of these hypotheses.
08:11 Laila Bougria
And as we're doing that, sometimes, we will also run into errors. We say this all the time, "Oh, we learned from failure." So whenever you see someone that's shouting at their screen, that's the sound of learning happening in real life. But what is really interesting to me is that we don't only learn from failure. We actually learn from anything that has any element of surprise.
08:36 Laila Bougria
So imagine that you're fixing a bug and you're doing it TDD-style. So you already have a failing test, you're trying to make that green, and you're not quite there yet, but you feel stuck. So you're like, "Let me just quickly run the test to see how far along I am." You run the test and it's green and you're like, "Wait, what? What just happened?"
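Purely to picture that moment, here's a minimal sketch in Python (hypothetical test and names, not from the talk) of a test you were still expecting to be red that turns out to already be green:

```python
# A minimal sketch of that moment (hypothetical test and names, not from the talk).
import unittest

def apply_discount(total: float, is_vip: bool) -> float:
    # The production code you've been poking at: VIP customers get 10% off.
    return round(total * 0.90, 2) if is_vip else total

class DiscountTests(unittest.TestCase):
    def test_vip_customers_get_ten_percent_off(self):
        # The failing test you wrote first and were still expecting to be red...
        self.assertEqual(apply_discount(100.0, is_vip=True), 90.0)

if __name__ == "__main__":
    # ...you run it "just to see how far along you are" -- and it's green.
    # No error anywhere, but the surprise is the cue to go figure out *why*.
    unittest.main()
```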
08:55 Laila Bougria
So then you go into the code and figure out why the thing you just did fixed the issue. Now, in this process, no error happened. But because there was that element of surprise, you are still learning deeply. But, of course, to fully, fully grasp a new concept, we also need to consolidate all of that information. And we humans have a magical process to do this. It's called sleep, one of my favorite activities because, as we sleep, our brains are basically replaying back and forth everything that happened to us throughout the entire day.
09:34 Laila Bougria
And as our brain does that we can basically gain completely new insights and reach entirely new breakthroughs. It's happened to me over and over again where I spend an entire afternoon frustrated at my computer trying to fix an issue, and I'm like, "I can't figure it out." And I'm like, "You know what? I'm just going to go to sleep. It's fine. We'll look at it tomorrow." And then I go and dream up the entire solution, wake up, and I'm like, "Fine. Now, I can go and fix it all over again." Sounds familiar? Yes, many hands.
10:08 Laila Bougria
So the thing is that many studies have already shown that with diminished sleep, the amount of learning that we can do also significantly goes down. Even without studies, we already know this. Now, together, these make up the four stages of learning as described by Stanislas Dehaene, a neurocognitive scientist who wrote the book How We Learn. Highly recommended, by the way.
10:31 Laila Bougria
But you may be asking, "But Laila, this is interesting, but how is this relevant?" Well, the thing is that we have proven time after time that if you put something into our hands that allows us to quickly figure out how to do something, then we will stop trying to learn how to do that on our own.
10:50 Laila Bougria
From pulling out my phone to do just a simple computation, or quickly Googling how long exactly I need to boil a soft-boiled egg, or even relying on my GPS to remember where the hell I parked my car this morning. And this is not something that happens from one day to another. This is a transition that is seemingly unnoticeable until one day our battery dies, the internet connection is flaky, and we are literally lost. Now, what is happening behind the scenes is what we call cognitive offloading.
11:27 Laila Bougria
Basically, the act of delegating brain-intensive tasks to tooling that can do this for us. And to be clear, this is not necessarily a bad thing. It can actually be incredibly helpful, especially so that we can free up some cognitive resources, and that allows us to focus on the more crucial problems that require more of our deep thinking.
11:49 Laila Bougria
However, the conflict that we are starting to see now is that we start to offload more and more crucial tasks to AI tooling. And that, in time, can erode our learning and eventually even our ability to operate without these tools. And these effects are slowly becoming visible in the software industry as well.
12:12 Laila Bougria
GitClear has been reporting on software metrics for quite a few years now. And in 2020, they analyzed all of the code bases that they had access to, and they found that 24% of that code had been refactored, restructured, moved around, and all of these things, whatever refactored code looks like, whereas only 8% of that code had been copy-pasted.
12:38 Laila Bougria
Last year, the results were a little bit different. They ended up analyzing a staggering 211 million lines of code, and they found that only 9% of the code had been refactored, whereas 12% of the code had been copy-pasted. So this means that for the first time ever, the amount of copy-pasted code is exceeding the amount of refactored code.
13:07 Laila Bougria
And this is especially important because it's happening at a time in which they are witnessing an increased adoption of AI tooling. So what does that mean? Well, it means that we are basically engaging with our code less. We are rethinking all of that less. And this is where I want to remind all of us of our practices, from refactoring to principles like YAGNI or KISS or any of those things. Where did they come from? They've emerged from messy code bases that were way too complicated to fit into our cognitive windows.
13:49 Laila Bougria
But with AI tooling, we risk slowly shifting into auto-tabbing any auto-complete proposal from AI without significantly engaging with the consequences of that code, and that risks the butterfly effect where innocent code generation can lead back to the thing that we've been trying to avoid for years, all of these big balls of mud that don't fit into our heads.
14:16 Laila Bougria
So that's why it's important that we're aware of what exactly we are offloading, because the goal of code generation is to gain speed on the seemingly mundane while still relying on engineers to deal with all of those exceptional cases that AI can't solve yet.
14:34 Laila Bougria
But as researchers doing a recent study on critical thinking in the age of AI found, the irony of automation is that we lose all of those opportunities that strengthen our muscle memory, and that leaves us atrophied and completely unprepared when those exceptions do arise.
14:57 Laila Bougria
Now, if this doesn't sound alarming to you, let me be very clear, okay? Think about all of the AI tools and the agents that are currently out there. They're very carefully branded. They're co-pilots, not pilots. Every chat tool, every code completion tool that is based on AI is full of disclaimers, "Please verify my output because I can make mistakes." And the truth is that whether you use the AI tooling or not, at the end of the day, you are the one that's committing the code. You are the one that's approving the PR, and you are the one that's pushing that code to production.
15:40 Laila Bougria
And the hard truth is that you are responsible when failures happen in production, when your database is down or when there's a security breach detected. You're also responsible when those requirements haven't been fulfilled correctly or when code complexity skyrockets just at a time when there's an issue that AI can't fix.
16:05 Laila Bougria
Think about this for a moment. There's currently no company out there saying, "You can trust our AI. Just use it, and we guarantee that it works because if something goes wrong, we'll come and fix it for you." Yeah. That's not what's happening, is it? At the end of the day, we are still responsible. And as long as we are, we also need to ensure that we are maintaining all of those skills and all of that knowledge that is needed to deal with all of those exceptional cases.
16:40 Laila Bougria
Now, at this point, I'm sure you're convinced that I'm some kind of AI contrarian that is telling you to avoid AI tooling like the plague, and that's actually not my intent at all. As I already mentioned, I've incorporated AI tooling into my workflow for quite some time now, and I continuously find completely surprising ways for it to help boost my overall output.
17:01 Laila Bougria
But that's exactly the key. I'm aiming for it to boost my overall output. Code artifacts are just a part of that. With every new tool that we have, the right question to ask ourselves is: how do we use this appropriately? So in that light, I would like to walk you through how I've incorporated AI tooling into my coding workflows today. And that starts with prototyping.
17:28 Laila Bougria
Now, over the years, I've actually adopted a habit of brainstorming over possible solutions long before I touch my keyboard, and I highly recommend that you try it. But sometimes I'm looking at all of those alternatives, and I'm like, "The differences are too subtle for me to make a decision, or there are way too many unknown unknowns."
17:49 Laila Bougria
And this is where AI can be a magnificent tool to quickly go and prototype all of those things so that I can explore those unknowns. But at the end of that process, I will end up undoing all of my changes, closing the PR and throwing all of that code away.
18:07 Laila Bougria
And this is exactly where vibe coding has a place. It can help speed me up. I'm just trying to improve my decision-making. And at the end of it, I will throw away the code. And by the way, this is exactly how Karpathy intended vibe coding: for throwaway weekend projects and prototyping. But, we can't expect people to read entire tweets. That would be absolutely outrageous.
18:34 Laila Bougria
Another use case in which I use AI is as a rubber duck. Like I said, if I'm building a new feature or fixing a bug, I will try to first go at it on my own, but AI can be a magnificent rubber duck to figure out whether there are better ways in which I could have done this or other ways that I hadn't even considered.
18:52 Laila Bougria
And I don't have to book a meeting with AI. So amazing, right? However, just like when you're rubber ducking with a colleague, you wouldn't use that as an opportunity to let them do the work. You don't have to do that with AI. Instead, you can use this as an opportunity to find gaps in your thinking. And like I like to say, be a sponge. Soak up all of that knowledge, all of those insights, and use it as a way to further your own horizon.
19:22 Laila Bougria
Another use case in which I use AI is as a pairing buddy. Now, Copilot can be a great pair programmer, and we've been doing pair programming for years, right? And we know that if we want to do that efficiently, sometimes, you lead. And sometimes you follow, and I tend to keep to that exact practice when I'm using Copilot as a pairing buddy as well.
19:43 Laila Bougria
So 50% of the time, I will turn on agent mode, let it generate the code, give it some user prompts, and all of that. And I'm there looking at the results, trying to see whether this introduces any gaps, whether it could be simplified, whether this would actually work, whether it would hold up under load, and all of those things.
20:02 Laila Bougria
And then the other 50% of the time, I will completely turn off or ignore the agent. I will ignore any auto-complete, and I'm there writing the code by myself so that I can keep exercising that coding muscle. And, of course, this doesn't replace pairing with your actual peers because there are intricate details in the software that an AI tool can't easily extract, technical details or things in the business. So it's still important to keep pairing with your peers who have that shared context with you.
20:39 Laila Bougria
Finally, if I have something that's ready for review, I could also ask Copilot for a review, and this can be helpful, especially if you also supply it with your team's coding guidelines and the engineering practices that you're trying to adhere to. But at the end of the day, it's not really giving me what I would expect from a PR review from one of my peers, but this is where it can be interesting to go into GitHub Copilot and start to interact about that PR and ask it questions.
21:09 Laila Bougria
Are there sufficient tests introduced for this scenario that we have changed here? Could this code be simplified? Is this introducing any duplicate code? How would this perform under load? Find any possible performance gaps that we could start to improve. And that basically helps me take out all of these rough edges. And after that, I will still ping one or probably more of my peers for a thorough review.
21:37 Laila Bougria
Now, these basically shed some light on how I've incorporated AI into my day-to-day coding workflows, and it may appear as if I'm not engaging sufficiently in that cognitive offloading and not fully using all of the capabilities of AI. But I just explained to you how I'm using the tooling. And another very important angle is also where I'm using the tooling.
22:01 Laila Bougria
And the best way I can do that is to introduce you to a concept from domain-driven design. Now, according to domain-driven design, every large system is composed of multiple subdomains. And each individual subdomain can be classified as either a generic subdomain, a supporting subdomain, or a core subdomain. Who's already heard of that? Okay. Many hands.
22:24 Laila Bougria
Well, basically the idea is that the core subdomains are at the center of everything, right? These contain your competitive advantage, what is driving the success of your organization or, like I like to say, where the money is made.
22:38 Laila Bougria
Now, these subdomains evolve often and, therefore, they require most of our attention, and this is exactly where you want to be very careful in that offloading because these are the strategically important parts of your system. So you want to thoroughly understand them and know how they work when things go wrong. Now, you can still use AI tooling to support those activities there in the ways that we just covered, but you want to stay fully in control and entirely in the loop.
23:09 Laila Bougria
Next, we have our generic subdomains. Now, these are considered the solved problems, but we keep on running into them in every system over and over again. You need identity management? Well, I think there are probably a couple of good solutions available over here.
23:25 Laila Bougria
You need an observability solution? Again, there are so many great options out there. You need a way to reliably communicate over messaging? Fine, but please don't reinvent the wheel. And I get it, right? This is where it can be very tempting to use AI tooling to generate that code for you because, at the end of the day, it's a solved problem. There's probably already open source solutions about that also available online.
23:56 Laila Bougria
But think about this for a moment because, for you, this is a generic subdomain, doesn't have any competitive advantage. But these companies that are out there that are offering battle-tested up-to-date solutions with 24/7 support and documentation, they have taken what, for you, is a generic subdomain and they have made it their core subdomain, their competitive advantage. So if it's not your core subdomain, buy. Don't build because it will end up costing you significantly less in the long run.
24:34 Laila Bougria
That brings us to the supporting subdomains. Now, these support the organization's business but don't really provide any competitive advantage, but we end up building these parts manually because there aren't any good generic solutions out there in the market that address those needs that we have.
24:50 Laila Bougria
But these are also usually a bit simpler. Now, this is exactly where it can be much safer to rely on AI tooling more extensively because, think about it, usually these are simple business rules and some CRUD operations. And that makes them much more suitable for code generation to begin with. So this is where you can engage in that cognitive offloading so that you can gain speed and focus on what matters most, your core subdomains.
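To make that split a little more concrete, here is a small, purely illustrative sketch in Python (hypothetical domain and names, not from the talk): the kind of supporting-subdomain CRUD code that is a reasonable candidate for generated code, next to a core-subdomain rule you would still want to own end to end.

```python
# A purely illustrative sketch (hypothetical domain and names, not from the talk).
from __future__ import annotations
from dataclasses import dataclass

# Supporting subdomain: simple business rules and CRUD operations.
# Easy to specify, easy to verify, easy to replace -- a reasonable place
# to lean more heavily on AI code generation.
@dataclass
class SupplierContact:
    supplier_id: str
    email: str

class SupplierContactStore:
    def __init__(self) -> None:
        self._contacts: dict[str, SupplierContact] = {}

    def upsert(self, contact: SupplierContact) -> None:
        self._contacts[contact.supplier_id] = contact

    def get(self, supplier_id: str) -> SupplierContact | None:
        return self._contacts.get(supplier_id)

# Core subdomain: the rule that carries your competitive advantage.
# You can still use AI tooling to support you here, but you want to stay
# fully in the loop: design it, review it, and understand why it behaves
# the way it does when things go wrong.
def quote_price(base_price: float, customer_tier: str, volume: int) -> float:
    tier_discount = {"standard": 0.00, "gold": 0.05, "platinum": 0.10}[customer_tier]
    volume_discount = 0.03 if volume >= 1000 else 0.00
    return round(base_price * (1 - tier_discount - volume_discount), 2)

print(quote_price(100.0, "gold", 1500))  # 92.0
```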
25:23 Laila Bougria
Now, this way of working may seem reasonable, especially to all of you seniors that are listening to me here today. But I still think it is really important that we build up practices and guidelines on how we use AI, how extensively we use AI, and where, because as our trust in these AI tools grows, it will take us more conscious effort and more discipline to keep verifying all of that output, because the build didn't break the last 10 times.
25:56 Laila Bougria
But this begs the question: how does this affect the more inexperienced members of our teams? Because we had the opportunity to learn the old way, right? But juniors, they face a complicated reality because they are entering the industry at a time where there's an overchoice of frameworks and languages and architectural styles, and they hear about silver bullets left and right.
26:24 Laila Bougria
At the same time, they're entering the market at a time where most companies are focused on efficiency and saving money. So junior engineers can quickly become overwhelmed, but that's okay. We have AI to the rescue, right? That makes me sort of think back to when I was a junior engineer fresh off the university shelf. I could read code, but I didn't thoroughly understand it, even though I kept convincing myself that I did.
26:57 Laila Bougria
In my view, reading code is much like reading a language. Let's take a look at the following sentence. I never said she stole my money. Everyone understands this? Yes. But the thing is that the meaning may be completely unclear from reading the sentence alone because the meaning is given by how we speak, the words, the nuances that we place, and what exactly we're emphasizing. Let's run through this.
27:26 Laila Bougria
I never said she stole my money. Someone may have said it, but it wasn't me. I never said she stole my money. I haven't made this claim at any point in time. I never said she stole my money. I may have thought it, implied it, but I never said it. I never said she stole my money. Someone else may have, but it wasn't her. I never said she stole my money. Well, maybe she borrowed it. And this continues. You get the point, right?
28:01 Laila Bougria
The ability to read a language does not equal understanding what is trying to be conveyed by the person who's saying it. And the same is true for our code. Being able to read a code block is not just about understanding that thing, but about also understanding, "Is this introducing any duplication? Could this be simplified in any way? Are we dealing with a leaky abstraction? Is this introducing any accidental coupling or some undesirable complexity that we really don't want to have?"
28:34 Laila Bougria
And this is exactly the muscle memory that we've built up over years and years and years of writing code and fixing both our own and other people's messes. But in today's world of AI, junior engineers risk becoming deprived of this experience. And it's actually much worse because in today's world of AI, junior engineers risk becoming deprived of a job altogether, especially with large corporation CEOs and influential folks making claims that AI can easily replace your junior workforce.
29:15 Laila Bougria
And the impact of these claims is incredibly large because they arrive at a time of great economic uncertainty. And we know that that's tightly coupled with a focus on efficiency and saving money. And this statement that AI can easily replace your junior engineers, that might well be true on a superficial and short-term level.
29:36 Laila Bougria
But think about this. Today's juniors are tomorrow's seniors, and we are already seeing more and more companies reducing and even completely eliminating hiring more inexperienced profiles. So I want to take this opportunity to remind all of you when you go back to your organizations to tell everyone that AI doesn't currently have better ideas than whatever its training data contains. And we need fresh ideas because coming up with creative new solutions is still up to us.
30:14 Laila Bougria
I think it's really important that we welcome and train the next generation of software professionals, that we teach them how to use AI appropriately while still allowing them the space and the time that they need to learn like humans do, with time, with practice, and above all with trial and error.
30:37 Laila Bougria
Now, up until this point, we have been discussing the risk of over-relying on AI tooling, right? But that's not all, because as I've already mentioned, our role in the industry exceeds mere code artifacts because the code that we are paid to build on a daily basis, it's not just code. It's a product. And this product may be serving users that are internal to your organization, or maybe it's an actual product that you sell to users, to a specific segment, or even worldwide. It doesn't really matter. What I'm trying to convey is that at the end of the day, the software that we write is there to solve a business problem.
31:18 Laila Bougria
It's there to address some needs that its users have. And I think it's really important that we reframe our roles and think of ourselves as more than just people who produce code, but rather focus on the fact that we are there to facilitate solving business problems.
31:37 Laila Bougria
So to do that, let me quickly rewind to something like 15 years ago, probably. At the time, I was working as a consultant and I had already worked in quite a couple of organizations. And every time I would come into a new organization, they would introduce me to the software structure, to what we're trying to solve, to the engineering practices, and to the way that they work.
31:58 Laila Bougria
And within a week or two, I would find myself in meetings with business stakeholders in which we were handed the requirements. And, of course, absolutely nothing was written down most of the time. So in the early days of being in an organization like that, it really felt like I was drinking from a fire hose, with terms being thrown around that I had never heard about before.
32:20 Laila Bougria
And at the end of the meeting, I felt like I had many more questions and not that much more clarity. "Oh, you have more questions, Laila? That's totally fine. Feel free to book a meeting with me." "Oh, thanks. That's nice." So off I was again to my computer, opened up Google Calendar only to find something like this, because the reality is that the people who hand us the requirements in most cases aren't doing this as a full-time job.
32:49 Laila Bougria
No. I mean, come on. They're cramming in one, maybe two meetings a week to hand us just enough breadcrumbs so that we can make progress to the next meeting. And to be clear, this is not because they don't care, because look at how much other stuff they have to do, but there's a silver lining because at least they told me what the most important next thing was to focus on.
33:12 Laila Bougria
So off I was again to my keyboard to try to build something. And as I was really trying to build something that didn't just work but was also well-designed and simple and intuitive to our users, I kept running into constraint after constraint and question after question. And as I looked at my keyboard, all I could think of was, "Why? Why does it have to work this way? What is the underlying problem that we're trying to solve? And how does this relate to the last seven features that we shipped in the last release? What could fail from a business perspective? And what is the user then supposed to do?"
33:54 Laila Bougria
From my perspective, I was trying to make sense of what was being asked of me, but me asking questions all the time wasn't always appreciated. Oh, come on, Laila. Can't you just keep to the requirements? Why do you have to keep asking all of these difficult questions?
34:11 Laila Bougria
So sometimes, I questioned myself, maybe it's me. Maybe it'll all make perfect sense in a couple of weeks. But you know what? After two decades in the software industry, I can tell you that the results did not lie because those designs came out overly complex, buggy, and most importantly did not make the user's life any easier.
34:32 Laila Bougria
So one day, I gathered up all my courage, walked into one of the business stakeholders' offices, and I said, "Look, I can keep on delivering feature by feature, but all of these important connections in the domain are being uncovered way too late, and that's causing a lot of complexity in our designs. And then we have to go and refactor, and it's slowing us down. And I truly believe that this could be partially mitigated if we had more access to you, so that we can shorten that feedback loop."
35:00 Laila Bougria
But we were facing a conflict, because I was asking for more of their time and they didn't have any. So after a very long conversation, we settled on a completely new approach in which my team and I moved out of the IT department and into the office where the system users were. And doing that completely reshaped our understanding of the domain, of our users' needs, and the problems that they are facing.
35:25 Laila Bougria
And that led to better software because it allowed us to transition from what were messy, illogical and sometimes counterproductive solutions that didn't look much better than a mishmash of best intentions to solutions that our users loved.
35:42 Laila Bougria
And again, you may be asking, "But Laila, what does this have to do with AI tooling?" Well, think about this for a moment. We find ourselves in a time where these tools are becoming better and better at building things with very little context. And as we are engaging less with that code and we become more detached from that creative process of writing all of those things, we are basically not running into all of those questions anymore. So we let AI fill in those gaps in the requirements.
36:14 Laila Bougria
And I get it, right? We are constantly being encouraged to empty that backlog as quickly as humanly... Sorry, I meant artificially possible. But if we remain stuck in this cycle of control, shift, build, control, shift, build, control, shift build without taking a moment to hit pause, then the solutions that we end up building might quickly become much more messy, much more illogical and orders of magnitude more counterproductive.
36:47 Laila Bougria
I have said this over and over and over again, but our industry is overly solution-oriented where we would thoroughly benefit from becoming more problem-oriented and more domain oriented. Already, today, most engineering teams are producing code way too early in the process, like long before they understand their user needs or have taken the time to appropriately define the problems that they're going to solve. So to be clear, this is not a new problem.
37:17 Laila Bougria
We have had this problem for years, but it's being exacerbated. Over the last decades, we've seen more and more practices emerge that pay more and more attention to these non-technical concerns, from changing the way we think with systems thinking, to focusing on user needs with Wardley Mapping, or even building ubiquitous languages and practicing domain-driven design.
37:42 Laila Bougria
More and more voices are converging around practices that encourage us to slow down and ensure that we're building the right things. But AI is now introducing a tool that has the ability to quickly come up with what appears like working solutions, acting like a requirement fulfilling machine like we've never seen before.
38:06 Laila Bougria
And the most acute problem that we are facing today is that AI will do exactly what we ask for. Just like a genie from a bottle, it will execute on our every command. It's not going to take any time to push back and say, "Hey, have you taken sufficient time to figure out whether this is even the right thing to ask for?" No.
38:30 Laila Bougria
AI does not push back. It does not question what we ask for. It's actually so much worse because it will come in, confirm our biases, and reaffirm how amazing our ideas are. And if you think I'm being sarcastic, just a couple of weeks ago, OpenAI had to roll back their latest GPT-4o update due to sycophancy, a term that honestly I had to look up. I had never heard it before.
38:58 Laila Bougria
They basically described their model as overly flattering and agreeable. And the thing is that as we basically become detached from coding and from that creative process that is involved in doing exactly that, then we are basically not surfacing all of those errors and those surprises that are sparking our learning. And we are basically losing that feedback loop.
39:27 Laila Bougria
Many companies out there are very successful within markets, but there's always one that tends to lead the market. And there's one reason for that. Because they're doing something different, something about their offerings, something about the way that they do business is making them stand out from their competitors.
39:48 Laila Bougria
And in the age of AI, I think it's absolutely essential for us to understand exactly what that context is. We should be chasing it because one very underestimated role of software professionals is to actually reconcile that context, that competitive advantage with the features that we build into our software.
40:11 Laila Bougria
But what I tend to see is actually the opposite where stakeholders will walk up to a team, ask for a new feature, and they face a big, "Nope. Sorry. We really can't do that because that requirement does not fit into our design. We would have to rewrite everything."
40:33 Laila Bougria
But if that's true, then that's our problem, isn't it? Why are we making it theirs? One of the things I've seen is that this sort of behavior indicates that teams are overly focused on the design of their code and insufficiently focused on the business that they're supposed to facilitate to begin with.
40:53 Laila Bougria
And it's important that we think with our stakeholders, because we are the ones bridging the gap between what it is that they need and what is technically possible. And, therefore, it's our responsibility to think beyond what they ask and focus on what they need.
41:09 Laila Bougria
To be clear, I'm not telling you you shouldn't be pushing back on anything. Actually, the opposite is true. And those of you who have worked with me know that. But why we push back matters. Are we pushing back because we don't like what this feature would do to the design of our code, or are we pushing back due to user or business-centric reasons? Because the teams that push back mostly due to user-centric concerns end up crafting software that is much more aligned with the business to begin with. And therefore, when a new requirement comes in, it is much less likely to completely break the existing design.
41:51 Laila Bougria
So how do we get there? Well, to do that, I think we need not to take a couple of steps back, but actually a couple of miles back. I would like you to take a second and ask yourself, "What is your organization's mission?"
42:09 Laila Bougria
Well, many people can describe more or less what their organization does, but very few people understand its mission like what's the reason for its existence? What is it trying to achieve besides making money? At Particular Software, our mission is to make companies better at building, maintaining, and running complex software systems. That's it. That's our mission.
42:37 Laila Bougria
And it's important to understand that why. And you may be thinking, "Oh, come on Laila, I'm not some kind of C-level executive or strategist." But really think about this for a moment. If we want to get better at pushing back for the right reasons, if we want to make better decisions about what features we build, how we build them, and how to design our systems, then we need to have that context, because when we do have this understanding, it allows us to zoom out from seemingly disconnected requirements and see things as part of that big picture.
43:11 Laila Bougria
It also helps us to better understand our users' journey, like where they're coming from, what hurdles they may be facing along the way, and where they are going. It's our responsibility to understand all of those things. So look for that backing story, the one that's hidden behind the requirements, and storify that journey, because doing so also allows us to approach our users from a place of empathy, which allows us to contextualize their needs and uncover, sometimes, completely new use cases that we hadn't thought of and that maybe even our stakeholders hadn't considered before.
43:50 Laila Bougria
Now, you might be thinking, "Well, Laila, we don't do any of this, and we've been doing perfectly fine." And yes, I've seen that too, but what I've also seen in these situations is that what you probably have not yet noticed is that there is someone in your organization, or probably a few people, who are doing this job for you. I call those people glue, people who are constantly running around from meeting to meeting, seem to know everyone, seem to be involved everywhere, but they don't seem like they have much output. I mean, they're not producing code, not closing many issues. And they just seem to be running around.
44:36 Laila Bougria
But what they're actually doing is that they're facilitating the work of others. So if you just felt like everything I just said is irrelevant, then this is why. But this is not an individual's responsibility. More than ever, we need to acknowledge that the software systems that we build are socio-technical, and that means that there are human factors and organizational processes and even some politics that end up deeply intertwined into the systems that we produce, because the only system that we can create is one that is aligned with our view of that system, our view inside our bubble. And that's why we need to look at it from a shared perspective, and we need to have multiple people who care about these things.
45:24 Laila Bougria
And with that diverse thought, we can end up building systems that aren't only technically sound, but also align with real-world needs. Now again, "Laila, how does this connect to AI tooling?" Well, it's simple. It doesn't. This is the thing that AI can't do. This is where AI needs us to reach its full potential, because we are the ones bringing all of those insights about our mission, about the problems we're trying to solve and our users' needs and our domain.
46:01 Laila Bougria
We are the ones bringing that information. Understanding all of that context also enables us to ask better things from AI and to also validate whether that output is actually aligned with all of these concerns, because, remember, prompting is AI's way of letting you know how sloppy your requirements are, because AI will give you exactly what you asked for based on the context that it has, which by default is absolutely no context, nothing, nada.
46:35 Laila Bougria
And we have run into this so many times, where we ask something from an AI tool, we look at the output, and we're like, "Come on, that's not what I asked for." Well, that may well be true, but there's probably a bunch of context that's running in the back of your mind that you haven't transferred.
46:54 Laila Bougria
And what does AI do then? Well, simple. It will uncreatively fill in all of the gaps in those requirements by generating the most probable next token based on its training data. And just like that, you have taken your organization's competitive advantage and thrown it out the window.
47:20 Laila Bougria
By the way, this is also why it's really important when you are sharing this type of context about your mission and your user's problems and all of those things with AI tooling that you disable model learning because you don't want to go and throw out your competitive advantage into a model that anyone else can use.
47:39 Laila Bougria
But what I'm trying to focus on here is that more than ever, this is part of our journey too, understanding more and more that domain and all of those non-technical concerns. And that journey starts with curiosity.
47:55 Laila Bougria
So let me tell you a story about a very young and innocent me. As you can see by my slightly sunburnt face, I used to spend most of my summers in the warmth of Spain and Morocco, and I used to spend entire days with my parents at the beach. And one of those days, I completely wandered off and I was gone for a couple of hours.
48:20 Laila Bougria
So when I came back, my dad was nearly losing it. He was out of his mind, so worried. And I don't remember exactly what I said to him, but I vividly remember how he reacted. So it must have been something that he perceived as incredibly disrespectful. So he got down on eye level, looked at me, and he said, "Laila, be very mindful of how you treat me because the way you treat me today is exactly how your children will treat you later when you are the parent."
48:58 Laila Bougria
I let that sink in for a moment. And then later that evening, I went back to my dad and was like, "Dad, about what you said earlier that my actions will be reflected in my children's actions. I really don't get that." And he's like, "What do you mean?" And I'm like, "Well, according to what you just told me, you must have treated your parents exactly the way that I did or, otherwise, I would've never behaved the way I did."
49:29 Laila Bougria
Two things happened that day. I understood recursion. And without even realizing it, the software engineering me was born. But what's even more important is that I realized that there was value in my curiosity and in asking the hard questions because oh, my dad was irritated. But as I could see from the slight smile that escaped his face, he knew I was right. And this led to much better conversations about respect that I could better fit into my head.
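For the programmers in the room, that argument really is a recursion, and it only works if it bottoms out somewhere. A tiny, tongue-in-cheek sketch in Python (purely illustrative, not from the talk):

```python
# A tongue-in-cheek sketch of the argument (purely illustrative, not from the talk):
# "the way you treat me is how your children will treat you" is a recursive
# relation, and a recursion only makes sense if it bottoms out in a base case.
def how_generation_treats_its_parents(generation: int) -> str:
    if generation == 0:
        # the base case the argument was missing
        return "however the very first generation decided to behave"
    # recursive case: each generation mirrors the one before it
    return how_generation_treats_its_parents(generation - 1)

print(how_generation_treats_its_parents(3))
```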
50:03 Laila Bougria
Luckily, we are all born curious, but the reality of society is that this is also not always encouraged, from our parents who get incredibly overwhelmed by us asking why all the time about everything, and I'm experiencing that with my lovely kids now, to maybe teachers who insufficiently nurtured this trait, or even at the workplace where we're told to stay in our own lane.
50:29 Laila Bougria
So it's actually quite easy to start losing some of that curiosity along the way. And I feel very lucky that I realized early in my career that whenever I sort of let go of that curiosity when I silenced it, that the quality of my work degraded. And I also saw that when I engaged in curiosity, when I was curious also about the non-technical domain and all of those things, that the quality of my work improved significantly.
51:01 Laila Bougria
And this brought me many advantages both during my work day and after. So I used to work at a bank for five years, and I wrote all of the calculations for the mortgage domain. Guess what happened? Shortly after that, we bought a house. And when I went to the bank to negotiate that mortgage, oh, I was a force to be reckoned with because I had insights on the business domain, what was possible and what wasn't possible. So I came out winning.
51:34 Laila Bougria
And you may be thinking, "Well, lucky you, I don't work at a bank." But think about this for a moment. All that business knowledge that is part of your domain may become valuable to you in ways that you can't even anticipate. Think about what could happen if you approach all of that knowledge with curiosity because, at the very least, it will allow you to level up as a software professional by bridging those business to technology gaps.
52:08 Laila Bougria
It will allow you to engage in critical thinking and push back and say, "No, rather this feature than that feature, or we're going to do this this way because of these and these and these reasons." It will also allow you to basically engage with AI tooling in more meaningful ways that will speed you up not only in the short term, but in the long term too.
52:33 Laila Bougria
Now, we're slowly closing in on the end. But before I round up, I want to take a moment to address that big elephant that's been staring at me from the back of the room because I'm sure that you have been asking a question in your mind throughout this entire session. But Laila, what happens if AI becomes great at all of the puzzle pieces of building complex systems?
53:02 Laila Bougria
And I have asked myself this question numerous times and even more so as I was writing this session, as you can imagine. But the thing is that even though that's a really important question to ask and for us to think about, it's also outside of our control.
53:19 Laila Bougria
And what I believe is that right now we are in the transition period. We are basically in the middle of a tunnel racing at extremely high speeds, but we don't know how long the tunnel is, and we also don't know where exactly it will end up. All of these AI tools and backing models continue to be scaled and tuned, and they are improving all the time.
53:43 Laila Bougria
And the race towards AGI and super-intelligence is still going strong, even if we haven't actually defined what that means yet. But we don't know if that will happen or when it will happen. And we therefore definitely cannot predict how this will change everything. Maybe all governments get totally freaked out and they put a lid on it and nobody uses it, or maybe everyone gets access to it and that makes us run into energy or computing limitations.
54:14 Laila Bougria
Or maybe we end up realizing that intelligence isn't artificial at all. We just don't know. And that's why those questions don't matter just quite yet. I would like to take this opportunity as I'm closing off to ground you all in the now, because what history has shown us over and over and over again is that with every great paradigm shift, we tend to overestimate its effect in the short term and completely underestimate it in the long term.
54:45 Laila Bougria
And that's why I encourage all of you to investigate and educate yourself on the capabilities of AI while being very aware of its limitations. Think about how you could be changing your work to incorporate AI tooling into your daily workflows while you are ensuring that you are not falling into those traps with all of those negative side effects.
55:10 Laila Bougria
So as you are here at the conference, go forth, learn about all things technology and AI. And when you're back at your work at your keyboard, continue asking why. Stay curious. Keep on learning. And above all, continue to enjoy the ride.
55:31 Laila Bougria
Now, of course, for all of the curious minds amongst you, I have plenty of resources available for you, from books to studies and research, and interesting blog posts that you can read. Of course, I will also be here throughout the entire conference. You can find me at Particular Software's booth, and definitely come and ask me any questions you have or just come say hi. Enjoy the conference. Thank you so much.