Picnic 10 years: 2020 — Sudo pick me a sandwich

Written by Sjoerd Cranen · Jun 17, 2025 · 16 min read


In this edition of the blog series about 10 years of Picnic, we take you back to December 2020, the middle of Picnic’s sixth year. The “FCA” project is in full swing, and with the holidays around the corner, heralding the start of the year in which it all has to come together, things are feeling a bit tense.

FCA is our automated fulfilment centre (FC) in Utrecht. Ever since the beginning of Picnic we had been dreaming of an automated operation, and now it was coming to life. No longer would we have to walk around an FC to find the sandwich to pick for our customer; instead, the sandwich would come to us, assisted by 13 kilometers of glorious conveyor belt.

The name itself, “FCA”, is already a bit of an exception, and in hindsight maybe a sign of things to come. We had believed that FC0 would be our “test FC,” and FC1 would be our first automated operation. By the time we got to opening FC3 — a manual operation — the working title was FCA, and by the time it went live, FC5 had been operational for 6 months.

So in December 2020, we are ready to admit that perhaps our initial target of automating FC1 had been a bit too optimistic. We have spent 1600 work days out of what would eventually turn out to be a 4800 day tech project. The steerco slides are not looking very promising. And along the way, we are finding out that for this special project, we need to make some special adjustments to our thinking:

  1. Maybe the waterfall model was still relevant, whether we liked it or not.
  2. Perhaps monoliths were more useful than we’d given them credit for.
  3. Maybe “small, independent teams” would be less effective on this project.

In this story we’ll explain a bit more about how we handled these topics, and tell you how it all panned out in the end.

An agile waterfall

December’s steerco presentation shows a plan that looks remarkably like a waterfall project. It is divided into 34 milestones, of which 1 has been completed on time, 4 have been completed with delay, and 4 are in progress, planned to complete this year but highlighted in red to indicate the infeasibility of that plan. The remaining 25 milestones have yet to start.

What happened here?

December 2020: not a lot of green in the planning sheet

Picnic has its roots in e-commerce and works in an agile way. The steerco slide shows that a giant 2-year plan was not something we had a lot of experience with; it conflicted with our beliefs about what “good planning” looked like. Ever since the start of Picnic, our systems have been developed iteratively, in weekly or biweekly sprints, with a strong focus on delivering MVPs and adding features as required. This project required a different approach, because it was fundamentally different from anything we had done before.

When we started, we had full focus on our storefront. “The cloud” was still relatively new, as were microservices and schemaless databases. We were doing all those things, we were trying to “move fast and break things”, and we relied on third party products for planning and executing our fulfilment and distribution operations.

As early as year two, we had seen that optimised fulfilment operations were core to our success, and we had decided to build our own warehouse management system (WMS). We were able to do that by taking over features from the third-party product one by one, until we were no longer dependent on it. This approach allowed us to build the WMS in the same agile fashion in which we had built the store, and it bought us some time to truly understand the domain.

This automated warehouse, however, was a very different kettle of fish. Initially, we had expected to buy a package deal consisting of an automated warehouse and a software system to operate it, leaving “only” the integration with our own WMS to our own tech team. However, our experience while developing WMS was that it pays off to have very tailored user interactions, and to tinker with our systems on a weekly basis to find further improvements. Eventually we agreed with our automation partner that it made the most sense for Picnic to build the control system for the automated site, with the automation partner responsible only for the point-to-point transport of crates on the conveyor belts.

This decision inherently gave our project some waterfall characteristics, which, as a company self-identifying as “agile”, took us some time to come to terms with. Whichever way you look at it, in the end, an automated fulfilment centre is a very big machine that implements a sort of production line. Almost every part of it is needed to make sure that you can fulfil a single order consisting of a single banana. Even our MVP, therefore, was a huge project. Moreover, the MVP could not be tested on the real hardware early on, because the software was being built at the same time as the machine it was supposed to control. The proof, therefore, would be in the pudding: the moment we would flip the switch on go-live day, it just had to work.

We therefore decided to borrow some of the good ideas from ye olde waterfall model, and created a 2-year plan to deliver an MVP. We made our requirements engineering and design processes more explicit, to be sure we would cover the full scope and not miss anything essential when going live. We thought carefully about unit testing and component testing, and planned on-site testing and user acceptance testing (UAT). Simulating the physical site allowed us to work in an agile way during the delivery of the MVP, and the delivery of the MVP as a whole can be seen as a huge 2-year first sprint of the (weekly!) iterative improvement we started after go-live.
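To make the simulation idea concrete, here is a minimal, hypothetical sketch; the interface and class names are illustrative, not Picnic’s actual code. The point is that control logic can be written against an abstraction of the conveyor system and exercised in-memory while the real machine is still being built:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical abstraction over the conveyor system; names are illustrative.
interface TransportSystem {
    void send(String crateId, String destination); // hand a crate to the conveyor
    String nextArrival();                          // next crate arriving, or null
}

// In-memory stand-in for the physical site: crates "arrive" instantly,
// letting control logic be tested long before the hardware exists.
class SimulatedTransport implements TransportSystem {
    private final Queue<String> arrivals = new ArrayDeque<>();

    @Override public void send(String crateId, String destination) {
        arrivals.add(crateId + "->" + destination);
    }

    @Override public String nextArrival() {
        return arrivals.poll();
    }
}

public class SimulationDemo {
    public static void main(String[] args) {
        TransportSystem transport = new SimulatedTransport();
        // Control logic under test: route a crate to a pick station.
        transport.send("crate-42", "pick-station-3");
        String arrived = transport.nextArrival();
        if (!"crate-42->pick-station-3".equals(arrived)) {
            throw new AssertionError("unexpected arrival: " + arrived);
        }
        System.out.println("arrived: " + arrived);
    }
}
```

On go-live day, the simulated implementation is swapped for one that talks to the real conveyor controller; the control logic itself stays untouched.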

A modular monolith

Having signed up for such an enormous undertaking, we now faced another challenge: we had never done anything like this at Picnic. So even though after two years of designing and contract negotiations we had a pretty good idea of what the finished site should look like, we could be pretty sure that somewhere in the next 1.5 years, we would make a mistake.

One way to mitigate the risks associated with our lack of experience was to adopt a more rigorous QA process than we had been using so far, and to extend our testing capabilities to include a simulation of the site (blog series: 1, 2, 3). However, neither of those would protect us against mistakes in our software architecture, and with a pretty tight timeline set by the projected opening date in 2022, we could not afford to redesign the whole system halfway through.

WMS had been growing for the past couple of years, and we had already been toying with the idea of splitting it up into smaller services. So just adding more code to WMS did not seem like a good idea. However, going the whole hog by splitting up WMS and adding the functionality needed for controlling the automated site to a newly designed microservices architecture would also be a risky thing to do, as we had already identified a couple of years earlier in a different setting (and although the tech stack had been modernised since then, a lot of the arguments still held up). Such an approach would have two major downsides. For one, it would make the project even more enormous than it already was. But perhaps more importantly, without a thorough understanding of this new domain, we would likely make mistakes in designing the microservices, risking project delays due to endless refactoring.

The result: we decided to build a monolith, with the intention to split it up later.

This allowed us to get the best of both worlds. With only 12 tables at go-live, we had a very compact data model and the ability to write powerful SQL queries across all of it. At the same time, the code structure already pencilled in boundaries (Java APIs) that could later be used to “break off” microservices like the squares of a chocolate bar.

The architecture of our warehouse control system

Of course it is more complicated in real life. The shared database schema allows for all kinds of logic to be implemented that cannot be replicated in a true microservices setup, and after a couple of years of development, such logic is guaranteed to be there. But now, 5 years later, we are splitting up the monolith, and I can say confidently that starting with a monolith allowed us to deliver fast in the early years, and make better choices about how to design our services in recent years.
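A minimal sketch of that “modular monolith” idea, with hypothetical module names (not Picnic’s actual code): two modules live in one codebase and one database, but talk to each other only through a narrow Java interface, so either can later be carved out into its own service behind the same API.

```java
import java.util.HashMap;
import java.util.Map;

// Public API of the hypothetical 'inventory' module: the only entry point
// that other modules are allowed to use.
interface InventoryApi {
    boolean reserve(String sku, int quantity);
}

// Internal implementation: today it is free to use shared in-process state
// (or the shared database schema); later it can become a service.
class InventoryModule implements InventoryApi {
    private final Map<String, Integer> stock = new HashMap<>();

    InventoryModule() { stock.put("banana", 10); }

    @Override public boolean reserve(String sku, int quantity) {
        int available = stock.getOrDefault(sku, 0);
        if (available < quantity) return false;
        stock.put(sku, available - quantity);
        return true;
    }
}

// The 'order' module depends only on the interface, never the implementation,
// so splitting the modules apart does not change this code.
class OrderModule {
    private final InventoryApi inventory;
    OrderModule(InventoryApi inventory) { this.inventory = inventory; }

    String placeOrder(String sku, int quantity) {
        return inventory.reserve(sku, quantity) ? "CONFIRMED" : "OUT_OF_STOCK";
    }
}

public class MonolithDemo {
    public static void main(String[] args) {
        OrderModule orders = new OrderModule(new InventoryModule());
        System.out.println(orders.placeOrder("banana", 1));   // CONFIRMED
        System.out.println(orders.placeOrder("banana", 100)); // OUT_OF_STOCK
    }
}
```

The discipline lies in keeping the interfaces narrow: as long as cross-module calls go through `InventoryApi`, that boundary can later become a network boundary.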

A four-pizza team

Back to December 2020. Only a year earlier, we were bootstrapping the application with 1 frontend developer, 1 backend developer, and 1 QA engineer. Now, the team working on the software for the automated site is 14 strong. At Picnic, we had decided to follow the strategy of two-pizza teams: small, independent teams that support a specific part of the business. But this team did not look like a two-pizza team anymore.

Rapidly growing a team is a challenge in and of itself. Ideally, we would have started with the right-sized team on day one, learned about the site together, and started coding together. However, since all our development power was also needed for ongoing Picnic operations, we had to rely on new hires to grow our team, and hiring inflow is almost never as fast as you would like.

To move faster on short notice, we hired external developers from multiple consultancy agencies. In order to guarantee continuity and to keep the team aligned with the Picnic company culture, we limited the ratio of consultants to internal hires, effectively limiting how fast we could grow. We also organised information sessions about the project for the consultants who would potentially be placed in our team, so we could be relatively sure that there would be a real match, both on content and on cultural fit.

When we got to a team size of about 8, we were faced with a new challenge: this team was getting rather large for a single tech lead and product owner to manage. As with our architecture, we chose a divide-and-conquer approach. At any time we would be developing around 3 new features, while designing the next 3. The tech lead and product owner could no longer specify up front everything that needed to happen, because a large part of the technical design needed quite a bit of research. Therefore, every feature that we wanted to start on next was assigned to a developer in the team, who would lead the design and be the go-to contact if you wanted to know anything about that feature.

By the time the team grew to 14, it already had not one but two tech leads, who were steering the architecture, reviewing designs, and together with the product owner, scoping functionality for features that were not in the design phase yet. The developers in the team were writing not just code, but also specifications and corresponding Jira tickets, which then were prioritised by the product owner, and quality-controlled by the tech leads. And like that, we could keep the train going until go-live, about a year later.

Looking back, I am happy that we spent so much effort on finding the right developers to join the team, even if that did mean losing quite some efficiency due to onboarding during the phase where the team was continuously growing. We found some great people with a real passion for the project, who really enabled us to run the team in the way we did. We did not split up the team until a couple of months after go-live, even though it was hardly a two-pizza team anymore. This allowed us to move fast by allocating developers flexibly to new features, but it was only possible because of a shared passion for the project, a strong team culture and the entrepreneurial mindset of the team members.

A royal beginning to the end of the project

So what happened after December 2020?

Of course, not everything was smooth sailing. We battled scope creep, had tough negotiations with business & operations about which features we had to slim down in order to make the deadline, and we had our share of integration problems that came out of on-site testing and testing against an emulator.

But despite the perhaps dispiriting outlook given by the steerco slides, we ended up fulfilling the first real orders from the new site on February 21st, 2022, and in March, operations were comfortable enough to even invite King Willem-Alexander to perform a pick at the official opening on April 1st (no joke!).

The king picking some orange juice during the official opening, under the watchful eye of Michiel Muller.

For our development team, the opening of the FC came with mixed feelings. On the one hand, it was the culmination of 2 years of hard work, but on the other hand, it meant that things had only just started. This was the time that the bugs we didn’t find in testing would rear their ugly heads, and the time it would become clear where our MVP needed urgent work to scale up operations. In other words: there was no time to relax!

The effort we put in paid off. About 3 years after go-live, the site was outperforming the business case it was originally designed for. One of our integration partners, who have been doing this stuff a lot longer than we have, recently confided to us that they had not believed we would pull it off on this timeline even if we had been using their software, let alone writing our own, with no prior experience. Nothing could make us prouder.


Picnic 10 years: 2020 — Sudo pick me a sandwich was originally published in Picnic Engineering on Medium.
