It goes almost without saying that scaling an organisation by 10x is difficult. Maksim Grigorjev, a Senior Enterprise Architect at Adform, offered us an inside look at the growth journey of the tech team in the adtech industry.

He answered our questions on topics ranging from working with massive datasets and managing microservices to dividing work within a 200-strong tech team and maintaining a healthy technical community inside a large organisation.

 

🔵 How would you describe Adform as a product to people outside the adtech industry?

Adform is a technology powering the open internet – a modern and effortless digital advertising toolkit that enables publishers to effectively monetize their ad space and advertisers to reach the most relevant audiences.

 

🔵 You have spent almost 12 years at the company. Do you remember why you chose to join in the first place?

From the early days of my professional career, I always enjoyed data-design and data-processing tasks the most. With each subsequent position, I focused more and more on data-related topics, accumulating practical experience as well as building a theoretical foundation from books and papers. The tools and approaches used in data-related tasks differ greatly depending on the size and volume of the data.

Twelve years ago, there weren’t that many companies in Lithuania working with data warehouses of such a massive scale. After the first interview, I already knew that it was a perfect match for my technical area of interest and a great learning opportunity. Time has shown that data is not the only interesting technical challenge that Adform provides, but it was the one that lured me in.

 

 

🔵 During your time at Adform, the organisation has grown rapidly and continues to do so. How much have your business and team grown over the past 10-12 years? How has it felt inside the company?

Currently, I have the privilege to work with ten times more colleagues compared to when I joined the company. Our technical stack has likely grown even more over those years. As one of the first scrum masters in the company, I also saw the company-wide agile transformation first-hand, which was a major cultural shift and a great learning opportunity.

It has been a compelling experience to see the company reimagining itself and adjusting to both the organizational and technical challenges that come with rapid growth. Making mistakes is unavoidable, and not all approaches work out in the end, so you learn to value flexibility, experimentation and not being afraid of failing fast. Rapid growth also means that you need to learn how to onboard new teams and work effectively on the same product in parallel with a much greater capacity.

We also had to change our architecture to reflect the changes in the organisation. For example, moving away from monoliths towards microservices came naturally because, otherwise, dependency management and work parallelization would’ve become too painful and error-prone. We learned the hard way that microservices alone do not give a sufficient productivity boost if they are not supported by powerful internal platforms that take care of cross-cutting concerns in a unified way.

 

🔵 How large is your technical organisation at the moment? How have you divided the work between teams?

Dev & IT is around 200 people now, split into four development groups, an IT department and several supporting teams. Each development group is responsible for a specific horizontal layer of our product (high load, big data, business application and web) and is supported by a dedicated solution architect.

We made a conscious decision to organize our teams around a specific layer or a platform (instead of a product). For example, all our domain APIs are owned by the same development group and all client-facing user interfaces by another group. This approach allows our developers to specialize and excel in a reasonably small set of technologies and find common, consistent and highly reusable solutions to the same technical challenges across all of our products.

The IT department is responsible for our physical infrastructure, internal cloud platform, databases, security and multiple centralized DevOps services (monitoring, logging, deployment pipelines, etc.).

 

🔵 As a senior-level enterprise architect, you are constantly thinking about the big picture. What unique challenges does working in adtech offer to developers?

Adtech as an industry is quite challenging and very fast-paced. It’s still in its early days, very open to innovation, and the product landscape changes every year.

Market participants are constantly finding more and more effective ways to collaborate and provide the end user with the most relevant, optimized and engaging advertising. The majority of market participants are both integration partners and competitors. On the one hand, you need to be well-connected and maintain compliance with industry standards, but at the same time also distinguish yourself from the competition. It is also a highly regulated industry with an emphasis on privacy, which puts a lot of responsibility on our shoulders to keep the data safe, correct and protected against fraud.

All this allows developers to work on a portfolio of products that is not isolated from the outside world but highly integrated with clients, partners, exchanges, vendors and data providers. Industry standards also evolve constantly, and it is not uncommon for our lead engineers to take an active role in the industry working groups shaping them.

 

🔵 Could you give us a sense of the scale of datasets Adform’s products use? What challenges come with it?

We crossed the petabyte-scale threshold a while back, and every day we process tens of billions of new transactions. This scale requires careful consideration in designing how the data is loaded, transported, processed, aggregated and queried. It also adds complexity to testing the pipelines, handling spikes, recovering from failures and ensuring high availability and predictable latencies.

At its core, we use technologies that are well-known in the industry and have a proven track record of performing well in big data environments (Kafka, Hadoop, Storm, Spark, Vertica, Aerospike, etc.), alongside components written in-house for data loading, transformation, aggregation and query generation.

At such a scale, it is not possible to dump all raw, unstructured data in one place and use it as a source for real-time, end-client reporting or for augmenting the user interface with relevant KPIs. Therefore, we invest a lot in cleansing, structuring and pre-processing the data to push as many calculations upfront as possible.

We also carefully design effective aggregates that give the end client sufficient flexibility and predictable querying latencies, while still preserving enough row-level data for offline analysis or asynchronous exports.
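
To make “pushing calculations upfront” concrete, here is a minimal, hypothetical sketch of a batch pre-aggregation job written with PySpark (Spark being one of the technologies mentioned above). The paths, table layout and column names are illustrative assumptions, not Adform’s actual schema or pipeline.

```python
# Hypothetical pre-aggregation sketch: roll up raw ad transactions into a
# daily, campaign-level aggregate that can serve client-facing reports with
# predictable latencies. Paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-campaign-rollup").getOrCreate()

# Raw transaction rows (one row per impression/click event).
raw = spark.read.parquet("/data/raw/transactions")

daily = (
    raw
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "campaign_id")
    .agg(
        F.count("*").alias("impressions"),
        F.sum("clicks").alias("clicks"),
        F.sum("cost").alias("cost"),
    )
)

# The rollup lands in a pre-aggregated store; the raw rows remain available
# for offline analysis and asynchronous exports.
daily.write.mode("overwrite").partitionBy("event_date").parquet("/data/agg/daily_campaign")
```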

 

 

🔵 How many microservices do you have? What’s your approach to developing and maintaining them?

We have several thousand deployable units – the exact number changes as new units are developed and old ones are put on the path to deprecation.

Newer services are all based on the same tech stack, deployed as containers, and utilize central platform services such as monitoring, logging, alerting, deployment pipelines, etc. We aim to offload all repeatable or cross-cutting concerns into centralized platform offerings to make sure we solve each issue once and then apply the solution consistently. We also aim to minimize the boilerplate in new service development and, ideally, have developers focus solely on the business logic.

Our products have always been quite interconnected, so we struggled to find a common approach for data exchange between domain-private data stores and for data synchronization between microservices. As a result, we invested in building an internal data distribution platform, with all business domains exposing their public contracts by default without any change in the service itself. Any consumer interested in a particular dataset can subscribe to the change streams exposed by those domains and build local read models of that data.
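
As a rough illustration of the consuming side of such a change-stream setup, here is a minimal sketch in Python using the kafka-python client and a local SQLite read model. The topic name, event fields and storage choice are assumptions made for the example, not the contracts of the actual internal platform.

```python
# Hypothetical consumer: subscribe to a domain's change stream and maintain a
# local read model of the entities it publishes. Topic, fields and SQLite
# schema are illustrative only.
import json
import sqlite3

from kafka import KafkaConsumer  # kafka-python

conn = sqlite3.connect("campaigns_read_model.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS campaigns (id TEXT PRIMARY KEY, name TEXT, status TEXT)"
)

consumer = KafkaConsumer(
    "campaign-domain.changes",          # hypothetical change-stream topic
    bootstrap_servers="localhost:9092",
    group_id="reporting-read-model",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Each change event carries the latest state of an entity; upserting it keeps
# the local read model current without calling the owning domain's data store.
for event in consumer:
    c = event.value
    conn.execute(
        "INSERT INTO campaigns (id, name, status) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, status = excluded.status",
        (c["id"], c["name"], c["status"]),
    )
    conn.commit()
```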

 

🔵 We have talked about data and back-end infrastructure quite a lot. However, for the end user, a well-designed interface is also a must. How has Adform approached designing web applications?

User experience and effective workflows are indeed very important for end users, as they directly impact their productivity. Quite often, they are the deciding factor between our products and others.

Historically, we struggled with multiple segregated web applications that all looked somewhat similar but always had a slightly different look & feel and inconsistent feature sets. Three years ago, we realised this approach was leading nowhere, so we decided to replace ~80 web applications with one new, modern application rewritten and redesigned from scratch. All user workflows are now based on a single framework and a single library of components, and they follow consistent functional patterns and design stereotypes. It was a highly rewarding project that received positive feedback from our users, gained industry recognition, and eventually won the prestigious Red Dot Design Award.

 

Check out Adform’s open positions on MeetFrank:

View all open positions.

 

🔵 And finally, we heard in a previous interview that Adform people share a special vibe. How would you describe the company culture inside the technical organization?

I think there are a few factors why we have an open, positive and friendly technical community:

  • Even though we have more than one product in our portfolio, all of them are interconnected and often offered as a package to the end client. As a result, all the teams feel that they contribute to a common goal, which eliminates internal competition.
  • Customer success and product organizations work closely with development to get input from the developers and also pass along feedback about new features. This is crucial for the technical organization to feel like a fundamental part of the company and to maintain the feedback loop between effort and results.
  • Using a common stack of technologies allows our developers to contribute to projects, features or incidents outside their direct ownership. This grows developers’ overall domain knowledge, expands social circles and cross-pollinates ideas and best practices across the organisation.
  • We hold regular tech talks where the technical community shares insights, challenges and lessons from the latest component developments.