
The What

One of the challenges with the initial planning of DRM's architecture was that it was caught between two separate phases of our platform's evolution: technically, it tried to solve for modularity while also attempting to prime our data for re-use (as outlined in Part I).

Therefore, while comparable re-use and clean data were always the goal, many future business and product needs were not accounted for, leading to a rigid, tightly coupled system that crossed domain boundaries between our report creation application (Webapp) and DRM.

The question at this point was simple — what were we actually trying to fix?

DRM posed numerous challenges that we decided to solve for:

  1. As mentioned in Refactoring Part I, DRM was inflexible with the feature requests coming from product
  2. No one could tell you what DRM's data model was actually trying to achieve
  3. On the technical front, DRM's frontend was bundled as an NPM library, which tied DRM's releases to Webapp's despite the two being entirely different applications
  4. On the organizational front, the Webapp monolith did not allow teams to work independently, increasing development time and making it commonplace to step on another team's toes

Our work was cut out for us, and we borrowed from Domain Driven Design (DDD) techniques to get moving.

The How

Untangling DRM proved to be a multi-stage process that ultimately gave us a scalable domain model and a decoupled architecture, allowing our team to function independently.

Codifying the Domain Model

Gathering requirements is always a tricky task for any engineer designing a system that must be extended in the near future. When product asked for features that were at odds with DRM's architecture, it was clear that DRM had not been built to suit future business needs, or rather, that the product needed more features than DRM could reliably sustain.

The first step was to understand the state of the world of sales comparables. We held "Event-Storming" sessions that outlined the steps our users took as they worked with Sales Comparables, including the different ways to add, import, and edit them. Event storming has a few tenets that helped us design a more scalable architecture:

  1. It allowed engineers to sit with domain experts, map out the entire existing flow, and understand how our real users utilize the system
  2. It outlined what had previously been built, thereby exposing redundancies and convoluted flows and highlighting where domain boundaries were being crossed

Tracking causes and flows after Event Storming

The Event Storming sessions gave us some valuable insights:

  1. DRM's model had cross-cutting concerns that coupled subdomains which had no business being coupled together
  2. A lot of our business process lives in Salesforce; therefore, key architectural interventions to allow an integration with Salesforce would let us more accurately map how data persists in our platform through major events in the lifecycle of a job

In pure DDD terms, we distilled the user flow and came up with a "rich" Domain Model, as opposed to an anemic one, with which we could confidently deliver the features that product had requested while keeping the model simple to understand and faithful to our user flow. An additional advantage of using entities and value objects the way DDD prescribes is that we were able to achieve close to 100% test coverage for critical portions of the application and over 85% overall.
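To make the distinction concrete, here is a minimal sketch of what a "rich" model looks like in TypeScript. The names (SalePrice, SalesComparable, markVerified) are illustrative stand-ins rather than our actual domain code; the point is that invariants and behavior live on the entities and value objects themselves instead of in service-layer getters and setters.

```typescript
// Value object: immutable, validates itself on construction.
class SalePrice {
  private constructor(readonly amount: number, readonly currency: string) {}

  static of(amount: number, currency = "USD"): SalePrice {
    if (amount <= 0) throw new Error("A sale price must be positive");
    return new SalePrice(amount, currency);
  }

  perSquareFoot(squareFeet: number): number {
    return this.amount / squareFeet;
  }
}

// Entity: identity plus behavior, not just a bag of fields.
class SalesComparable {
  private verified = false;

  constructor(
    readonly id: string,
    private price: SalePrice,
    private readonly squareFeet: number
  ) {}

  // Business rules are expressed as methods on the entity itself.
  adjustPrice(newPrice: SalePrice): void {
    if (this.verified) throw new Error("Verified comparables cannot be edited");
    this.price = newPrice;
  }

  markVerified(): void {
    this.verified = true;
  }

  pricePerSquareFoot(): number {
    return this.price.perSquareFoot(this.squareFeet);
  }
}
```

Because the invariants live inside the model, unit tests can exercise them directly without any framework or database setup, which is a large part of why the coverage numbers above were achievable.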

Given that we were changing DRM's foundation at its core, we decided to rename the service to "CompPlex".

Decoupling the Monolith

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
— Conway’s Law

The monolith was also proving difficult to work with, in part due to our quickly growing engineering team. Getting features through took a lot of time, and while DRM was an attempt to split the codebase into a separate product, it was too tightly coupled and therefore only increased the complexity of releases and feature development.

As mentioned above, the NPM library that was used to serve DRM back to Webapp led to sluggish development cycles and versioning issues in local development. Webapp's dependency on NPM's library system was a major bottleneck to releasing value to users in a timely fashion. Given that our teams work full stack, we still wanted to maintain DRM's paradigm of plugging in UI components whose development cycles we could control.

Our solution to this problem was to use a micro-frontend served as a web component to the monolith (a minimal sketch follows the list below). This allowed us to:

  1. Have independent release cycles
  2. Develop CompPlex specific features locally in a sandbox separate from the monolith
  3. Clearly demarcate the boundary between Webapp and CompPlex as communication was only via events and not through shared components
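
As a rough illustration of that integration point, here is what mounting a micro-frontend as a custom element can look like. The tag name, attribute, and event names below are hypothetical stand-ins, and the sketch assumes the host page loads the micro-frontend bundle separately; it is not our production code.

```typescript
// Inside the micro-frontend bundle: register a custom element that renders
// the comparables UI and communicates with the host only via DOM events.
class CompPlexElement extends HTMLElement {
  connectedCallback(): void {
    const reportId = this.getAttribute("report-id");
    this.innerHTML = `<div>Loading comparables for report ${reportId}…</div>`;
    // ...bootstrap the micro-frontend's own render tree here...
  }

  // The boundary is event-based: nothing inside the monolith imports
  // components from this bundle directly.
  notifyComparableSelected(comparableId: string): void {
    this.dispatchEvent(
      new CustomEvent("comparable-selected", {
        detail: { comparableId },
        bubbles: true,
      })
    );
  }
}

customElements.define("compplex-comparables", CompPlexElement);

// Inside the Webapp monolith: drop the tag into the page and listen for events.
const host = document.createElement("compplex-comparables");
host.setAttribute("report-id", "report-123");
host.addEventListener("comparable-selected", (e) => {
  const { comparableId } = (e as CustomEvent<{ comparableId: string }>).detail;
  console.log("Webapp received selection:", comparableId);
});
document.body.appendChild(host);
```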

A lot more went into answering the more technical questions around building micro-frontends with Web Component architecture, but that's an entirely separate blog post.

The next major step was to figure out how to deliver on product features that Webapp’s original position in the platform’s architecture did not allow us to confidently build.

In the past, when we had any integrations with third-party services, we connected to the external services directly from the Webapp monolith. This was a problem because it created hard dependencies on Webapp that, by extension, "centered" it in the platform. Everything had to go through Webapp to access an integration, even if that integration existed outside the domain of a "report".

An event-based architecture allowed us to decouple these dependencies from our core applications and to model our business processes more accurately. We introduced an event bus and leaned on Serverless technology to maintain Salesforce integrations with both CompPlex and Webapp.
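
As a hedged sketch of what publishing to such a bus can look like, assuming AWS EventBridge as the event bus and a hypothetical "JobSubmitted" event (the bus name and payload shape are illustrative, not our actual configuration):

```typescript
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const eventBridge = new EventBridgeClient({});

// Publish a domain event announcing that a job was submitted to a client.
// Consumers (CompPlex, Webapp) subscribe via rules on the bus instead of
// being called directly, so the publisher has no hard dependency on them.
export async function publishJobSubmitted(jobId: string, reportId: string): Promise<void> {
  await eventBridge.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: "platform-bus", // hypothetical bus name
          Source: "salesforce.integration",
          DetailType: "JobSubmitted",
          Detail: JSON.stringify({ jobId, reportId, submittedAt: new Date().toISOString() }),
        },
      ],
    })
  );
}
```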

A practical example: previously, an "export" marked the end of a report's lifecycle in our tech stack. An "export" produces a Microsoft Word document which an appraiser then edits before passing it off to Bowery reviewers, and these appraisal reports are then sent to clients. Understanding this process was significant on a macro level because it gave us the insight that a comparable is only validated once it has passed review. Initially, Webapp would have forced us to have comparable reviews done by appraiser personas. However, asking for user intervention to mark data as valid or clean was an expensive process change that did not guarantee adoption, as it sat far outside the realm of a Webapp user's flow.

Our solution was to use a status on Salesforce to fire an event announcing to our platform that a "Job" (and its associated report) had been submitted to a client. We then leveraged AWS to update our data in CompPlex without needing to intervene in any Webapp flow, thereby successfully separating domain boundaries and maintaining a source of truth that actually sits in alignment with our business processes.
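
On the consuming side, a small Serverless function can react to that event and update CompPlex. The sketch below assumes an AWS Lambda triggered by a rule on the event bus and a hypothetical CompPlex HTTP endpoint; the names and payloads are illustrative rather than our actual implementation.

```typescript
import type { EventBridgeEvent } from "aws-lambda";

interface JobSubmittedDetail {
  jobId: string;
  reportId: string;
  submittedAt: string;
}

// Triggered by the bus rule matching DetailType "JobSubmitted".
// The handler marks the report's comparables as verified in CompPlex,
// with no Webapp involvement at all.
export const handler = async (
  event: EventBridgeEvent<"JobSubmitted", JobSubmittedDetail>
): Promise<void> => {
  const { reportId } = event.detail;

  // Hypothetical CompPlex endpoint; in practice this could also be an
  // internal API call or a direct database update.
  const response = await fetch(
    `https://compplex.internal/reports/${reportId}/verify-comparables`,
    { method: "POST" }
  );

  if (!response.ok) {
    throw new Error(`CompPlex update failed for report ${reportId}: ${response.status}`);
  }
};
```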

Platform's Event-Based Architecture

How did we do?

All in all, the refactor took us a couple of months, which is roughly what delivering the originally requested features in DRM would have taken. We folded the development of those features into the CompPlex refactor, and they were deployed with the first release of CompPlex.

Since then we have incorporated multiple comparable types into CompPlex using the original blueprint that we set up for Jobs and Sales Transactions. The DDD research significantly reduced the risk of extending the domain model with new comparable types, as we front-loaded a lot of the domain work and most of it proved transferable.
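For a rough sense of what that extension looks like, a new comparable type can be added by reusing the same entity and value-object blueprint. The names below, such as LeaseComparable, are hypothetical illustrations rather than our actual code.

```typescript
// Shared contract established by the original blueprint.
interface Comparable {
  readonly id: string;
  markVerified(): void;
}

// A hypothetical new comparable type slots into the same model:
// its own invariants, but the same lifecycle and boundaries.
class LeaseComparable implements Comparable {
  private verified = false;

  constructor(
    readonly id: string,
    private readonly annualRentPerSquareFoot: number
  ) {
    if (annualRentPerSquareFoot <= 0) {
      throw new Error("Rent must be positive");
    }
  }

  markVerified(): void {
    this.verified = true;
  }
}
```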

Part III elaborates on the results of this project.
