Over the past decade, the proliferation of software products has created a myriad of APIs, spreadsheets, and exports that every company relies on for actionable business information.
This growth is due to the flood of highly-targeted software as a service (SaaS) and other products that solve specific industry or business unit problems.
For example, manufacturing companies that also offer installation and maintenance used to be able to use a single ERP system. Now they use systems specific to manufacturing (ERP), sales (CRM) and services (time entry, project management, support ticketing, monitoring).
Given this dramatic shift, executing and maintaining successful software integrations has become a highly valuable skill. From our experience executing integrations, there are three key ways to improve the odds of success:
- Simplify
- Minimize dependency
- Make it maintainable
1) Simplify
The easiest way to simplify a software integration is to determine the true business need, focusing only on the data that can add value and make an impact.
Narrow data by reducing the number of fields manipulated, the depth of the data requested, or the number of API calls or file import processes that need to be invoked.
True business requirements should be used to determine the best strategy or strategies for each scenario you face. A good integration weighs…
- How long data can lag — for example, yesterday, 8 hours ago, an hour ago, or 5 minutes ago
- What amount of data change takes place
- The tolerance for inaccurate history
- Limitations on how often the integration process can run
Using the two most common integration methods (APIs and file imports) as examples, there are three typical models: kill and fill (truncate and insert), time window sync, and real-time pull.
Kill and Fill
Kill and fill removes all existing data on the target side and replaces it from the source. The biggest advantage to this model is that it guarantees complete accuracy as of the pull.
However, this method is the most resource and time intensive, which is not usually a good fit for large datasets or datasets that need to be updated frequently.
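The kill-and-fill model can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3 module as the target; the `customers` table and field names are invented for the example:

```python
import sqlite3

def kill_and_fill(conn, source_rows):
    """Replace all target rows with a fresh copy from the source."""
    cur = conn.cursor()
    cur.execute("DELETE FROM customers")  # "kill": truncate the target table
    cur.executemany(
        "INSERT INTO customers (id, name) VALUES (?, ?)",
        source_rows,  # "fill": reload everything from the source
    )
    conn.commit()

# Example: an in-memory target database holding stale data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'stale')")

kill_and_fill(conn, [(1, "Acme"), (2, "Globex")])
rows = conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
print(rows)  # [(1, 'Acme'), (2, 'Globex')]
```

Because every run reloads the full dataset, the cost grows with the size of the source table, which is exactly the trade-off described above.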
Time Window Sync
A time window sync limits the recordset based on a time frame.
For example, a rolling time frame might always pull the last 7 days. If the source system has 10 years of data, it’s easy to see how that limitation would improve performance.
However, if there are changes in the source system data outside of the 7-day period, or if the process errors for 7 days, gaps and inaccuracies in the data will occur.
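A rolling time window boils down to a cutoff comparison. The sketch below is illustrative; the `modified` field name and record shape are assumptions:

```python
import datetime as dt

def window_filter(records, days=7, now=None):
    """Keep only records modified within the last `days` days."""
    now = now or dt.datetime.now(dt.timezone.utc)
    cutoff = now - dt.timedelta(days=days)
    return [r for r in records if r["modified"] >= cutoff]

now = dt.datetime(2024, 1, 10, tzinfo=dt.timezone.utc)
records = [
    {"id": 1, "modified": dt.datetime(2024, 1, 9, tzinfo=dt.timezone.utc)},   # inside the window
    {"id": 2, "modified": dt.datetime(2023, 12, 1, tzinfo=dt.timezone.utc)},  # outside the window
]
recent = window_filter(records, days=7, now=now)
print([r["id"] for r in recent])  # [1]
```

Note that record 2 is silently excluded: any change to it in the source system would never sync, which is the gap risk described above.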
Real-Time Pull
Real-time integrations are normally put in place during an application workflow process to pull (or push) a piece of data as a user is making progress.
This guarantees the most up-to-date data, but will normally require the highest frequency of communication between the source and target.
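In code, a real-time pull is simply a call to the source system at the moment the workflow needs the value. A minimal sketch, where `fetch_credit_limit` stands in for a real API call:

```python
def enrich_order(order, fetch_credit_limit):
    """Pull a live value from the source system at the moment it's needed."""
    order["credit_limit"] = fetch_credit_limit(order["customer_id"])
    return order

# Stand-in for a real API call to the source system
def fake_fetch(customer_id):
    return {42: 10_000}.get(customer_id)

order = enrich_order({"customer_id": 42}, fake_fetch)
print(order)  # {'customer_id': 42, 'credit_limit': 10000}
```

Every workflow step now depends on the source system answering promptly, which is why this model carries the highest communication frequency.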
2) Minimize Dependency
There are a couple of reasons why more dependencies create more opportunities for your software integration to fail.
- You don’t want too many moving parts, because any single one could cause the entire process to fall apart. This is especially troublesome when the dependency is outside your organization’s control.
- Imagine only one of your team members knows all the ins and outs of your integration; what happens if they leave?
To effectively minimize dependency, the most important thing to understand is what impact the integration has on the process as a whole. Far too often, entire workflow processes break down due to an integration for a secondary piece of data that doesn’t affect the outcome of the process.
Start by determining which elements are truly required to fulfill the need. Each of these elements needs an owner responsible for its status. This owner needs to understand the source of the information and have contact information for key personnel who can help troubleshoot and resolve issues in the event of a breakdown.
Situationally, a default or contingency value can be assigned in the event of an integration failure. This is most acceptable for secondary or non-required fields, where a placeholder value such as N/A allows the workflow process to continue without the integration.
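The contingency pattern is a simple try/except around the integration call. A minimal sketch; `broken_fetch` is a stand-in for a failing call to an external system:

```python
def fetch_with_default(fetch, default="N/A"):
    """Return the integrated value, or a contingency value if the call fails."""
    try:
        return fetch()
    except Exception:
        return default  # the workflow continues without the integration

def broken_fetch():
    raise ConnectionError("source system unreachable")

value = fetch_with_default(broken_fetch)
print(value)  # N/A
```

Reserve this pattern for fields that don't affect the outcome of the process; a required field failing silently to N/A would hide a real problem.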
A final method to minimize dependency is to be sure that at least two people (the owner and a secondary) can jump in and answer questions or resolve issues with the integration.
The goal is to prevent any single point of failure. Imagine each member of the integration team winning the lottery—what knowledge would be the most painful for your team to relearn?
3) Make it Maintainable
Reliability and maintainability are especially critical with software integrations where dependencies outside of the organization’s control are the norm.
A near-infinite amount of time could be spent thinking through and handling edge cases in order to prevent issues. However, for the vast majority of companies, this is not a financially viable (or smart) way to approach reliability.
A more cost-effective and achievable goal: a solid setup of notifications and alerts.
By setting up a robust foundation for notifications and alerts, you can quickly react as issues occur. Early detection of failed API calls or imports will allow the integration team to start troubleshooting—oftentimes before the user is even able to report an issue.
A good rule for setting up notifications is to only notify users when action needs to be taken, preventing a flood of emails that simply becomes noise.
Over time, notifications can be added or removed as new notification types are discovered or as existing notifications are no longer fulfilling the “take action” rule.
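The "take action" rule can be encoded as a simple allowlist of event types. The event names below are invented for illustration:

```python
ACTIONABLE = {"import_failed", "auth_expired"}   # events a human must act on
NOISE = {"import_succeeded", "retry_scheduled"}  # log-only events, no email

def should_notify(event_type):
    """Apply the 'take action' rule: alert only when a person must respond."""
    return event_type in ACTIONABLE

events = ["import_succeeded", "import_failed", "retry_scheduled"]
alerts = [e for e in events if should_notify(e)]
print(alerts)  # ['import_failed']
```

Keeping the allowlist in one place makes it easy to promote or demote event types as you learn which notifications actually prompt action.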
Once a notification is received, the next step is quickly ascertaining where the issue exists—what stage of the workflow, what screen, what end user action, etc. This is achieved through a Goldilocks level (not too much, not too little) of logging and breadcrumb trails.
As bugs are reported and fixed, logging should be added to handle those specific areas. To minimize dependency, more than one person needs to be able to respond to a notification and walk through this troubleshooting process.
Another common issue is treating an integration as if it will be the only integration, resulting in many separate pieces of code doing essentially the same thing but being maintained separately.
Any coding should be abstracted so additional integrations are added to the same codebase, leveraging the core of the integration code and guaranteeing consistency of programming languages, deployment models, logging, etc.
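One common way to achieve that abstraction is a shared base class that owns logging and error handling, so each new integration only implements its own data pull. A minimal sketch; the class and method names are illustrative:

```python
import logging

class BaseIntegration:
    """Shared core: every integration inherits the same logging and run wrapper."""
    name = "base"

    def pull(self):
        # Each concrete integration implements its own data retrieval
        raise NotImplementedError

    def run(self):
        log = logging.getLogger(self.name)
        try:
            data = self.pull()
            log.info("pulled %d records", len(data))
            return data
        except Exception:
            log.exception("integration failed")
            raise

class CrmIntegration(BaseIntegration):
    name = "crm"

    def pull(self):
        return [{"id": 1}]  # stand-in for a real CRM API call

records = CrmIntegration().run()
print(len(records))  # 1
```

Adding a second integration means subclassing the same base, which guarantees consistent logging, error handling, and deployment across all of them.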
A good set of living and breathing documentation helps make software more maintainable. Document everything from installation to workflow and troubleshooting. If you're not starting from scratch, simply add to the documentation after each change or fix.
Your documentation will evolve over time without a significant burden upfront.
If it Ain’t Broke, Don’t Fix It
A final consideration: Leverage third-party maintained options as often as possible. For example, most accounting systems now offer integrations to payroll and banking applications.
In these cases, creating integrations from scratch is akin to reinventing the wheel.
Instead, find out if there is an out-of-the-box way to integrate your software without custom work. Taking this first step makes simplifying easier, reduces dependencies to the relationships that already exist, and limits investment in maintenance.