
Engineering teams at FinTech and B2B software companies regularly face a SOC 2 audit with a controls spreadsheet containing 80 or more line items written for a security team, not an engineering organization. Nothing in it clearly distinguishes which controls require engineering implementation, which belong to the compliance manager, and which an automated tool like Vanta or Drata handles. That ambiguity produces two failure modes: engineering builds controls the compliance tool would have automated, wasting sprint capacity on unnecessary work; or engineering skips controls only the team can implement, producing audit findings that require expensive remediation.
This guide draws the line clearly: what engineering must own, what can safely be delegated, and where both teams share responsibility.
What SOC 2 Audits Actually Check
SOC 2 is an audit standard developed by the AICPA that evaluates whether a service organization's controls satisfy the Trust Services Criteria (TSC). The five criteria are Security (required for all audits), Availability, Processing Integrity, Confidentiality, and Privacy. The optional criteria are selected based on the services the organization offers. Most software companies include Security and Availability at minimum.
The auditor's job is to verify that controls you say exist actually exist. In a Type 2 audit, the auditor also confirms that controls operated consistently over an observation period of typically six to twelve months. This distinction matters for engineering: Type 1 is a point-in-time snapshot. Type 2 requires sustained, evidence-generating processes built into how the team works every day. Most enterprise customers require Type 2.
What Engineering Teams Must Own
These are the controls that require engineering implementation. No compliance tool or policy document substitutes for them.
Access control architecture. CC6 (Logical and Physical Access Controls) requires demonstrating that access to systems and data is restricted to authorized individuals. Engineering owns the implementation: role-based access controls in applications and infrastructure, privilege separation between environments, service account management, and the mechanisms that generate the access logs auditors will review. A policy stating "we have access controls" without an engineering-implemented system behind it fails the audit.
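The shape of an auditable access check can be sketched in a few lines. This is an illustrative example, not any particular IAM product: the role map, permission names, and log schema are all hypothetical, but the key property is shown — every allow/deny decision emits the structured log entry that CC6 evidence sampling relies on.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
access_log = logging.getLogger("access")

# Hypothetical role-to-permission map; real systems load this from IAM config.
ROLE_PERMISSIONS = {
    "engineer": {"read:logs", "deploy:staging"},
    "sre": {"read:logs", "deploy:staging", "deploy:production"},
}

def check_access(user: str, role: str, permission: str) -> bool:
    """Allow or deny, and emit the structured record auditors will sample."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    access_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "permission": permission,
        "allowed": allowed,
    }))
    return allowed
```

The point of the design is that the decision and its evidence are produced by the same code path, so the access log cannot drift from actual enforcement.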
Logging and monitoring pipelines. CC7 requires evidence of system monitoring. In practice, this means structured logs capturing authentication events, access to sensitive data, configuration changes, and system errors, shipped to a centralized location and retained for the full audit observation period. Engineering builds and maintains this. If the logs do not exist or are not retained, no compliance tool can create them retroactively.
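A minimal sketch of what "structured logs with required fields" means in practice. The field names here are illustrative, not prescribed by SOC 2; the useful pattern is rejecting events that lack the fields an evidence reviewer expects, so gaps surface at write time rather than at audit time.

```python
import json
import sys
from datetime import datetime, timezone

# Fields an evidence reviewer typically expects on every security event.
# This schema is an assumption for illustration, not an AICPA requirement.
REQUIRED_FIELDS = {"ts", "event", "actor", "outcome"}

def emit_event(event: str, actor: str, outcome: str, **extra) -> dict:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,      # e.g. "auth.login", "config.change"
        "actor": actor,
        "outcome": outcome,  # "success" / "failure"
        **extra,
    }
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"event missing required fields: {missing}")
    # In production this line ships to the centralized sink, not stdout.
    sys.stdout.write(json.dumps(record) + "\n")
    return record
```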
CI/CD and change management controls. CC8 requires a formal change management process. For engineering teams, this means pull request requirements enforced at the repository level, automated test gates in the CI pipeline that log their enforcement, and deployment controls that record who deployed what and when. Controls need to be enforced by tooling, not just described in a policy document. If the policy states all changes require two reviewers but your version control history contains single-reviewer merges, the control has a documented deficiency regardless of what the policy says.
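Tool-enforced review requirements can be expressed as a pipeline gate. The sketch below assumes a generic PR-metadata dictionary (the shape is hypothetical, not any specific provider's API): it counts independent approvals, excludes self-approval, and blocks the merge when the two-reviewer policy is not met, producing enforcement evidence in the pipeline log.

```python
# Hypothetical merge gate: the `pr` dict shape is assumed for illustration.
def reviews_satisfied(pr: dict, required_approvals: int = 2) -> bool:
    approvals = {
        r["author"] for r in pr.get("reviews", []) if r.get("state") == "approved"
    }
    # An author approving their own change does not count as independent review.
    approvals.discard(pr.get("author"))
    return len(approvals) >= required_approvals

def gate(pr: dict) -> None:
    """Fail the CI job (and log why) when the review policy is not met."""
    if not reviews_satisfied(pr):
        raise SystemExit(
            f"blocked: PR #{pr['number']} lacks two independent approvals"
        )
```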
Encryption implementation. Data-at-rest and data-in-transit encryption requirements (CC6.1, CC6.7, C1) require engineering to implement TLS, database encryption, and key management correctly. Auditors will ask how keys are managed and rotated, and how access to them is controlled. The answers require engineering design decisions, not documentation.
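On the data-in-transit side, one concrete engineering decision is the minimum TLS version a client will negotiate. A minimal sketch using only the Python standard library, which keeps certificate and hostname verification on and refuses anything older than TLS 1.2:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Client-side TLS context with verification on and a TLS 1.2 floor."""
    ctx = ssl.create_default_context()  # verifies certs and hostnames by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Pinning the floor in code, rather than relying on library defaults, gives the auditor a specific line to point to when asking how transport encryption is enforced.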
Vulnerability management. CC7.1 requires a program to identify and remediate vulnerabilities. Engineering owns the integration of security scanning into the CI pipeline, the process for triaging and tracking CVEs, and evidence of remediation. This includes dependency scanning, container image scanning, and infrastructure configuration scanning (whichever applies to the stack).
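The triage step can be wired into CI as a simple threshold gate. The report shape below is an assumption for illustration (adapt it to your scanner's actual output format): the build fails if any open finding meets or exceeds the configured severity floor, which is itself dated evidence that the control ran.

```python
# Severity ranking is illustrative; align it with your scanner's scale.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def blocking_findings(report: list[dict], threshold: str = "high") -> list[str]:
    """Return IDs of open findings at or above the severity threshold."""
    floor = SEVERITY_RANK[threshold]
    return [
        f["id"] for f in report
        if f.get("status") == "open"
        and SEVERITY_RANK.get(f.get("severity"), -1) >= floor
    ]
```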
What Engineering Can Safely Delegate
Policy documentation. The written security policies (acceptable use, access control policy, incident response policy) are owned by the compliance manager or security team. Engineering provides input on what the team actually does; the compliance team drafts and maintains the formal documentation.
Vendor risk management. CC9.2 requires demonstrating that third-party vendors are assessed for risk. This is a procurement and legal function. Engineering may need to provide a list of vendors and their access levels; the risk assessment, contract review, and vendor questionnaire process is not engineering's responsibility.
HR controls. Background checks, security awareness training, and employee onboarding and offboarding procedures are HR functions. Engineering participates (completing required training, documenting offboarding actions) but does not own the program or the evidence collection around it.
Audit coordination. Managing auditor relationships, scheduling evidence review sessions, and responding to information requests is the compliance manager's responsibility. Engineering provides specific evidence on request; the compliance team manages the audit process.
The Shared Zone: Controls That Require Both Teams
Incident response. Engineering must build the detection and response capability: alerting infrastructure, runbooks, and on-call processes. The compliance team must document the incident response plan and ensure it is tested. A Type 2 audit will ask for evidence of how incidents were handled during the observation period. Both the technical logs and the documented response record are required.
Data classification. The compliance team defines the classification scheme. Engineering implements it: data labels in the codebase, database field tagging, and access controls that correspond to classification levels. The policy without the implementation fails; the implementation without a coherent classification scheme is unauditable.
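The split of responsibilities can be seen in a small sketch: the enum below stands in for the compliance team's classification scheme (the levels and field names are illustrative), while the field map and access check are the engineering implementation that makes the scheme enforceable.

```python
from enum import IntEnum

# Stand-in for the compliance team's classification policy.
class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Field-level tags mirroring the policy in code; field names are hypothetical.
FIELD_CLASSIFICATION = {
    "users.email": Classification.CONFIDENTIAL,
    "users.display_name": Classification.INTERNAL,
}

def may_read(clearance: Classification, field: str) -> bool:
    """Untagged fields default to the most restrictive level (fail closed)."""
    return clearance >= FIELD_CLASSIFICATION.get(field, Classification.RESTRICTED)
```

Defaulting unknown fields to RESTRICTED is the design choice that keeps newly added columns from silently escaping the scheme.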
Business continuity and DR testing. The compliance team documents the recovery time objective (RTO) and recovery point objective (RPO). Engineering designs and tests the technical recovery capability. Type 2 audits require evidence of at least one DR test during the observation period: engineering runs it, the compliance team documents the outcome.
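A DR drill produces auditable evidence when the run itself records timing against the documented objective. A minimal sketch, where `restore_fn` stands in for whatever script actually performs the recovery in your environment:

```python
import time

def run_dr_drill(restore_fn, rto_seconds: float) -> dict:
    """Run the restore procedure, time it, and compare against the RTO."""
    start = time.monotonic()
    restore_fn()
    elapsed = time.monotonic() - start
    return {"elapsed_seconds": elapsed, "within_rto": elapsed <= rto_seconds}
```

The returned record, dated and retained, is the artifact the compliance team attaches to the documented test outcome.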
SOC 2 Type 1 vs. Type 2: What Changes for Engineering
Type 1 assesses whether controls exist at a point in time. Type 2 assesses whether they operated effectively over an observation period and requires dated evidence of consistent operation throughout that period.
Teams that pass a Type 1 audit and then fail Type 2 almost never do so because a control broke; the evidence of consistent operation is missing or incomplete. Access reviews skipped for a quarter, vulnerability scans that did not run for six weeks, deployment logs not retained from a decommissioned environment: all of these create observation gaps that auditors flag as control deficiencies.
This is where engineering analytics becomes directly relevant to compliance. Teams using platforms that track CI pipeline runs, change history, deployment frequency, and incident timelines generate audit-ready evidence as a by-product of how they work, rather than having to reconstruct it retrospectively. For how delivery analytics fits the broader FinTech compliance picture, see the FinTech Engineering Playbook. For FinTech M&A contexts, the engineering due diligence checklist shows how SOC 2 evidence is assessed in acquisition review.
What Compliance Automation Tools Handle (and What They Do Not)
Compliance automation platforms like Vanta, Drata, and Secureframe automate evidence collection from cloud providers (AWS, GCP, Azure), version control systems, HR tools, and common SaaS applications. They map that evidence to SOC 2 controls, track gaps, and manage the audit workflow. For organizations that have implemented controls correctly, they substantially reduce the manual work of an audit.
What they do not do is implement controls. If access controls are not built, the tool has no access log to collect. If the CI pipeline does not enforce change requirements, there is no enforcement evidence to surface. If logging pipelines are not configured, evidence collection has nothing to pull from. The tool is the evidence layer, not the control layer. Engineering owns the control layer regardless of which compliance platform is in use.
Frequently Asked Questions
What is SOC 2 compliance?
SOC 2 compliance means a service organization has implemented controls satisfying the AICPA's Trust Services Criteria and had those controls verified by an independent auditor. Type 1 confirms controls exist at a point in time. Type 2 confirms controls operated consistently over an observation period of six to twelve months. This is the standard most enterprise customers require from software vendors.
What does an engineering team own in a SOC 2 audit?
Engineering owns the technical controls: access control architecture (CC6), logging and monitoring pipelines (CC7), CI/CD change management enforcement (CC8), encryption implementation (CC6.1, CC6.7, C1), and vulnerability management. These cannot be substituted by policy documentation or compliance tooling. They require engineering implementation and generate the evidence that auditors review.
What is the difference between SOC 2 Type 1 and Type 2?
Type 1 is a point-in-time assessment confirming controls exist. Type 2 assesses whether controls operated consistently over a defined observation period (typically six to twelve months) and requires dated evidence throughout. Type 2 is harder to sustain because it exposes gaps in continuity, not just gaps in control design.
Can a tool like Vanta or Drata handle the engineering work?
No. Compliance automation tools automate evidence collection from systems that already have controls implemented. They do not implement controls. If your logging pipeline, access control system, or CI/CD enforcement is not built, the tool has nothing to collect evidence from. Engineering builds the controls; the compliance tool collects the evidence.
How does SOC 2 fit into a FinTech team's compliance obligations?
SOC 2 is one layer of the FinTech compliance stack, typically alongside PCI-DSS (payment data), ISO 27001 (information security management), and regulatory requirements like FCA operational resilience rules. For FinTech teams with AI systems in production, the EU AI Act and DORA governance requirements add logging and explainability controls that align directly with SOC 2 CC7 and CC8. The controls overlap: a well-built access control architecture and logging pipeline serves multiple frameworks simultaneously. For the broader compliance context, see the FinTech Engineering Playbook.
What causes SOC 2 Type 2 audits to fail?
The most common Type 2 failure is not a broken control. It is a gap in evidence of consistent operation. Access reviews skipped for a quarter, vulnerability scans that did not run for several weeks, or deployment logs not retained from a decommissioned environment all create observation gaps. The control design may be correct, but the sustained operation was not documented throughout the audit period.
If your team is preparing for a SOC 2 Type 2 audit, Scrums.com connects to your CI/CD system, version control, and deployment tooling to surface the change history, access analytics, and delivery metrics that Type 2 audits require, generated continuously from the tools your team already uses.
For hands-on support building compliant engineering infrastructure, start a conversation with our team.