Many businesses struggle with slow pipeline execution, rising cloud infrastructure costs, and difficulty scaling efficiently to meet growing demand.
The Customer
The customer was a mid-sized fintech company operating in the cryptocurrency space, offering financial services through its own trading platform. Their primary focus was facilitating the trading of digital assets directly within their ecosystem.
Their Infrastructure
The customer maintained separate AWS accounts for staging and production environments. The application was actively running in both, serving distinct purposes: staging was used for testing and validation, while production supported live usage by end customers.
The customer ran their infrastructure on AWS using ECS Fargate for container orchestration. For data storage, they relied on a combination of services including Amazon RDS, Amazon DocumentDB (MongoDB-compatible), and Amazon ElastiCache for Redis.
They ran multiple services on Amazon ECS (Fargate) and used AWS Copilot to build their infrastructure.
For CI/CD, they used AWS CodePipeline, CodeBuild, and CodeDeploy to build and deploy their applications.
The Challenge
The immediate challenge was to resolve failing CI/CD pipelines to restore the ability for developers to deploy to production. Following that, a comprehensive analysis of the cloud infrastructure was planned to identify areas for improvement and implement cost optimization strategies.
The pipeline issue was resolved first. It was particularly challenging because AWS Copilot generates large CloudFormation stacks, which are difficult to analyze. After a thorough investigation, we discovered that the Fargate tasks didn’t have enough CPU to bootstrap the application. Once we updated the task definitions and increased the CPU allocation, the immediate issue was resolved.
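Since the services were managed with AWS Copilot, the CPU and memory allocations live in each service's manifest, and Copilot regenerates the task definition from it. A minimal sketch of the kind of change involved (the values here are illustrative, not the customer's actual configuration):

```yaml
# copilot/<service>/manifest.yml — illustrative values only
# Too little CPU caused the application to stall during bootstrap,
# which surfaced as failing deployments in the pipeline.
cpu: 1024     # e.g. raised from 256 (0.25 vCPU) to 1 vCPU
memory: 2048  # memory must be a valid Fargate pairing for the chosen CPU
```

Running `copilot svc deploy` after such a change updates the underlying CloudFormation stack and rolls out new tasks with the larger allocation.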
The customer had additional challenges they wanted us to address:
- Growing infrastructure costs
- Slow pipeline execution (typically 1.5 hours per deployment)
- A request to analyze the existing architecture and improve it where possible
Analysis of Pipelines and Cloud Infrastructure
To perform a full analysis, we needed to review every component of the infrastructure and determine what could be improved. There was also a requirement to use EC2 instead of Fargate. We analyzed the following components:
- Networking and VPC configuration
- CI/CD pipelines
- Data storage
- Migration from ECS Fargate to ECS on EC2
- CloudFront
- AWS Copilot
- Cost optimization
Let’s look at what we found.
The Existing Architecture
To effectively address the customer’s challenges, we first needed to analyze the existing architecture so we could propose improvements. The diagram below illustrates the existing architecture and the recommendations we made.

According to the architecture, the customer was using multiple VPCs that communicated with each other. We couldn’t identify a solid reason for creating a separate VPC for each service, and the customer also didn’t have a clear justification for this approach. Having multiple VPCs increases AWS costs due to the need for separate components like load balancers, NAT gateways, and other networking resources that contribute to higher expenses.
To reduce costs and simplify the architecture, we suggested creating a new VPC and consolidating all resources within it. Additionally, instead of using ECS Fargate to run services, we recommended switching to ECS on EC2, which provides more control and can be more cost-effective in certain scenarios, particularly for their case.
As of March 21, 2025, AWS Copilot did not support provisioning ECS on EC2. Therefore, the new architecture was implemented using Terraform to ensure flexibility and infrastructure-as-code best practices.
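The core of such a Terraform setup is an ECS cluster backed by an EC2 Auto Scaling group through a capacity provider. A simplified sketch, assuming an Auto Scaling group (`aws_autoscaling_group.ecs`) running the ECS-optimized AMI is defined elsewhere; names and values are illustrative:

```hcl
# ECS cluster that will run services on EC2 instances instead of Fargate.
resource "aws_ecs_cluster" "main" {
  name = "platform"
}

# Capacity provider ties the cluster to an EC2 Auto Scaling group.
# Managed scaling lets ECS grow/shrink the ASG based on task demand.
resource "aws_ecs_capacity_provider" "ec2" {
  name = "ec2-capacity"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs.arn

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 80 # keep ~20% headroom on the instances
    }
  }
}

# Attach the capacity provider to the cluster and make it the default,
# so services schedule onto EC2 without per-service configuration.
resource "aws_ecs_cluster_capacity_providers" "main" {
  cluster_name       = aws_ecs_cluster.main.name
  capacity_providers = [aws_ecs_capacity_provider.ec2.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.ec2.name
    weight            = 1
  }
}
```

With managed scaling enabled, ECS adds instances when tasks cannot be placed and drains them when capacity sits idle, which is a large part of what makes ECS on EC2 cost-effective here.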
The improved solution
The newly proposed architecture involves using a single VPC to host all services, deployed using ECS on EC2 instances. A single load balancer, placed in a public subnet, will handle incoming traffic and route it to the appropriate services based on defined conditions.
Services deployed in private subnets can call each other directly through an internal load balancer.
This approach helps eliminate the costs associated with multiple load balancers, NAT gateways, and inter-VPC network traffic by consolidating and deploying all services within a single VPC. Additionally, the customer can deploy new services within this VPC, ensuring easier management, better scalability, and more efficient use of resources.
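Routing multiple services behind a single load balancer comes down to listener rules that match on path (or host) and forward to per-service target groups. A hedged Terraform sketch with hypothetical names (`aws_lb_listener.https`, `aws_lb_target_group.trading` are assumptions, not the customer's identifiers):

```hcl
# One listener rule per service on the shared public ALB.
# Requests matching /api/trading/* are forwarded to the trading service's
# target group; other services get their own rules at different priorities.
resource "aws_lb_listener_rule" "trading_api" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 10

  condition {
    path_pattern {
      values = ["/api/trading/*"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.trading.arn
  }
}
```

Because all services share one ALB (plus one internal ALB for service-to-service calls), adding a new service only requires a new target group and a listener rule, not new networking infrastructure.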
The following diagram depicts the proposed architecture, highlighting the simplified network design and service deployment within a single VPC.

For services, ECS on EC2 is used instead of ECS Fargate.
The migration was completed with minimal interruption. All services — including databases, Redis (ElastiCache), and DynamoDB — were successfully migrated into a single VPC with no data loss.
Auto scaling was implemented for both the ECS cluster and individual services, ensuring better resource utilization and improved application availability under varying workloads.
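Service-level auto scaling in ECS is configured through Application Auto Scaling. A minimal sketch using target tracking on average CPU; the cluster/service names and thresholds are illustrative assumptions:

```hcl
# Register the ECS service's desired count as a scalable target.
resource "aws_appautoscaling_target" "svc" {
  service_namespace  = "ecs"
  resource_id        = "service/platform/trading-api" # hypothetical names
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

# Target-tracking policy: keep average service CPU near 60%.
resource "aws_appautoscaling_policy" "cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.svc.service_namespace
  resource_id        = aws_appautoscaling_target.svc.resource_id
  scalable_dimension = aws_appautoscaling_target.svc.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value = 60
  }
}
```

Combined with the cluster capacity provider's managed scaling, this scales tasks on traffic spikes and scales the underlying EC2 instances to match.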
Pipelines
The customer was initially using AWS CodePipeline, along with CodeBuild and CodeDeploy. After adopting GitHub Enterprise, which offers 50,000 free GitHub Actions minutes, they expressed interest in offloading the associated costs of CodePipeline by leveraging GitHub Actions instead.
We migrated the CI/CD process to GitHub Actions, optimized the container image build process, and improved caching mechanisms — resulting in faster pipeline execution and reduced operational costs.
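A large share of the speedup in setups like this typically comes from caching Docker layers between builds. A simplified GitHub Actions sketch using BuildKit's GitHub Actions cache backend; the registry, tags, and trigger are placeholders, not the customer's actual workflow:

```yaml
# .github/workflows/deploy.yml — illustrative sketch
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # BuildKit builder is required for the GitHub Actions cache backend.
      - uses: docker/setup-buildx-action@v3

      # Reuse unchanged image layers from previous runs instead of
      # rebuilding them, which is where most of the time savings come from.
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: <registry>/app:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

`mode=max` caches intermediate layers as well as the final image, trading cache storage for faster incremental builds.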
Deployment speed improved significantly, dropping from approximately 1.5 hours to just 17.5 minutes on average.
Results and Benefits
After improving the architecture and CI/CD processes, the following achievements were made:
- 35% reduction in infrastructure costs
- 5x faster pipeline execution
- Auto scaling to absorb traffic spikes