How we cut LLM inference costs by 35% migrating to Arm-Based AWS Graviton

Cornelius Maroa
September 24, 2025
8 minute read time.

The moment our AI dreams hit financial reality

Vociply AI is a customer support platform startup founded by a five-person team. The team built its solution on Llama 2 7B to deliver high-quality automated customer support responses, and the company was soon processing 50,000 conversations daily. That volume created a critical infrastructure cost challenge that nearly derailed its growth trajectory.

It was 3 AM when I got the Slack notification that would change everything. Our startup's AWS bill had just crossed $2,800 for the month, and we were only halfway through. Sarah, our CTO, messaged the team: "We need to talk. The chatbot is eating our runway."

Six months earlier, we had launched our AI-powered customer support platform. Our users loved it, the response quality was excellent, and we were processing 50,000 conversations daily. However, success came with a brutal reality check. Our x86-based inference infrastructure consumed 40% of our monthly budget.

For a 12-person startup, every dollar mattered. We had two choices. We could dramatically limit our AI features, or we could find a way to make them financially sustainable. Neither option felt acceptable after seeing how much our users relied on the intelligent responses our system provided.

That 3 AM wake-up call forced us to confront a hard truth. Brilliant AI technology means nothing if it bankrupts your company before you can scale.

Why LLM economics matters more than ever

The AI revolution has revealed a critical bottleneck: inference costs can constrain promising startups before they achieve scale. Kruze Consulting, which serves over 175 AI startups, found that AI companies spend roughly twice as much on hosting and compute as traditional SaaS businesses. Even more concerning, compute costs for AI companies grew from 24% of revenue to 50% over a single year. We were not alone in our struggle.

Large language models (LLMs) require significant computational resources. A single Llama 2 7B inference can consume 14GB of memory and substantial CPU cycles; multiply that by thousands of daily requests and costs spiral quickly. Traditional cloud pricing is optimized for x86 architectures and often penalizes AI workloads with their memory-intensive patterns.

The broader implications extend beyond individual companies. As AI becomes ubiquitous, poor infrastructure choices add to massive energy consumption and carbon emissions. Training large models requires substantial electricity consumption. This is a significant operational consideration for infrastructure planning.

For startups like ours, the economics are even more challenging. Established tech giants receive massive cloud credits and volume discounts. We pay full retail prices while operating on limited budgets. Each inefficient architectural choice directly impacts our ability to innovate and compete.

The solution: Embracing Arm's price-performance

[Figure: Daily active users across OpenAI and Hugging Face]

After weeks of research and benchmarking, we decided to migrate. We moved our entire LLM infrastructure from x86 to Arm Neoverse-based AWS Graviton instances.

Our team had zero experience with the architecture. Still, the technical fundamentals were compelling. Graviton3 instances showed up to 40% better price-performance for many workloads, with significantly improved energy efficiency. For memory-intensive LLM inference, this advantage could be transformative.

Our migration strategy

The migration involved four critical components that we had to orchestrate carefully:

  1. Infrastructure overhaul: We rebuilt our entire deployment pipeline to support multi-architecture Docker containers. This enabled seamless deployment across both x86 and Arm64 environments. We also rethought our CI/CD processes and implemented architecture-aware build systems.
  2. Model optimization: Our Llama 2 7B models underwent comprehensive optimization for Arm's architectural strengths. We applied 4-bit quantization, reducing memory footprint from 14GB to 3.8GB. We also used Arm's NEON instruction set through specialized inference engines like llama.cpp.
  3. Kubernetes orchestration: We redesigned our container orchestration to intelligently schedule workloads on Arm-based nodes while maintaining our auto-scaling capabilities. This included implementing node affinity rules and resource optimization strategies tailored to Graviton3's characteristics.
  4. Monitoring and observability: We upgraded our observability stack to provide architecture-specific insights. This allowed us to track performance metrics, cost efficiency, and resource utilization across both x86 and Arm deployments.

Three technical factors drove our decision to migrate to Arm

  1. Cost efficiency: Graviton3 instances offered 20% lower hourly costs with the potential for significantly better performance per dollar. For our memory-intensive LLM workloads, this pricing advantage was amplified by Arm's superior memory bandwidth utilization.
  2. Energy efficiency: With 23% lower power consumption, Arm aligned with our sustainability goals while contributing to operational cost savings. This improvement was not just about being environmentally responsible. It translated directly to lower cooling costs and improved data center economics.
  3. Architecture evolution: Arm's presence continues to grow beyond mobile computing. Apple has transitioned to M-series chips, and increasing cloud provider support suggests that investing in Arm expertise provides long-term technical advantages. We wanted to build expertise in emerging architectures.

The technical transformation journey

Phase 1: Multi-architecture foundation

Our first challenge was rebuilding our deployment infrastructure to support both x86 and Arm64 architectures. This required rethinking everything from base Docker images to dependency management strategies.

We implemented Docker Buildx for multi-platform builds. This ensured our inference services could run identically across architectures. The complexity here was not just technical. It required coordinating build processes, testing pipelines, and deployment strategies across two very different processor architectures.
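
To give a concrete flavor, a minimal multi-architecture build looks roughly like the commands below. The builder name, image tag, and registry are placeholders, not our actual pipeline configuration:

    # Create a Buildx builder that can target multiple platforms
    docker buildx create --name multiarch --use

    # Build and push a single image manifest covering x86_64 and Arm64
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      -t registry.example.com/vociply/inference:latest \
      --push .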

Phase 2: Arm-optimized model pipeline

The heart of our optimization focused on tailoring our Llama 2 7B deployment for Arm's strengths. We replaced our standard Hugging Face Transformers pipeline with llama.cpp. We compiled it with NEON optimizations to leverage Arm's SIMD capabilities.
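
As a rough sketch, building llama.cpp natively on a Graviton3 instance targets the host CPU, which enables the NEON and dot-product code paths without extra flags. Paths are illustrative:

    # Native Release build on an arm64 (Graviton3) host; the CMake build
    # detects the host CPU and enables NEON/SIMD optimizations
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    cmake -B build -DCMAKE_BUILD_TYPE=Release
    cmake --build build -j"$(nproc)"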

Model quantization became crucial for maximizing Graviton3's memory bandwidth. By implementing 4-bit quantization using specialized libraries, we reduced memory usage by 72% while maintaining response quality. This dramatic reduction allowed us to serve more concurrent requests per instance.
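
The quantization itself is a one-time, offline conversion. A sketch of the flow is shown below; model paths are placeholders and tool names vary slightly between llama.cpp releases:

    # Convert the Hugging Face checkpoint to GGUF, then quantize to 4-bit (Q4_K_M)
    python convert_hf_to_gguf.py ./llama-2-7b-hf --outfile ./models/llama-2-7b-f16.gguf
    ./build/bin/llama-quantize ./models/llama-2-7b-f16.gguf \
        ./models/llama-2-7b-q4_k_m.gguf Q4_K_M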

Phase 3: Intelligent orchestration

We redesigned our Kubernetes deployment strategy for heterogeneous architecture support, implementing node selectors and affinity rules that scheduled Arm-optimized workloads on Graviton instances and could fall back to x86 nodes during high-demand periods.
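
A simplified version of that scheduling rule is sketched below. The kubernetes.io/arch labels are standard node labels; the weight and the surrounding deployment are illustrative:

    # Pod spec fragment: prefer arm64 (Graviton) nodes, but allow amd64 nodes
    # as a fallback when no Arm capacity is available
    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["arm64"]
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["arm64", "amd64"]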

The auto-scaling configuration required careful tuning for Arm's performance characteristics. We discovered that Arm instances could handle higher memory utilization more efficiently. This led us to adjust our scaling triggers for optimal cost efficiency.
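
One way to express that scaling-trigger adjustment is a higher memory utilization target on the Arm node groups. A minimal HorizontalPodAutoscaler sketch follows; names and thresholds are illustrative, not our production values:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: llm-inference-arm        # hypothetical deployment name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: llm-inference-arm
      minReplicas: 2
      maxReplicas: 12
      metrics:
      - type: Resource
        resource:
          name: memory
          target:
            type: Utilization
            averageUtilization: 80   # higher target than a typical x86 configuration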

Phase 4: Architecture-aware monitoring

Our observability stack was enhanced with architecture-specific dashboards and alerting. We implemented custom metrics collection for tokens per second, memory bandwidth utilization, and cost per inference, all segmented by processor architecture.
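
A minimal sketch of that metrics collection, assuming a Prometheus-based stack (metric names and the port are illustrative):

    # Per-architecture inference metrics exposed for Prometheus scraping
    import platform
    from prometheus_client import Counter, Histogram, start_http_server

    ARCH = platform.machine()  # "aarch64" on Graviton, "x86_64" on Intel/AMD

    TOKENS = Counter("llm_tokens_generated_total",
                     "Tokens generated", ["arch"])
    LATENCY = Histogram("llm_inference_latency_seconds",
                        "End-to-end inference latency", ["arch"])

    def record_inference(num_tokens: int, seconds: float) -> None:
        TOKENS.labels(arch=ARCH).inc(num_tokens)
        LATENCY.labels(arch=ARCH).observe(seconds)

    if __name__ == "__main__":
        start_http_server(9100)  # scrape endpoint; port is arbitrary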

This granular monitoring proved essential for ongoing optimization. We could identify performance regressions immediately and fine-tune resource allocation based on real production data.

Overcoming critical challenges

The library compatibility maze

Our biggest technical hurdle was Python library compatibility. Many ML libraries lacked precompiled Arm64 wheels, forcing us to build from source or find alternative implementations. This affected everything from PyTorch to specialized quantization libraries.

We solved this by creating a comprehensive compatibility matrix. It documented which libraries had native Arm support and which required custom builds. For critical dependencies without Arm support, we either built from source with appropriate compiler flags or found drop-in replacements optimized for Arm architectures.
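
One quick way to check whether a dependency ships prebuilt Arm64 wheels before committing to a source build is to ask pip for them directly. The package below is just an example; newer projects may tag wheels manylinux_2_28_aarch64 rather than manylinux2014_aarch64:

    # Succeeds only if prebuilt aarch64 wheels exist for the package and its deps
    pip download torch \
      --only-binary=:all: \
      --platform manylinux2014_aarch64 \
      --platform manylinux_2_28_aarch64 \
      --dest /tmp/arm64-wheels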

Initial performance regression

Our first Arm deployment showed 25% slower inference than our optimized x86 setup. This result was demoralizing after weeks of migration work, but it showed us that simply porting code is not enough. True optimization requires understanding architectural differences.

We systematically addressed each performance bottleneck: we switched to Arm-optimized inference engines, applied aggressive quantization, tuned batch sizes for Arm's memory subsystem, and optimized thread utilization for Graviton3's core configuration.
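
On the thread side, Graviton3 vCPUs map to physical cores (there is no SMT), so one worker thread per vCPU is a sensible starting point for the inference engine. A sketch using llama.cpp's server; the binary name, model path, and port are illustrative:

    # One thread per vCPU on Graviton3 (physical cores, no SMT)
    ./build/bin/llama-server \
      -m ./models/llama-2-7b-q4_k_m.gguf \
      -t "$(nproc)" -c 2048 \
      --host 0.0.0.0 --port 8080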

Multi-architecture CI/CD complexity

Building and testing across architectures introduced significant pipeline complexity. Our build times initially doubled. We also faced occasional architecture-specific test failures that were difficult to debug.

We implemented a staged approach: parallel builds for both architectures, architecture-specific test suites, and emulation-based testing for rapid iteration. This required investment in additional CI/CD infrastructure, but it was essential for maintaining deployment confidence.
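
The exact pipeline depends on your CI system. As a hypothetical GitHub Actions-style sketch of the staged approach (action versions, job layout, and image names are assumptions, not our exact configuration):

    # Hypothetical CI job: per-architecture build matrix with QEMU emulation
    jobs:
      build:
        strategy:
          matrix:
            arch: [amd64, arm64]
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: docker/setup-qemu-action@v3      # emulated arm64 on x86 runners
          - uses: docker/setup-buildx-action@v3
          - name: Build image for ${{ matrix.arch }}
            uses: docker/build-push-action@v6
            with:
              platforms: linux/${{ matrix.arch }}
              push: false
              tags: vociply/inference:${{ matrix.arch }}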

Team knowledge gap

Perhaps our biggest challenge was human rather than technical. Our engineering team had extensive x86 optimization experience but limited understanding of Arm architecture nuances. Simple assumptions about performance often did not apply.

We invested heavily in team education. This included dedicated learning sessions on Arm architecture, hands-on experimentation with development instances, and extensive documentation of Arm-specific optimization techniques. Our team used self-service resources such as the AWS Graviton Developer Center, Arm Developer Hub, and AWS Workshops for Graviton to accelerate learning.

We explored Amazon EC2 Graviton-based instances for real-world benchmarking. We also used AWS Graviton Ready Partner solutions to test production scenarios. This structured knowledge transfer was crucial for building in-house expertise and ensuring long-term success.

Impact: Beyond our wildest expectations

Eight months after migration, the results validated our decision and exceeded initial projections.

Financial transformation

Metric                  | x86 (Before) | Graviton3 (After) | Improvement
Hourly instance cost    | $0.34        | $0.272            | ↓ 20%
Monthly infrastructure  | $2,000       | $1,300            | ↓ 35%
Cost per 1,000 requests | $1.33        | $0.87             | ↓ 34.5%
Annual savings          | -            | $8,400            | -

The $700 monthly savings extended our runway by 4 months. It allowed us to invest in AI features. More importantly, it changed our unit economics, enabling more aggressive pricing strategies and higher customer acquisition rates.

Performance breakthrough

Metric          | x86 (Before)           | Graviton3 (After)      | Improvement
Inference speed | 24.3 tokens per second | 28.1 tokens per second | ↑ 15.6%
Average latency | 1.2 seconds            | 1.1 seconds            | ↓ 8.3%
P95 latency     | 2.1 seconds            | 1.9 seconds            | ↓ 9.5%
Memory usage    | 12.8 GB                | 11.2 GB                | ↓ 12.5%
CPU utilization | 72%                    | 68%                    | ↓ 5.6%

These performance improvements were not just technical victories. They translated directly to better user experiences and higher engagement rates.

Operational excellence

Zero downtime migration: Our blue-green deployment strategy ensured a seamless transition without service interruption. This helped maintain user trust during the architectural shift.

Improved monitoring: Arm-native monitoring tools offered better insights into system performance, reducing alert fatigue by 40% and improving our mean time to resolution.

Energy efficiency: The 23% reduction in power consumption aligned with our ESG goals. The power reduction also contributed to cost savings through reduced cooling requirements.

Scalability confidence: Lower unit costs gave us confidence to scale AI features across our product suite. This enabled innovation that would have been too costly on x86.

Business impact that mattered

The technical improvements cascaded into meaningful business outcomes:

User satisfaction: While response quality remained identical, the latency improvements increased user engagement by 12% and reduced abandonment rates during conversations.

Product development: Cost savings enabled us to launch three additional AI-powered features: sentiment analysis, conversation summarization, and multilingual support. We did this without increasing our infrastructure budget.

Investor confidence: The migration demonstrated technical sophistication and cost discipline during our Series A fundraising process, and our improved unit economics directly contributed to a higher valuation.

Competitive positioning: Lower operational costs enabled more aggressive pricing. This helped us win enterprise deals against established competitors with higher infrastructure overhead.

About the author

Cornelius Maroa is an Arm Ambassador and AI Engineer at Vociply AI, specializing in cost-effective LLM deployment strategies.
