About the Author
Rory MacDonald is a founder and director of Made. Made is a team of software experts who are passionate about delivering well-crafted, mission-critical software. The team works with organizations across many sectors who share their drive to produce standout, commercially successful work.
“Does Spree Commerce scale?”
“How many products can Spree handle?”
“How many concurrent users can Spree support?”
“What is the most popular Spree Storefront?”
These are some of the questions we’re asked when discussing Spree Commerce with new customers. You’ll notice most are about ‘scalability’: whether Spree can cope with high levels of traffic or large product catalogs. These questions are understandable, and exactly the sort you should be asking about a relatively new platform.
I hope this article provides some reassurance that Spree is more than capable of scaling to very high traffic volumes. We’ll demonstrate the approach we take to scalability, and what you should be doing to understand your system’s constraints and the points at which you’re going to need additional capacity.
When people talk about scalability, they tend to be referring to one of two things: a) throughput or b) product catalog size. In this post, we’re going to focus on throughput.
What is Throughput?
Throughput is the number of requests your application can serve in a given time period. The higher this value, the more scalable your application is.
Your application’s throughput is likely to vary between pages (as functionality and resource requirements differ), and it will be constrained by the compute resources you have available, such as your server type, server size, or the number of machines within a load-balanced cluster.
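As a minimal illustration of the definition above, throughput is simply requests served divided by the time taken to serve them:

```ruby
# Throughput: requests served per unit time. A store that serves
# 4,000 requests in 50 seconds has a throughput of 80 requests/second.
def throughput(requests, seconds)
  requests.to_f / seconds
end

throughput(4000, 50)  # => 80.0 requests per second
```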
How Does it Affect Scalability?
The scalability of your application is directly linked to its throughput, as the more requests your application can serve, the more scalable your system is going to be and the fewer compute resources you’re going to need.
We’ve found the best way to get an accurate picture of your Spree scalability is to run volume tests. Volume tests are a technique we use to simulate large numbers of users accessing the store. They provide a realistic measure of how the store will perform under significant load.
To run volume tests, you need to set up a server environment that mimics the resource constraints of your production environment. These constraints vary between hosting environments, so it’s very important to benchmark on the exact same configuration you’ll be using in production, or the results will be of no use.
Before running the volume test, you need to define a number of scenarios that mimic what your users do on the site. They could be something like:
- Visit Homepage
- Visit Product Listings
- Add to Basket
- Add Coupon Code
- Enter Email Address
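As a sketch, a journey like the one above can be expressed as an ordered list of HTTP steps. The paths here are hypothetical placeholders, not real Spree routes, and the request block is injectable so the scenario can be dry-run without a live store:

```ruby
# A user-journey scenario as an ordered list of HTTP steps. The paths
# below are hypothetical placeholders -- substitute your store's routes.
JOURNEY = [
  [:get,  "/"],                # Visit Homepage
  [:get,  "/products"],        # Visit Product Listings
  [:post, "/cart/add"],        # Add to Basket (hypothetical path)
  [:post, "/cart/coupon"],     # Add Coupon Code (hypothetical path)
  [:post, "/checkout/email"],  # Enter Email Address (hypothetical path)
]

# Replays the journey through an injectable request block, so the same
# scenario can run against a live storefront or a stub during a dry run.
def run_journey(journey, &request)
  journey.each { |method, path| request.call(method, path) }
end

# Dry run: record the requests instead of sending them.
log = []
run_journey(JOURNEY) { |method, path| log << "#{method.to_s.upcase} #{path}" }
```

A real test would swap the recording block for one that performs the requests (e.g. via Net::HTTP) against your staging storefront.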
It’s important to bear in mind that different pages on the site will have different throughput, so testing only your highest-throughput pages will give an incomplete picture.
You can use tools like New Relic and Google Analytics to get an idea of the throughput of each page, and the user journeys customers take.
Once you have these defined, you should write them in a format that a volume testing tool can consume. We’ve used BlazeMeter and LoadImpact to volume test Spree in the past, but other tools are available.
Running the Volume Test
Next you need to run the test. You should define the number of Virtual Users (VUs) you want to concurrently access the storefront and the period of time you want the test to run for.
We tend to start with ~50 concurrent VUs for 5 minutes and increase from there. As you increase the number of concurrent users, you should be looking for your application’s performance to remain fairly consistent. If you see your response times increase, this is a sign that optimizations need to be made.
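One way to structure that ramp is a series of fixed-length stages. The doubling pattern below is an assumption for illustration, not a prescribed methodology; any gradual increase works:

```ruby
# A sketch of a ramp-up plan: start at 50 concurrent VUs for 5 minutes
# and double the VU count each stage. The doubling factor and stage
# count are illustrative assumptions.
def ramp_stages(start_vus: 50, stages: 4, stage_minutes: 5)
  (0...stages).map { |i| { vus: start_vus * 2**i, minutes: stage_minutes } }
end

ramp_stages  # stages of 50, 100, 200, and 400 VUs, 5 minutes each
```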
In the benchmarks that we ran, we deployed the standard Spree 2.4 storefront within a load-balanced AWS environment: two large Amazon EC2 instances, each running 14 Unicorn workers, backed by a single large AWS RDS instance. This setup scaled out to approximately 4,800 requests per minute and 30,000 orders per day.
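As a sanity check, those figures line up with a back-of-the-envelope capacity estimate (throughput ≈ workers ÷ average response time). The 350 ms response time below is an assumed figure chosen for illustration, not a measurement from the benchmark:

```ruby
# Rough capacity estimate for the benchmark setup above.
workers        = 2 * 14  # two EC2 instances x 14 Unicorn workers each
avg_response_s = 0.35    # ASSUMED average response time, in seconds

rps = workers / avg_response_s  # requests/second the worker pool can sustain
rpm = (rps * 60).round          # ~4800 requests per minute
```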
To view this blog in its original format, visit the blog of Made.