When Rails chose to tie its domain models directly to the underlying database, and built test automation support around that choice, it simplified life for many web developers. Unfortunately, this tight coupling can cause a number of problems, not least of which is that test suites tied to the underlying database can eventually start to drag. I’ve seen teams try to speed up their Rails test suites every which way over the last few years, and while there are no absolute right or wrong answers here, I think these five techniques are, to one degree or another, viable choices. I’ve ordered them roughly from simple to complex, and have listed the risks and benefits of each.
1. Use the database in model tests, stub the database elsewhere
Benefits: Simple, though not very helpful in the long run; eventually your model tests alone will slow you down.
Risks: Very few. You should probably be doing this anyway, if you can.
How to do it: In your non-model code only, stub external calls to model objects to minimize database dependencies.
describe UsersController do
  describe "GET /users" do
    before(:each) do
      User.stub!(:find).and_return [Factory.stub(:user)]
    end
  end
end
2. Stub the object under test
Risks: Stubbing the object under test means that you’re no longer testing the object you think you’re testing. This is a bad idea. It also allows/encourages you to test implementation rather than behavior, which is also a bad idea, because it makes your tests brittle. I’ve used this approach, and don’t recommend it.
How to do it:
existing_user = User.new(:id => 1)
User.stub!(:find).and_return existing_user

# Stubbing a method on the very object under test -- the smell described above.
existing_user.stub!(:potentially_expensive_operation)
existing_user.method_that_uses_potentially_expensive_operation.should == "a bag of chips"
3. Stub database interactions at the driver level
Benefits: Gives you “true” unit tests. No need to risk stubbing the object under test.
Risks: It logically splits the model tests that need database access from the ones that don’t, even though the functionality they test may overlap. I’ve been on a project that did this with a large test suite and I didn’t much value the feedback those tests gave me. ActiveRecord is too tied to the database.
How to do it: See unit_record and more modern friends.
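To make the idea concrete, here is a rough sketch of what driver-level disconnection looks like. This is not unit_record’s actual API; the class names and the stand-in model are hypothetical, written in plain Ruby so the idea is visible without a Rails app. The trick is to swap the connection for an object that raises on any use, so a “unit” test fails loudly the moment it touches the database:

```ruby
# Sketch only: a connection stand-in that refuses all database work.
class DisconnectedConnection
  def method_missing(name, *args)
    raise "Database access attempted in a unit test: ##{name}"
  end
end

# Hypothetical stand-in for an ActiveRecord model, so this runs without Rails.
class User
  def self.connection
    @connection ||= DisconnectedConnection.new
  end

  attr_accessor :first_name, :last_name

  # Pure logic like this is safe to unit test with no database.
  def full_name
    [first_name, last_name].join(" ")
  end

  # Anything that goes through the connection blows up immediately.
  def self.count
    connection.select_value("SELECT COUNT(*) FROM users")
  end
end
```

With this in place, a test of `full_name` passes with no database anywhere, while an accidental call to `User.count` fails with a clear message instead of silently hitting (or waiting on) a real connection.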
4. Use an in-memory database
Either configure your existing database to run in in-memory mode, or use a faster in-memory alternative (for example, SQLite).
Benefits: Not as fast as stubbing interactions, but more robust and easy to experiment with. May only bring slight gains, however.
Risks: Slight behavioral differences between the underlying databases can ruin your tests, and your day. But if you’re not doing anything too implementation-specific then this can work well. You also may need to work a little bit to ensure that you have sufficient resources to run the whole suite in memory.
How to do it: Configure database.yml to hit SQLite in your test environment, or configure your CI server’s MySQL database to store its data on a memory-backed filesystem (tmpfs, for example).
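As an illustration, the test entry in config/database.yml pointed at SQLite’s in-memory mode might look like the following (exact adapter details vary with your Rails and sqlite3 gem versions):

```yaml
test:
  adapter: sqlite3
  database: ":memory:"
```

One caveat worth knowing: an in-memory SQLite database is empty at the start of every connection, so your test setup needs to load the schema at the beginning of each run rather than relying on a previously migrated database file.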
5. Parallelize your tests across multiple processors and machines
This is the most difficult thing to get right, but has potentially the greatest impact on your ability to scale out your test suite for truly quick feedback.
Benefits: Can (and should) be used in combination with some of the above techniques. Fast, no need to modify existing tests. Scales as you add hardware. Aids more than just database-specific tests. This is my preferred approach because it has the least impact on the test suite.
Risks: Potentially difficult to configure initially, and test runs may occasionally fail for funky reasons (network I/O, for example).
How to do it: There are lots of good options here.
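One widely used option is the parallel_tests gem, which splits the suite across one process (and one test database) per core. Roughly, the workflow looks like this; treat it as a sketch, since the exact rake tasks depend on the gem version you install:

```shell
gem install parallel_tests
rake parallel:create    # create one test database per core
rake parallel:prepare   # load the schema into each database
rake parallel:test      # run the suite split across the cores
```

Distributing across multiple machines takes more plumbing (a shared queue or a CI server that can shard the suite), but the per-machine story is the same: isolated databases per worker so the processes don’t trip over each other’s data.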