assert_select_rjs reloaded!

If you ever dared to unit-test a Rails RJS action, for example something like this:
def my_ajax_action
  ...
  render(:update) do |page|
    page.replace_html 'shoppinglist', :partial => 'cart'
    page.replace_html 'items', :partial => 'layouts/items', :locals => { :cart => @cart }
  end
end
then you may already know and use the assert_select_rjs testing helper, which verifies the structure of your RJS response.

This assertion can really help you shorten the TDD feedback loop in an AJAX-based Rails webapp, and you may even become confident enough to drop one or two brittle Selenium tests.

The only problem with assert_select_rjs is that it is (IMHO) poorly documented and hard to google about.
So this is my turn to give back what we discovered.

If your Rails webapp uses jQuery as its JavaScript framework, you may have a hard time using assert_select_rjs correctly. Here is why:

For jQuery, this is the correct way to use assert_select_rjs:
assert_select_rjs :replace_html, '#shoppinglist'
The ‘#’ prefix here, referring to a DOM element ID, is important: the notation without ‘#’ works only if your app uses Prototype.
Another nice thing to know is how to make assertions on the selection matched by assert_select_rjs.
For example, this code
assert_select_rjs :replace_html, '#shoppinglist' do
    assert_select '#shipping_cost_description', /Shipping costs for France/
    assert_select '#shipping_cost_value', /€ 12,30/
end
verifies that the content replaced inside the ‘shoppinglist’ element matches the two following assertions.

My first test using webdriver (aka Selenium 2.0)!

As many say, a good solution to selenese flu is Webdriver (see more at http://code.google.com/p/selenium).

Webdriver has been accepted by the Selenium guys as the new approach to web application testing, as opposed to the classical “Selenium 1.0” approach, based on a JavaScript driver, which suffers from way too many issues.
Unfortunately Selenium 2.0, which plans to fully support Webdriver, is still in alpha, and it is quite difficult to find Ruby-based web testing tools supporting this alpha version of Selenium 2.0.
One such tool is Watir (though Webrat is also planning to support Selenium 2.0 sooner or later); more precisely, the watir-webdriver project is stable enough to allow a first test drive.

So this is what I did:

First: install the required gems

  sudo gem install selenium-webdriver
  sudo gem install watir-webdriver --pre

Second: configure the Rails test environment to use Watir

config/environments/test.rb
  ...
  config.gem "watir-webdriver"
  ...
test/test_helper.rb
  require 'test_help'
  ...
  require 'watir-webdriver'
  ...

Third: write a test

test/integration/paypal_integration_test.rb
require 'test_helper'

class PaypalIntegrationTest < ActionController::IntegrationTest
  include LocaleHelper
  self.use_transactional_fixtures = false

  def setup
    ... some setup stuff here ...   
    @browser = Watir::Browser.new(:firefox)
  end

  def teardown
    @browser.close
  end

  test "something interesting" do
    @browser.goto "https://developer.paypal.com/"
    @browser.text_field(:name, "login_email").set "my_test_account@sourcesense.com"
    @browser.text_field(:name, "login_password").set "mysecret"
    @browser.button(:name, "submit").click

    @browser.goto "https://localhost"

    @browser.link(:id, 'loginlink').click
    @browser.text_field(:name, "email").set @user.email
    @browser.text_field(:name, "password").set @user.password
    @browser.button(:text, "Login").click

    # add_a_product_to_cart
    product = Factory(:product, :code => "a code", :categories => [@juve_store])
    Factory(:product_variant, :code => "03", :availability => 99, :product => product)
    @browser.goto "https://localhost/frontend/products/show/#{product.id}"
    @browser.button(:id, "add_to_cart").click

    @browser.link(:text, "Checkout").click
    @browser.link(:id, "gotobuy").click

    # choose "Paypal"
    @browser.radios.last.set

    @browser.link(:id, "gotobuy").click

    sleep 5
    assert @browser.text.include?("Payment for order #{last_order_number()}")

    @browser.text_field(:name, "login_email").set "my_test_buyer@sourcesense.com"
    @browser.text_field(:name, "login_password").set "yetanothersecrethere"
    @browser.button(:text, "Accedi").click
    @browser.button(:text, "Paga ora").click

    sleep 5
    assert @browser.text.include?("Il pagamento è stato inviato")

    @browser.button(:id, "merchantReturn").click
    assert_contain_waiting("Your purchase")
    assert_contain_waiting(last_order_number())

  end

private

  def last_order_number
    Order.last ? Order.last.number : ""
  end

end

Some comments here:

  • This is a spike, so please don’t tell me this test is too long and poorly refactored.
  • I had to put sleep calls in two places (I have to say that this specific test, involving the PayPal sandbox, is really slow due to PayPal’s response times).
  • Anyway, this alpha version of Webdriver is still lacking in places: I cannot say whether these problems will persist in future (hopefully more stable) versions of Webdriver.
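The two sleep calls above could be replaced by a small polling helper that retries a condition until a timeout expires. This is just a sketch; wait_until is a name I made up, not part of Watir's API:

```ruby
# Hypothetical polling helper: retries the given block until it returns a
# truthy value, or raises once the timeout expires, instead of sleeping a
# fixed 5 seconds regardless of how fast the page actually responds.
def wait_until(timeout = 10, interval = 0.5)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "condition not met within #{timeout} seconds" if Time.now >= deadline
    sleep interval
  end
end

# In the test above, instead of `sleep 5` followed by an assert:
#   wait_until { @browser.text.include?("Payment for order #{last_order_number()}") }
```

This way a slow PayPal sandbox response only costs as much time as it actually takes, and a fast one doesn't force the test to wait the full five seconds.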


One (and a half) useful thing to know when using DeepTest gem with MySQL

DeepTest currently won’t work if you’ve configured MySQL with no password (in other words, if you are able to connect to mysql with a simple “mysql -u root”).
To fix this you have to patch DeepTest (I know; as soon as possible I’ll go through the whole process of proposing the patch to the original project leader).
Actually, you just have to comment out one line in the DeepTest::Database::MysqlSetupListener#grant_privileges method:

...
def grant_privileges(connection)
  sql = %{grant all on #{worker_database}.* to %s@'localhost';} % [
    connection.quote(worker_database_config[:username]) # ,
    # connection.quote(worker_database_config[:password])  <-- mysql with no password won't work
  ]
  connection.execute sql
end
...

Another tip (the “half” in the blog post title):
Don’t forget to edit the “pattern” option in your DeepTest rake task, so that it grabs all the test cases you want.
In my case I want to skip a whole folder containing Selenium tests, so I wrote my DeepTest rake task this way:
(in /lib/tasks/test.rake)

require "deep_test/rake_tasks"
...

DeepTest::TestTask.new "deep" do |t|
  t.number_of_workers = 2
  t.pattern = "test/{unit,functional,integration}/**/*_test.rb"
  t.libs << "test"
  t.worker_listener = "DeepTest::Database::MysqlSetupListener"
end
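Since the pattern is an ordinary Ruby glob, you can sanity-check which files it picks up with Dir.glob before wiring it into the rake task. A small sketch (the helper name and folder layout are just examples):

```ruby
# The brace pattern only descends into the folders listed inside {...},
# so a test/selenium folder is simply never matched.
def matched_tests(root, pattern = "test/{unit,functional,integration}/**/*_test.rb")
  Dir.glob(File.join(root, pattern))
end
```

Running this against your project root should list every unit, functional and integration test while silently skipping the Selenium folder.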

Ruby: how to spot slow tests in your test suite

This is actually my first post in english and also my first post on Ruby/Rails stuff. Twice as hard!

Anyway, we’re working on a Rails project, and we’re having the classical debate of every Rails project (at least the ones with tests!): why is our test suite so damn slow?!
OK, we know that ActiveRecord is one of the key components of Rails and is at the root of its philosophy of web development. And along with ActiveRecord comes the tight coupling between the model and the database. So every test, even the unit tests, touches the database (OK, technically speaking they cannot be called unit tests, I know; sorry, Michael Feathers, for betraying your definition).
The very first consequence of this approach is that as your test suite grows with your project, it will become slower and slower.

Let’s take our current project. This is our actual test suite composition:

  • Unit: 317 tests, 803 assertions
  • Functional: 245 tests, 686 assertions
  • Integration: 50 tests, 218 assertions

So we have 612 test methods, for a resulting number of 1707 assertions.
As a side note, our code-to-test ratio is 1:2.3, that is, for each line of production code we have 2.3 lines of tests.
The suite takes about 115 seconds to execute (on my MacBook Pro Core 2 Duo).

So, what can we do to speed up our tests and have a more “feedback-friendly” test suite?
The first step toward solving this issue is to have some metrics to reflect on, so I developed this little Ruby module to collect test durations.
This is how you can use it too:

First, create a file called “test_time_tracking.rb” in the test folder of your Rails project. This should be its content:

module TestTimeTracking
  class ActiveSupport::TestCase
    def self.should_track_timing?
      not(ENV["tracking"].nil?)
    end

    setup :mark_test_start_time if should_track_timing?
    teardown :record_test_duration if should_track_timing?

    def mark_test_start_time
      @start_time = Time.now
    end

    def record_test_duration
      File.open("/tmp/test_metrics.csv", "a") do |file|
        file.puts "#{name().gsub(/,/, '_')},#{Time.now - @start_time}"
      end
    end
  end
end

Then, edit your “test_helper.rb” (again, under the test folder), to require and include the previous module.
E.g.

*test_helper.rb*

  ENV["RAILS_ENV"] = "test"
  require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
  require "test_time_tracking"

  class ActiveSupport::TestCase
    include TestTimeTracking
    ...

Then all you have to do is execute your rake task with the “tracking” option set, e.g.
tracking=on rake

At the end of the test suite execution you’ll find a CSV file (test_metrics.csv) in your /tmp folder.
This file contains a line for each test method executed, along with its duration in seconds.
I usually upload this file to Google Docs and then apply a formula to sort the methods from slowest to fastest.
A good formula is the following:
=Sort(A2:B612, B2:B612, FALSE)

The main limitation of the current implementation of this module is that every time the suite is executed with rake, the newly collected metrics are appended to the end of the previous file (if it exists), so you have to remember to move the file elsewhere before each run. I’m working on this issue and expect to find a better solution. Stay tuned!
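Until a proper fix lands, one way to work around the append problem is to delete the stale CSV once, when the test helper is loaded, so each rake run starts from a clean file. A sketch (the helper name is mine, not part of the module above):

```ruby
# Hypothetical helper: call it once from test_helper.rb, before the suite
# runs, so every rake invocation starts with a fresh metrics file.
def reset_test_metrics(path = "/tmp/test_metrics.csv")
  File.delete(path) if File.exist?(path)
end
```

The trade-off is that you lose the history across runs, so copy the file somewhere first if you want to compare suite timings over time.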

Michael Feathers on TDD

A short reflection by Michael Feathers on TDD, passing through mock objects and ending with the need to adopt practices that “force” us to reason and reflect on our code.

The excursus on the history of mock objects, born at Connextra, is also interesting:

The story I heard was that it was all started by John Nolan, the CTO of a startup named Connextra. John Nolan, gave his developers a challenge: write OO code with no getters.  Whenever possible, tell another object to do something rather than ask.  In the process of doing this, they noticed that their code became supple and easy to change.

The key sentence:

We need practices which help us achieve continuous discipline and a continuous state of reflection.  Clean Room and TDD are two practices which, despite their radical differences, force us to think with absolute precision about what we are doing.

Michael Feathers on testing private methods

From an InfoQ article, Michael Feathers’ position on testing private methods:

Michael Feathers suggested last year in The Deep Synergy Between Testability and Good Design that TDD encourages good design and, conversely, code that is not testable should make us think twice:

When I write tests and I have the urge to test a private method, I take it as a hint. The hint tells me that my class is encapsulating so much that it has ceased to be “understandable” by tests through its public interface. I listen to the hint, and factor my design differently. Usually, I end up moving the private method (and possibly some methods around it) to a new class where it can be non-private and accessible to tests.

I agree 100%!

What he says next in the original post, about the relationship between coupling, cohesion and testability, is also interesting.

In the end, it all comes down to cohesion and coupling.  If classes are deeply coupled with their neighbors, it is hard to control them in a test or observe them independently.  If a class isn’t cohesive, it may have some logic which is not easily exercisable through its public interface.

It seems that reverse is true also.  Classes which are hard to instantiate and use in a test harness are more coupled than they could be, and classes with private methods that you feel the urge to test, invariably have some sort of cohesion problem: they have more than one responsibility.

Listening to your tests: when the length of a constructor can teach us a lot…

Once again the posts on mockobjects.com remind me that there is a lot to learn from our own tests. This is the case with Test Smell: Bloated Constructor and Test Smell: Everything is mocked.

Bloated Constructor…

When doing TDD, you can end up with objects that have a giant constructor taking an endless list of parameters, typically the object's peers (that is, its collaborators). In such cases testing, especially with mocks, is painful. And in many cases people blame the mocks, accused of complicating the tests.

But often the difficulty in testing an object is a symptom of design problems in the object itself… Reflecting on these difficulties and bringing the insight back to the code under test allows us to improve its design. Or, as Steve Freeman puts it, being sensitive to complexity in the tests can help me clarify my designs.

In this case a long constructor might indicate that some of those arguments could be grouped to form a new object. This also simplifies the tests of the original object and reduces its responsibilities: all good and proper things.

Freeman uses two heuristics for extracting components:

When I’m extracting implicit components, I look first for two conditions: arguments that are always used together in the class, and that have the same lifetime. That usually finds me the concept, then I have the harder task of finding a good name.
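Freeman's heuristic can be sketched in Ruby. All the class and parameter names below are invented for illustration: four of the constructor arguments always travel together and share a lifetime, so they become an explicit object:

```ruby
# Before: a bloated constructor where host/port/username/password are always
# used together and live exactly as long as the connection they describe.
class LegacyOrderProcessor
  def initialize(host, port, username, password, logger)
    @host, @port, @username, @password, @logger = host, port, username, password, logger
  end
end

# After: the implicit component gets a name of its own...
class ConnectionSettings
  attr_reader :host, :port, :username, :password

  def initialize(host, port, username, password)
    @host, @port, @username, @password = host, port, username, password
  end
end

# ...and the original constructor shrinks to its real collaborators.
class OrderProcessor
  def initialize(settings, logger)
    @settings, @logger = settings, logger
  end
end
```

Testing OrderProcessor now needs only two arguments instead of five, and the grouped settings can be built once in a test helper.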

The second post I cited at the beginning, Test Smell: Everything is mocked, debunks a die-hard myth: you should not mock every class you come across.

In particular it is absolutely unnecessary to mock value objects, and above all: “don’t mock third-party libraries!”. A better approach is to let TDD and mocks drive out a thin layer of code that acts as an adapter toward the external APIs and provides our domain with the services it needs.

Then test this thin layer with integration tests that guarantee we hook into the library correctly.
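The thin adapter idea can be sketched like this; PaymentGateway and the create_transaction call are invented names standing in for a real third-party client:

```ruby
# Hypothetical thin adapter: only this class knows the third-party API, and
# it speaks the domain's language ("charge an order") to the rest of the app.
class PaymentGateway
  def initialize(client)
    @client = client # the real third-party object, injected
  end

  def charge(order)
    # translate the domain request into the library's vocabulary
    @client.create_transaction(:amount => order.total, :currency => "EUR")
  end
end
```

Unit tests can then mock PaymentGateway itself, while a handful of integration tests exercise it against the real library to make sure the translation is correct.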

Listening to your tests, or: improving your code starting from your test smells

Steve Freeman and Nat Pryce have started a series of interesting posts on their mockobjects blog about Test Smells, that is, about how to ‘listen’ to your tests to discover opportunities for improvement in the design of the code under test:

In our experience, when we find that our tests are awkward to write, it’s usually because the design of our target code can be improved.

Something I agree with, mostly because I have had the same experience in the past (and in the present too!).

Test Smells…

The first post in the series is Test Smell: I need to mock an object I can’t replace (without magic), which starts from rather ‘closed’ code with hidden dependencies (perhaps through Singletons) and refactors it step by step, opening it up: first introducing a fake (or stub, if you prefer) and then moving to mocks, showing how the code improves, first by making the dependencies explicit and then by assigning responsibilities better.

If you get the chance, have a look!

Comments on K. Ray’s post “Don’t Over-Use Mock Objects”

A while ago Giuliano pointed out this post by K. Ray: Don’t Over-Use Mock Objects

The article’s claims seem a bit strong to me (“Avoid them. Mocks make your tests more fragile and more tightly-coupled. And at the same time, they reduce the integration-test aspects of TDD. They make your tests larger and more complicated and less readable.”). Sure, if used badly mocks can cause excessive test fragility and readability problems (unless you use a bit of literate programming with jMock), and above all if you don’t use them as a prompt to reflect on possible weaknesses in your code (on this subject, the blog at www.mockobjects.com often shows illuminating examples of code that uses mocks incorrectly).

The objection that mocks reduce the robustness of tests is weak (“If you change an object’s API, a mock object’s API may not change, making your tests out-of-date with respect to your non-test code”), because with EasyMock you work against the ‘real’ APIs, so there is no chance of ending up with ‘out of date’ methods (I don’t know how jMock behaves in this respect).

In my opinion, the only observation with some foundation is the criticism of how interaction-based testers proceed (“In TDD, you normally write a test, write some code in a target class/method that passes the test. The third step is refactoring. … Test-Driving with Mocks inverts that order: you create your target class and a mock class up-front, and plan on how they interact, instead of evolving that interaction in TDD’s refactoring steps. You pre-judge the class design rather than evolve it.”). Proceeding the way interaction-based testers do usually sounds unnatural to me too, but I also think there are cases where this ‘exploration’ of the interactions is useful.
It may sound a bit excessive, but someone once said that programming mocks in a test is a bit like writing a sequence diagram in code form! :-)

We do agree, though, that mocks are useful for making tests more independent (“Mocks are tool for decoupling, and I do sometimes use them. I limit my use of Mocks to those situations where the real object will not reliably do what I want. Examples: simulating errors in writing a file; simulating connections with a remote server; simulating errors from remote server.”).