The dd command copies a large block of data to or from disk. Writing a file larger than the system's RAM minimizes the effect of disk caching, since the data cannot all be held in memory, and the trailing sync forces any buffered writes out to disk before timing stops.
The following example assumes our test system has 1GB of RAM, so we write a 2GB file, twice the size of RAM. The arithmetic is simple ( 250,000 blocks × 8KB = 2GB ).
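If your machine has a different amount of RAM, one way to derive the block count is to read it from /proc/meminfo (a Linux-specific sketch; MemTotal is reported in kB, so dividing by 8 yields 8KB blocks, and doubling targets twice RAM):

$ awk '/MemTotal/ { print int(2 * $2 / 8) }' /proc/meminfo

With the count in hand, run the timed write test: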
$ time sh -c "dd if=/dev/zero of=bigfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB) copied, 37.0113 s, 55.3 MB/s
real 0m40.910s
user 0m0.172s
sys 0m12.641s
It is very hard to argue with this dd test. Note that dd itself reports 55.3 MB/s, but that figure excludes the final sync; dividing the 2,048,000,000 bytes written by the 40.9 seconds of real time gives the sustained rate actually delivered to disk, roughly 50 MB/s.
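The same technique can measure read speed. As a sketch, assuming a Linux kernel that exposes /proc/sys/vm/drop_caches (2.6.16 and later), first flush the page cache so the data comes from disk rather than memory, then time dd reading the file back:

$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
$ time dd if=bigfile of=/dev/null bs=8k

Dropping the caches first is essential; otherwise much of bigfile would still be resident in RAM from the write test, and the read would look unrealistically fast.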