Vadim Tkachenko published interesting benchmark results with PCI-E based SSDs here. I recently got a chance to benchmark FusionIO’s 320 GB PCI-E drive. It was really impressive. My results, done on Windows with sqlio, are consistent (not identical, of course, but in the same ballpark) with what Vadim reported in that blog post, done with sysbench on Linux.
sqlio is a popular IO throughput testing tool from Microsoft. One caveat: I didn't get a chance to test throughput when the SSD is close to full. The key takeaways from my testing are:
1. I can confirm that, unlike traditional spindle-based hard disks, the drive shows virtually no difference between random and sequential IO;
2. Reads are significantly faster than writes. With 64 threads, reads achieve around 1.4 GB/s and writes around 400 MB/s.
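For readers who want to reproduce the comparison in point 1, the methodology is simple: time fixed-size block reads against a file, first in sequential order and then in shuffled order. Below is a minimal single-threaded Python sketch of that idea (sqlio and sysbench do the same thing at much larger scale, with unbuffered IO and many threads). Note that on a small scratch file like this, the OS page cache masks the device, so the numbers only illustrate the harness, not real drive throughput; the block size of 64 KB matches what I used in the sqlio runs, everything else is arbitrary.

```python
import os
import random
import tempfile
import time

BLOCK = 64 * 1024      # 64 KB blocks, same size as the sqlio runs
NUM_BLOCKS = 256       # 16 MB scratch file, small enough for a quick demo

def run_reads(path, offsets):
    """Read one BLOCK-sized chunk at each offset; return throughput in MB/s."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    elapsed = time.perf_counter() - start
    return (len(offsets) * BLOCK / (1024 * 1024)) / elapsed

# Create the scratch file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(BLOCK * NUM_BLOCKS))
    path = tmp.name

# Same set of offsets, two access orders.
sequential = [i * BLOCK for i in range(NUM_BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)

seq_mbps = run_reads(path, sequential)
rnd_mbps = run_reads(path, shuffled)
print(f"sequential: {seq_mbps:.0f} MB/s, random: {rnd_mbps:.0f} MB/s")
os.remove(path)
```

On a spindle you would expect the random number to be far lower; on a PCI-E SSD the two converge, which is exactly what the sqlio results showed.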
It is good to hear that another vendor, Virident, also offers similar PCI-E based SSDs at similar price points. As the technology behind SSDs matures and prices decrease, their impact on the database field could be significant.
Most, if not all, database platforms have put a lot of emphasis on achieving sequential reads and minimizing fragmentation. This is especially true for business intelligence databases. However, as SSDs gradually take over the high-end storage market for mission-critical databases, and with no difference between random and sequential access on those drives, our obsession with sequential reads and fragmentation may get relegated to the back burner. That would be significant, in my opinion.
I am not suggesting that SSDs make the fragmentation issue disappear entirely, mind you. Most database software, be it MySQL (the InnoDB storage engine in particular), SQL Server, or Oracle, uses prefetching, read-ahead, or whatever the lingo may be, to anticipate demand and bring additional data into cache before it is needed. This can be especially effective for business intelligence applications. Where prefetching is in play, fragmentation is still undesirable, but the speed of SSDs could make it a less pressing issue than it has been, which I think is significant.
A few years back, when multi-core servers started emerging, database software vendors invested time and energy in taking advantage of them, with new features added and white papers written for the NUMA architecture. It will be interesting to watch how SSDs emerging in the server market get exploited in the same way. Percona's white paper, Scaling MySQL Deployments With Percona Server and Virident tachIOn Drives, which talks about "scaling up" with SSDs instead of "scaling out" with sharding, presents a pretty interesting idea.