Bulk vs Individual Compression
I’d like to share something very brief and very obvious: compression works better with large amounts of data. That is, if you have to compress 100 sentences you’d better compress them in bulk rather than one sentence at a time. Let me illustrate that:
```java
public static void main(String[] args) throws Exception {
    List<String> sentences = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        StringBuilder sentence = new StringBuilder();
        for (int j = 0; j < 100; j++) {
            sentence.append(RandomStringUtils.randomAlphabetic(10)).append(" ");
        }
        sentences.add(sentence.toString());
    }
    byte[] compressed = compress(StringUtils.join(sentences, ". "));
    System.out.println(compressed.length);
    System.out.println(sentences.stream().collect(Collectors.summingInt(sentence -> compress(sentence).length)));
}
```
The compress method uses commons-compress, which makes it easy to generate results for multiple compression algorithms:
```java
public static byte[] compress(String str) {
    if (str == null || str.length() == 0) {
        return new byte[0];
    }
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (CompressorOutputStream gzip = new CompressorStreamFactory()
            .createCompressorOutputStream(CompressorStreamFactory.GZIP, out)) {
        gzip.write(str.getBytes("UTF-8"));
        // close explicitly so the compressor flushes everything before we read the buffer
        gzip.close();
        return out.toByteArray();
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
}
```
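The method above hardcodes GZIP; presumably the other rows in the table below were produced by swapping the algorithm constant. A parameterized variant (my naming, not from the original article) could look like this:

```java
// Same as compress(), but with the algorithm passed in, e.g.
// CompressorStreamFactory.GZIP, CompressorStreamFactory.BZIP2 or CompressorStreamFactory.LZ4_FRAMED
public static byte[] compress(String str, String algorithm) {
    if (str == null || str.length() == 0) {
        return new byte[0];
    }
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (CompressorOutputStream compressor = new CompressorStreamFactory()
            .createCompressorOutputStream(algorithm, out)) {
        compressor.write(str.getBytes("UTF-8"));
        compressor.close();
        return out.toByteArray();
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
}
```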
The results are as follows, in bytes (note that there’s some randomness, so algorithms are not directly comparable):
| Algorithm | Bulk | Individual |
|---|---|---|
| GZIP | 6590 | 10596 |
| LZ4_FRAMED | 9214 | 10900 |
| BZIP2 | 6663 | 12451 |
Why is that an obvious result? Because of the way most compression algorithms work: they look for patterns in the raw data and build a map of those patterns (a very rough description). With more data, patterns repeat more often and can be reused more effectively, and the fixed per-stream overhead (headers, the pattern map itself) is paid once rather than once per sentence.
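To illustrate the pattern point, here is a minimal sketch of my own (not from the original article) that compresses a highly repetitive string and an equally long string of random letters using the JDK's built-in GZIPOutputStream; the repetitive input shrinks to a tiny fraction of its size, while the random one barely shrinks at all (String.repeat assumes Java 11+):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class PatternDemo {

    // gzip a string and return the compressed size in bytes
    static int gzipSize(String input) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(input.getBytes(StandardCharsets.UTF_8));
        }
        return out.size();
    }

    public static void main(String[] args) throws Exception {
        // 10,000 characters made of the same 10-character token repeated over and over
        String repetitive = "abcdefghij".repeat(1000);

        // 10,000 random lowercase letters with almost no repeated patterns
        StringBuilder random = new StringBuilder();
        Random rnd = new Random();
        for (int i = 0; i < 10_000; i++) {
            random.append((char) ('a' + rnd.nextInt(26)));
        }

        // the repetitive input compresses dramatically better than the random one
        System.out.println("repetitive: " + gzipSize(repetitive) + " bytes");
        System.out.println("random:     " + gzipSize(random.toString()) + " bytes");
    }
}
```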
How is that useful? In big data scenarios where the underlying store supports per-record compression (e.g. a database or search engine), you may save a significant amount of disk space if you bundle multiple records into one stored/indexed record.
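If you do bundle records, you also need to unpack them on read. The article only shows the compress side, so here is a rough sketch of the reverse direction, assuming the same commons-compress setup and the ". " delimiter used in the example above (the method name and the delimiter handling are mine and purely illustrative; a real delimiter would have to be something that cannot appear inside a record):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.apache.commons.compress.compressors.CompressorInputStream;
import org.apache.commons.compress.compressors.CompressorStreamFactory;

public class Bundles {

    // hypothetical counterpart to compress(): decompress a bundled blob and split it back into records
    public static List<String> decompressBundle(byte[] compressed) {
        if (compressed == null || compressed.length == 0) {
            return Collections.emptyList();
        }
        try (CompressorInputStream gzip = new CompressorStreamFactory()
                .createCompressorInputStream(CompressorStreamFactory.GZIP,
                        new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = gzip.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            String bundle = out.toString("UTF-8");
            // split on the delimiter the records were joined with; assumes it never occurs inside a record
            return Arrays.asList(bundle.split("\\. "));
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```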
This is not generically useful advice, though; you should check how your particular datastore handles compression. For example, MS SQL Server supports both row and page compression. Cassandra compresses at the SSTable level, so it may not matter how you structure your rows. And if you are storing data in plain files, putting it all in one file and compressing that is more efficient than compressing multiple files separately.
Disk space is cheap, so playing with data bundling and compression may be seen as premature optimization. But in systems that operate on large datasets it’s a decision that can save a lot in storage costs.
Published on Java Code Geeks with permission by Bozhidar Bozhanov, partner at our JCG program. See the original article here: Bulk vs Individual Compression. Opinions expressed by Java Code Geeks contributors are their own.