DRAFT CBF DEFINITION


This document and the CBF definitions are still subject to change.

This is an updated version of the CBF draft proposal, following the Brookhaven imgCIF workshop. (It proposes a detailed syntax for the multiple pseudo-ASCII header section and binary section structure discussed at the Brookhaven workshop.

Note: the present CBFlib is experimenting with a "binary string" data value type, instead of the separate header section / binary section concept described here. Some comments have been added on possible changes. This approach will be evaluated, and may replace the previous idea. It should be regarded as experimental.)

Here is my attempt to define a CBF format; please excuse any inconsistencies and incompleteness.

I separate the definition from comments on discussion items by using round brackets to refer to notes kept separate from the main text, e.g. (1) refers to point 1 in the notes section. I suggest that I try to update and redistribute this definition as new suggestions are sent or consensus is reached.


The Crystallographic Binary File Format DRAFT PROPOSAL

ABSTRACT

This document describes the Crystallographic Binary File (CBF) format: a simple, self-describing binary format for efficient transport and archiving of experimental data for the crystallographic community. The format consists of a "CIF-compatible" header section contained within a binary file. The format of the binary file and the new CIF data items are defined.

NOTE:

  • All numbers are decimal unless otherwise stated.

  • The term byte refers to a group of eight bits.

1.0 INTRODUCTION

The Crystallographic Binary File (CBF) format is a complementary format to the Crystallographic Information File (CIF) [1], supporting efficient storage of large quantities of experimental data in a self-describing binary format (1).

The initial aim is to support efficient storage of raw experimental data from area-detectors (images) with no loss of information compared to existing formats. The format should be efficient both in terms of reading and writing speed and in terms of stored file size, and should be simple enough to be easily coded, or ported to new computer systems.

Flexibility and extensibility are required, and later the storage of other forms of data may be added without affecting the present definitions.

The aims are achieved by a simple binary file format, consisting of a variable length header section followed by binary data sections. The binary data is fully described by CIF data name / value pairs within the header sections. The header sections may also contain other auxiliary information.

The present version of the format only tries to deal with simple Cartesian data. This is essentially the "raw" data from detectors that is typically stored in commercial formats or in individual formats internal to particular institutes, but could be other forms of data. It is hoped that CBF can replace individual laboratory or institute formats for "home-built" detector systems, be used as an inter-program data exchange format, and be offered as an output choice by a number of commercial detector manufacturers specialising in X-ray and other detector systems.

This format does not imply any particular demands on processing software nor on the manner in which such software should work. Definitions of units, coordinate systems, etc. may be quite different. The clear, precise definitions within CIF, and hence CBF, help, when necessary, to convert from one system to another. Whilst no strict demands are made, it is clearly to be hoped that software will make as much use as is reasonable of the information relevant to the processing which is stored within the file. It is expected that processing software will give clear and informative error messages when it encounters problems within CBFs or does not support mechanisms necessary for inputting a file.

1.1 CBF and 'imgCIF'

CBF and 'imgCIF' are two aspects of the same format. Since CIFs are pure ASCII text files, a separate binary format has had to be defined to allow the combination of pseudo-ASCII header sections and binary data sections. This binary file format is the Crystallographic Binary File (CBF). The header sections are very close to the CIF standard, but must use operating system independent "line separators". The pair of characters carriage return, line-feed has been chosen since it allows the header sections to be viewed with standard system utilities on a very wide range of operating systems.

imgCIF is the name of the CIF dictionary which contains the terms specific to describing the binary data. Thus a CBF uses data names from the imgCIF and other CIF dictionaries.

(Translation programs will be written which will allow the whole of the CBF to be converted to CIFs which contain ASCII encoding of the binary data. These too will use the imgCIF dictionary terms.)

2.0 A SIMPLE EXAMPLE HEADER

Before fully describing the format we start by showing a simple, but important and complete, usage of the format: that of storing a single detector image in a file together with a small amount of useful auxiliary information. It is intended to be a useful example for people who like working from examples, as opposed to full definitions. It should also serve as an introduction or overview of the format definition. This example uses CIF DDL2 based dictionary items.

The example is an image of 768 by 512 pixels stored as 16-bit unsigned integers, in little-endian byte order. (This is the native byte ordering on a PC.) The pixel sizes are 100.5 by 99.5 microns. Comment lines starting with a hash sign (#) are used to explain the contents of the header. Only the ASCII part of the file is shown, but comments are used to describe the start of the binary section.

First the file is shown with the minimum of comments that a typical outputting program might add. Then it is repeated, but with "over-commenting" to explain the format.

Here is how a file might appear if listed on a PC or on a Unix system using 'more':

(Note: under the CBFlib scheme the '###_START_OF_HEADER' and '###_END_OF_HEADER' identifiers become meaningless. The '###_START_OF_BIN' and '###_END_OF_BINARY' identifiers disappear, but have equivalents within the "binary string" value.)

###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0

###_START_OF_HEADER

# Data block for image 1
data_image_1

_entry.id 'image_1'

                                  
# Sample details
_chemical.entry_id                           'image_1'
_chemical.name_common                        'Protein X'

# Experimental details
_exptl_crystal.id                            'CX-1A'
_exptl_crystal.colour                        'pale yellow'

_diffrn.id                                    DS1
_diffrn.crystal_id                            'CX-1A' 

_diffrn_measurement.diffrn_id                 DS1
_diffrn_measurement.method                    Oscillation
_diffrn_measurement.sample_detector_distance  0.15 
                                                  
_diffrn_radiation_wavelength.id               L1 
_diffrn_radiation_wavelength.wavelength       0.7653 
_diffrn_radiation_wavelength.wt               1.0

_diffrn_radiation.diffrn_id                   DS1 
_diffrn_radiation.wavelength_id               L1 

_diffrn_source.diffrn_id                      DS1
_diffrn_source.source                         synchrotron
_diffrn_source.type                          'ESRF BM-14'

_diffrn_detector.diffrn_id                    DS1
_diffrn_detector.id                           ESRFCCD1
_diffrn_detector.detector                     CCD
_diffrn_detector.type                        'ESRF Be XRII/CCD'


_diffrn_detector_element.id                   1
_diffrn_detector_element.detector_id          ESRFCCD1


_diffrn_frame_data.id                         F1
_diffrn_frame_data.detector_element_id        1
_diffrn_frame_data.array_id                   'image_1'
_diffrn_frame_data.binary_id                  1


# Define image storage mechanism

loop_
_array_structure.array_id 
_array_structure.binary_id
_array_structure.encoding_type        
_array_structure.compression_type     
_array_structure.byte_order           
image_1       1   unsigned_16_bit_integer  none  little_endian
                                      
loop_
_array_intensities.array_id           
_array_intensities.linearity          
_array_intensities.undefined_value    
_array_intensities.overload_value     
image_1     linear     0      65535

# Define dimensionality and element rastering
loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
image_1    1      768    1    increasing    
image_1    2      512    2    decreasing     

loop_
_array_element_size.array_id
_array_element_size.index
_array_element_size.size
image_1  1  100.5e-6
image_1  2  99.5e-6

###_END_OF_HEADER

###_START_OF_BIN







###_END_OF_BINARY

###_END_OF_CBF

Here the file header is shown again, but this time with many comment lines added to explain the format:

###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0

# This line starting with a '#' is a CIF and CBF comment line,
# but the first line with the three '#'s is a CBF identifier.
# The text '###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION' identifies
# the file as a CBF and must be present as the very first line of
# every CBF file. Following 'VERSION' is the version number of the
# file. A version 1.0 CBF should be readable by any program which
# fully supports the version 1.0 CBF definitions.

# Comment lines and white space (blanks and new lines) may appear
# anywhere outside the binary sections.
  
###_START_OF_HEADER

# The '###_START_OF_HEADER' identifier defines the start of an ASCII
# header section. This is where the details of the image and auxiliary
# information are defined.

# Data block for image 1
data_image_1

# 'data_' defines the start of a CIF (and CBF) data block. We've
# chosen to call this data block 'image_1', but this was an arbitrary
# choice. Within a data block a data item may only be used once.

_entry.id 'image_1'
                                  
# Sample details
_chemical.entry_id                           'image_1'
_chemical.name_common                        'Protein X'

# The apostrophes enclose the string which contains a space. 

# Experimental details
_exptl_crystal.id                            'CX-1A'
_exptl_crystal.colour                        'pale yellow'

_diffrn.id                                    DS1
_diffrn.crystal_id                            'CX-1A' 

_diffrn_measurement.diffrn_id                 DS1
_diffrn_measurement.method                    Oscillation
_diffrn_measurement.sample_detector_distance  0.15 
                                                  
_diffrn_radiation_wavelength.id               L1 
_diffrn_radiation_wavelength.wavelength       0.7653 
_diffrn_radiation_wavelength.wt               1.0

_diffrn_radiation.diffrn_id                   DS1 
_diffrn_radiation.wavelength_id               L1 

_diffrn_source.diffrn_id                      DS1
_diffrn_source.source                         synchrotron
_diffrn_source.type                          'ESRF BM-14'

_diffrn_detector.diffrn_id                    DS1
_diffrn_detector.id                           ESRFCCD1
_diffrn_detector.detector                     CCD
_diffrn_detector.type                        'ESRF Be XRII/CCD'


_diffrn_detector_element.id                   1
_diffrn_detector_element.detector_id          ESRFCCD1


_diffrn_frame_data.id                         F1
_diffrn_frame_data.detector_element_id        1
_diffrn_frame_data.array_id                   'image_1'
_diffrn_frame_data.binary_id                  1

# Many more data items can be defined, but the above gives the idea
# of a useful minimum set (but not minimum in the sense of compulsory,
# the above data items are optional in a CIF or CBF).
 
# Define image storage mechanism
loop_
_array_structure.array_id 
_array_structure.binary_id
_array_structure.encoding_type        
_array_structure.compression_type     
_array_structure.byte_order           
image_1       1   unsigned_16_bit_integer  none  little_endian
                                      
loop_
_array_intensities.array_id           
_array_intensities.linearity          
_array_intensities.undefined_value    
_array_intensities.overload_value     
image_1     linear     0      65535

# Define dimensionality and element rastering

# Here the size of the image and the ordering (rastering) of the
# data elements are defined. The CIF 'loop_' structure is used to
# define different dimensions. (It can be used for defining multiple
# images.)

loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
image_1    1      768    1    increasing    
image_1    2      512    2    decreasing     

loop_
_array_element_size.array_id
_array_element_size.index
_array_element_size.size
image_1  1  100.5e-6
image_1  2  99.5e-6


# The 'array_id' identifies the data items belonging to the same array.
# Here we have chosen the name 'image_1', but another name could have
# been used, so long as it is used consistently. The 'index' component
# refers to the dimension being defined, and the 'dimension' component
# defines the number of elements in that dimension. The 'precedence'
# component defines the precedence of rastering of the data: in this
# case the first dimension is the faster-changing dimension. The
# 'direction' component gives the direction in which the data rasters
# within a dimension. Here the data rasters from the minimum element
# towards the maximum element ('increasing') in the first (faster)
# dimension, and from the maximum element towards the minimum element
# ('decreasing') in the second (slower) dimension. (This is the default
# rastering order.)


# The storage of the binary data is now fully defined.

# Further data items could be defined, but this header ends with the
# '###_END_OF_HEADER' identifier.

###_END_OF_HEADER

# Here comments or white space may be added e.g. to pad out the header
# so that the start of the binary data is on a word boundary

# The '###_START_OF_BIN' identifier is in fact 32 bytes long and contains
# bytes to separate the "ASCII" lines from the binary data, bytes to
# try to stop the listing of the header, bytes which define the binary
# identifier which should be set to 1 to match the 'binary_id' defined
# in the header, and bytes which define the length of the binary
# section. In this case the length of the binary section is simply
# 768*512*2 = 786432 bytes (or more, if for some reason the binary
# section is made deliberately bigger than the binary data stored).

###_START_OF_BIN







###_END_OF_BINARY

# The '###_END_OF_BINARY' identifier must occur starting at the first
# byte after the number of bytes defined in the start of binary identifier.
# This may be used to check data integrity. (Following the '###_END_OF_BINARY'
# identifier the file is in "ASCII" mode again, so these comment lines
# are allowed.)


# The '###_END_OF_CBF' identifier signals the end of the CBF file.

###_END_OF_CBF
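
To make the example concrete, here is a minimal sketch in C of how a program might write the ASCII part of such a file. It is an illustration only, not part of the format definition: the file name and the choice of data items are arbitrary, and the point is simply that the carriage return, line-feed pair is written explicitly and that the file is opened in binary mode so that no operating system translation of line separators takes place.

/* Sketch: writing the ASCII part of a minimal CBF header.          */
/* The "\r\n" pair is written explicitly so the line separators are */
/* the same on every operating system; the "wb" mode stops the C    */
/* library translating them.                                        */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("image_1.cbf", "wb");    /* illustrative file name */
    if (f == NULL) return 1;

    fprintf(f, "###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0\r\n");
    fprintf(f, "\r\n");
    fprintf(f, "###_START_OF_HEADER\r\n");
    fprintf(f, "\r\n");
    fprintf(f, "data_image_1\r\n");
    fprintf(f, "\r\n");
    fprintf(f, "loop_\r\n");
    fprintf(f, "_array_structure.array_id\r\n");
    fprintf(f, "_array_structure.binary_id\r\n");
    fprintf(f, "_array_structure.encoding_type\r\n");
    fprintf(f, "_array_structure.compression_type\r\n");
    fprintf(f, "_array_structure.byte_order\r\n");
    fprintf(f, "image_1 1 unsigned_16_bit_integer none little_endian\r\n");
    fprintf(f, "\r\n");
    fprintf(f, "###_END_OF_HEADER\r\n");

    /* The 32-byte "start of binary" identifier and the raw data would
       be written next with fwrite() (see the overview below), followed
       by "\r\n###_END_OF_BINARY\r\n" and "###_END_OF_CBF\r\n".        */

    fclose(f);
    return 0;
}

A complete file would of course also contain the other loops shown above (_array_structure_list, _array_element_size, etc.) and the binary section itself.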

3.0 OVERVIEW OF THE FORMAT

This section describes the major "components" of the CBF format.
  1. CBF is a binary file, containing self-describing array data (e.g. one or more images) and auxiliary data (e.g. describing the experiment).

  2. It consists of pseudo-ASCII text header sections, which are "lines" of ASCII characters separated by "line separators" (the pair of ASCII characters carriage return and line-feed, ASCII 13 and ASCII 10), followed by zero, one or more binary sections. This structure may be repeated.

  3. The very start of the file has an identification item (2). This item also describes the CBF version or level. The identifier is:
    ###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION
    
    which must always be present so that a program can easily identify whether or not a file is a CBF, by simply inputting the first 41 characters. (The space is a blank (ASCII 32) and not a tab. All identifier characters are uppercase only.)

    The first hash means that this line within a CIF would be a comment line, but the three hashes mean that this is a line describing the binary file layout for CBF. (All CBF internal identifiers start with the three hashes, and all others must immediately follow a "line separator".) No whitespace may precede the first hash sign.

    Following the file identifier is the version number of the file, e.g. the full line might appear as:

    ###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0
    
    The version number must be separated from the file identifier characters by whitespace e.g. a blank (ASCII 32).

    The version number is defined as a major version number and a minor version number separated by a decimal point. A change in the major version may well mean that a program for the previous version cannot input the new version, as some major change has occurred to CBF (3). A change in the minor version may also mean incompatibility if the CBF has been written using some new feature, e.g. a new form of linearity scaling would be considered a minor version change, and a file containing the new feature would not be readable by a program supporting only an older version of the format. (A sketch in C of checking the file identifier and version number is given after this list.)

  4. Header Sections:

    1. The start of a header section is delimited by the following special identifier:
      ###_START_OF_HEADER
      
      followed by the carriage return, line-feed pair. (Another carriage return, line-feed pair immediately precedes this and all other CBF identifiers, with the exception of the CBF file identifier which is at the very start of the file.)

    2. A header section, including the identification items which delimit it, uses only ASCII characters, and is divided into "lines". The "line separator" symbols (carriage return, line-feed) are the same regardless of the operating system on which the file is written. (This is an important difference from CIF, but must be so: since the file contains binary data, it cannot be translated from one operating system to another in the way that pure ASCII text files can be.)

    3. The header section within the delimiting identification items obeys all CIF rules [1], with the exception of the line separators.

      e.g.

      • "Lines" are a maximum of 80 characters long. (For CBF it is probably best to allow for this maximum to be larger.)

      • All data names start with an underscore character.

      • The hash symbol (#) (outside a character string) means that all text up to the line separator is a comment.

      • Whitespace outside of character strings is not significant.

      • Data names are case insensitive.

      • The data value follows the data name, separated by whitespace, and may be of one of two types: text string (char) or number (numb). (The type is specified for each data name.)

      • Text strings may be delimited with single or double quotes, or blocks of text may be delimited by semi-colons occurring as the first character on a line.

      • The 'loop_' mechanism allows a data name to have multiple values.

      Any CIF data name may occur within the header section.

    4. A single header section may contain one or more data blocks (CIF terminology).

    5. The end of the header section is delimited by the following special identifier:
      ###_END_OF_HEADER
      
      followed by carriage return, line-feed.

  5. The header section must contain sufficient data names to fully describe the binary data section(s) which follow(s).

  6. Binary Sections:

    (Under CBFlib binary sections would be replaced by "binary string" values within a data name/value pair. The structure of the proposed "binary string" is similar to the binary sections described here.)

    1. The start of a binary data section is identified by a special 32-byte identifier which mixes ASCII and binary bytes such that the start of the identifier can easily be found using string search methods. (A sketch in C of decoding this identifier is given after this list.)

      The ASCII part is:

      ###_START_OF_BIN
      
      The full identifier is:
       Byte No. ASCII Symbol                  Byte Value (unsigned) (decimal)
                ------------
      
          1          #                               35
          2          #                               35
          3          #                               35
          4          _                               95 
          5          S                               83
          6          T                               84
          7          A                               65
          8          R                               82
          9          T                               84
         10          _                               95
         11          O                               79
         12          F                               70
         13          _                               95
         14          B                               66
         15          I                               73
         16          N                               78
         17      Form-feed                           12
         18     Substitute  (Control-Z)              26
         19     End of Transmission (Control-D)      04
         20                                         213
         21                    }    Bytes 21 - 24 define the binary section
         22                    }    identifier. This is a 32-bit unsigned little-
         23                    }    endian integer. The number is used to relate
         24                    }    the binary data to definitions in the header.
         25                      }   
         26                      }   Bytes 25 - 32 define the length of
         27                      }   the following binary section in bytes
         28                      }   as a 64-bit unsigned little-endian
         29                      }   integer. (The value 0 means the
         30                      }   size is unknown, and no other
         31                      }   pseudo-ASCII nor binary sections may
         32                      }   follow.)
      

      The binary characters serve specific purposes:

      • The form feed will separate the ASCII lines from the binary sections if the file is listed on most operating systems.

      • The Control-Z will stop the listing of the file on MS-DOS type operating systems.

      • The Control-D will stop the listing of the file on Unix type operating systems.

      • The unsigned byte value 213 (decimal) is binary 11010101 (octal 325, hexadecimal D5). This has the eighth bit set, so it can be used for error checking on 7-bit transmission. The bit pattern is also asymmetric, and the first bit is set as well, so the check would still work if the bit order were ever reversed (which is not a known concern).

      • (The carriage return, line-feed pair at the end of the first and other lines can also be used to check that the file has not been corrupted e.g. by being sent by ftp in ASCII mode.)

      • Bytes 21-24 define the binary id of the binary data. This id is also used within the header sections, so that binary data definitions can be matched to the binary data sections. 32 bits allows far more binary data sections to be addressed than can conceivably be needed.

      • Bytes 25-32 define the length in bytes of the binary section. This provides for enormous expansion from present image sizes, but volume and higher-dimensional data may need more than 32-bit sizes in the future.

        This value may be set to zero if this is the last binary section or header section in the file. This allows a program writing, for example, a single compressed image to avoid having to rewind the file to write the size of the compressed data. (For small files compression within memory may be practical, and this may not be an issue. However very large files exist where writing the compressed data "on the fly" may be the only realistic method.) It is however recommended that this value be set, as it permits concatenation of files.

        Since the data may have been compressed, knowing the numbers of elements and size of each element does not necessarily tell a program how many bytes to jump over, so here it is stored explicitly. This also means that the reading program does not have to decode information in the header section to move through the file.

    2. The "line separator" immediately precedes the "start of binary identifier", but blank spaces may be added prior to the preceding "line separator" if desired (e.g. to force word or block alignment).

    3. The binary data does not have to completely fill the bytes defined by the byte length value, but clearly cannot be greater than this value (except when the value zero has been stored, which means that the size is unknown, and no other headers follow). The values of any unused bytes are undefined.

    4. The end of binary section identifier starts at exactly the byte following the full binary section as defined by the length value. It consists of the carriage return / line-feed pair followed by:
          ###_END_OF_BINARY 
      
      and followed by the carriage return / line feed pair.

      The first "line separator" separates the binary data from the pseudo-ASCII line.

      This identifier is in a sense redundant since the binary section length value tells a program how many bytes to jump over to reach the end of the binary section. However, this redundancy has been deliberately added for error checking, and for possible file recovery in the case of a corrupted file.

      This identifier must be present at the end of every binary section, including sections whose length has not been explicitly defined within the file.

  7. Whitespace may be used within the pseudo-ASCII sections prior to the "start of binary section" identifier to align the start of binary data sections to word or block boundaries. Similar use may be made of unused bytes in binary sections.

    However, in general no guarantee is made of block nor word alignment in a CBF of unknown origin.

  8. The end of the file is explicitly indicated by the:
    ###_END_OF_CBF
    
    identifier (including the carriage return, line-feed pair)

  9. All binary sections defined in a single header section must follow that header section and precede any subsequent header section or the end of the file.

    The binary identifier values used within a header section, and hence in the immediately following binary section(s), must be unique.

    A different header section may reuse binary identifier values.

    (This allows concatenation of files without re-numbering the binary identifiers, and provides a certain level of localization of data within the file, to avoid programs having to search potentially huge files for missing binary sections.)

  10. The recommended file extension for a CBF is 'cbf'. This allows users to recognise file types easily, and gives programs a chance to "know" the file type without having to prompt the user, although programs should still check at least the file identifier to ensure that the file is indeed a CBF.

  11. CBF format files are binary files and when ftp is used to transfer files between different computer systems "binary" or "image" mode transfer should be selected.
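
As an illustration of points 3 and 6 above, here is a sketch in C of the two byte-level checks a reading program might make: recognising the file identifier and version number at the start of the file, and decoding the 32-byte "start of binary" identifier. The function names are illustrative only (they are not part of CBFlib or of this definition), and error handling is minimal.

/* Sketch: recognising a CBF and decoding the 32-byte "start of      */
/* binary" identifier.  The function names are illustrative only.    */
#include <stdio.h>
#include <string.h>

/* Returns 1 and fills *major / *minor if the file starts with the
   CBF file identifier, 0 otherwise. */
static int cbf_check_identifier(FILE *f, int *major, int *minor)
{
    static const char magic[] = "###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION";
    char buf[64];
    size_t n = fread(buf, 1, sizeof(buf) - 1, f);

    if (n < sizeof(magic) - 1) return 0;
    buf[n] = '\0';
    if (strncmp(buf, magic, sizeof(magic) - 1) != 0) return 0;
    return sscanf(buf + sizeof(magic) - 1, " %d.%d", major, minor) == 2;
}

/* Decodes the 32-byte identifier id[0..31]; the binary id (bytes 21-24)
   and byte length (bytes 25-32) are little-endian whatever the host. */
static int cbf_decode_start_of_bin(const unsigned char id[32],
                                   unsigned long *binary_id,
                                   unsigned long long *length)
{
    int i;

    if (memcmp(id, "###_START_OF_BIN", 16) != 0) return 0;
    if (id[16] != 12 || id[17] != 26 || id[18] != 4 || id[19] != 213)
        return 0;                       /* the four non-ASCII guard bytes */

    *binary_id = 0;
    for (i = 3; i >= 0; i--)            /* bytes 21-24 */
        *binary_id = (*binary_id << 8) | id[20 + i];

    *length = 0;
    for (i = 7; i >= 0; i--)            /* bytes 25-32 */
        *length = (*length << 8) | id[24 + i];

    return 1;
}

cbf_check_identifier would be called on a freshly opened file (in binary mode); cbf_decode_start_of_bin would be called on the 32 bytes read once the ASCII string '###_START_OF_BIN' has been located.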

3.1 SIMPLE EXAMPLE OF THE ORDERING OF IDENTIFIERS

Here only the ASCII part of the file structuring identifiers is shown. The CIF data items are not shown, apart from the 'data_' identifier which indicates the beginning of a data block.

This shows the structuring of a simple example, e.g. one header section followed by one binary section, such as could be used to store a single image.

###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0

###_START_OF_HEADER

data_

###_END_OF_HEADER

###_START_OF_BIN

###_END_OF_BINARY

###_END_OF_CBF

3.2 MORE COMPLICATED EXAMPLE OF THE ORDERING OF IDENTIFIERS

Here only the ASCII part of the file structuring identifiers is shown. The CIF data items are not shown, apart from the 'data_' identifier which indicates the beginning of a data block.

This shows a possible structuring of a more complicated example: two header sections, the first of which contains two data blocks and defines three binary sections. CIF comment lines, starting with a hash (#), are used to explain the structure.

###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0

# A comment cannot appear before the file identifier, but can appear
# anywhere else, except within the binary sections.

###_START_OF_HEADER

# Here the first data block starts
data_

# The 'data_' identifier finishes the first data block and starts the
# second
data_

###_END_OF_HEADER

# The first header section is finished, but the first binary section
# does not start until the 'start of binary' identifier is found. This
# part of the file is still pseudo-ASCII.

###_START_OF_BIN

###_END_OF_BINARY

# Following the 'end of binary' identifier the file is pseudo-ASCII
# again, so comments are valid up to the next 'start of binary'
# identifier.

# Second binary section. 

###_START_OF_BIN

###_END_OF_BINARY

# Third binary section.

###_START_OF_BIN

###_END_OF_BINARY

# Second Header section

###_START_OF_HEADER

data_

###_END_OF_HEADER

# Since this is the last binary section in the file, the byte length could
# optionally be set to zero, which indicates it is undefined. (All the
# other binary sections must have these values defined to allow the
# reader software to jump over sections.)

###_START_OF_BIN

###_END_OF_BINARY

###_END_OF_CBF
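
As the comments in the example above note, the byte length stored in each "start of binary" identifier (except possibly the last) is what allows a reader to jump over binary sections it is not interested in. A minimal sketch in C, assuming the file position is at the first byte of the binary section and that 'length' was taken from bytes 25-32 of the identifier (the function name is illustrative only):

/* Sketch: jumping over a binary section.  Assumes the file position  */
/* is at the first data byte, immediately after the 32-byte "start of */
/* binary" identifier, and that 'length' is the (non-zero) value from */
/* bytes 25-32 of that identifier.                                    */
#include <stdio.h>
#include <string.h>

static int cbf_skip_binary(FILE *f, unsigned long long length)
{
    static const char tail[] = "\r\n###_END_OF_BINARY\r\n";
    char buf[sizeof(tail)];

    /* Jump over the binary data (plus any unused padding bytes).
       For sections larger than LONG_MAX a 64-bit seek would be needed. */
    if (fseek(f, (long) length, SEEK_CUR) != 0) return 0;

    /* The end-of-binary identifier must start at the very next byte;
       reading and comparing it gives a simple integrity check.         */
    if (fread(buf, 1, sizeof(tail) - 1, f) != sizeof(tail) - 1) return 0;
    buf[sizeof(tail) - 1] = '\0';
    return strcmp(buf, tail) == 0;
}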

DATA NAME CATEGORIES

John Westbrook has proposed a number of data name categories as part of his DDL2 based "imgCIF" dictionary. This category list may be expanded to cover a structuring of the often multiple data-sets which might be used in a structural investigation. Here we only consider the categories concerned with storing an image (or other N-dimensional topographically regular Cartesian grid).

The _array_* categories cover all data names concerned with the storage of images or regular array data.

Data names from any of the existing categories may be relevant as auxiliary information in the header section, but data names from the _diffrn_ category are likely to be the most relevant, and a number of new data names in this category are necessary.

The "array" Class of Binary Data

The "array" class is used to store regular arrays of data values, such as 1-D histograms, area-detector data, series of area-detector data, and volume data. Normally such data is regularly spaced in space or time, however spatial distorted data could nevertheless be stored in such a format. There is only one data "value" stored per lattice position, although that value may be of type complex.

The "array" class is defined by data names from the ARRAY_STRUCTURE and ARRAY_STRUCTURE_LIST categories.

Here is a short summary of the data names and their purposes.

  • _array_structure.array_id: Alpha numeric identifier for the array structure
  • _array_structure.compression_type: Type of data compression used
  • _array_structure.byte_order: Order of bytes for multi-byte integers or reals
  • _array_structure.encoding_type: Native data type used to store elements.

    e.g. 'unsigned_16_bit_integer' is used if the stored image was 16 bit unsigned integer values, regardless of any compression scheme used.

"Array" Dimensions and Element Rastering and Orientation

The array dimension sizes, i.e. the number of elements in each dimension, are defined by _array_structure_list.dimension, which takes an integer value. This is used in a loop together with the _array_structure_list.index item to define the different dimensions for one or more arrays.

Fundamental to treating a long line of data values as a 2-D image or an N-dimensional volume or hyper-volume is the knowledge of the manner in which the values need to be wrapped. For the raster orientation to be meaningful we define the sense of the view:

For a detector image the sense of the view is defined as that looking from the crystal towards the detector.

(For the present we consider only an equatorial plane geometry, with 2-theta = 0 and the detector mounted vertically.)

The rastering is defined by the three data names _array_structure_list.index, _array_structure_list.precedence, and _array_structure_list.direction.

index refers to the dimension index, i.e. in an image, 1 refers to the X-direction (horizontal) and 2 refers to the Y-direction (vertical).

precedence refers to the order in which the data is wrapped.

direction refers to the direction of the rastering for that index.

We define a preferred rastering orientation, which is the default if the keyword is not defined: the raster starts in the upper left-hand corner, with the fastest change in the horizontal direction and the slower change from top to bottom.

(Note: with off-line scanners the rastering type depends on which way round the imaging plate or film is entered into the scanner. Care may need to be taken to make this consistent.)

"Array_Structure" Examples

To define an image array of 1300 by 1200 elements, with the raster faster in the first dimension, from left to right, and slower in the second dimension from top to bottom, the following header section might be used:

# Define image size and rastering
loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
image_1    1      1300    1    increasing
image_1    2      1200    2    decreasing

To define two arrays, the first a volume of 100 by 100 by 50 elements, fastest changing in the first dimension, from left to right, changing from bottom to top in the second dimension, and slowest changing in the third dimension from front to back; the second an image of 1024 by 1280 pixels, with the second dimension changing fastest from top to bottom, and the first dimension changing more slowly from left to right; the following header section might be used:

# Define array sizes and rasterings
loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
volume_a    1      100    1    increasing
volume_a    2      100    2    increasing
volume_a    3       50    3    increasing
slice_1     1      1024   2    increasing
slice_1     2      1280   1    decreasing
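
To make the rastering rules explicit, here is a sketch in C of the mapping from a logical element index to its position in the stored sequence of elements, derived from the _array_structure_list values. The function name and the 0-based index convention are ours, chosen for illustration.

/* Sketch: position of a logical element within the stored sequence.  */
/* Indices here are 0-based; for each k (0 <= k < ndim) dimension[k],  */
/* precedence[k] and direction_up[k] (1 for 'increasing', 0 for        */
/* 'decreasing') hold the _array_structure_list values for index k+1.  */
#include <stddef.h>

static size_t cbf_element_offset(int ndim,
                                 const size_t dimension[],
                                 const size_t index[],
                                 const int precedence[],
                                 const int direction_up[])
{
    size_t offset = 0;
    size_t stride = 1;
    int p, k;

    /* Walk the dimensions from the fastest (precedence 1) to the slowest. */
    for (p = 1; p <= ndim; p++) {
        for (k = 0; k < ndim; k++) {
            if (precedence[k] == p) {
                size_t c = direction_up[k]
                               ? index[k]
                               : dimension[k] - 1 - index[k];
                offset += c * stride;
                stride *= dimension[k];
            }
        }
    }
    return offset;
}

For the image_1 example in Section 2.0 (768 by 512, precedences 1 and 2, directions increasing and decreasing) this mapping stores the element with indices (0, 511) first and the element with indices (767, 0) last.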

"Array" Element Intensity Scaling

Existing data storage formats use a wide variety of methods for storing physical intensities as element values. The simplest is a linear relationship, but square-root and logarithmic scaling methods have attractions and are used. Additionally, some formats use a lower dynamic range to store the vast majority of element values, and use some other mechanism to store the elements which overflow this limited dynamic range. The problem of limited dynamic range storage is solved by the data compression methods byte_offsets and predictor_huffman (see the next section), but the possibility of defining non-linear scaling must also be provided.

The _array_intensities.linearity data item specifies how the intensity scaling is defined. Apart from linear scaling, which is specified by the value linear, two other methods are available to specify the scaling.

One is to refer to the detector system; knowledge of the manufacturer's method will then either be available to a program or not. This has the advantage that any system can be easily accommodated, but requires external knowledge of the scaling system.

The recommended alternative is to define a number of standard intensity linearity scaling methods, with additional data items when needed. A number of standard methods are defined by _array_intensities.linearity values: offset, scaling_offset, sqrt_scaled, and logarithmic_scaled. The "offset" methods require the data item _array_intensities.offset to be defined, and the "scaling" methods require the data item _array_intensities.scaling to be defined.

The above scaling methods allow the element values to be converted to a linear scale, but do not necessarily relate the linear intensities to physical units. When appropriate the data item _array_intensities.gain can be defined. Dividing the linearized intensities by the value of _array_intensities.gain should produce counts.

Two special optional data flag values may be defined, which both refer to the values of the "raw" stored intensities in the file (after decompression if necessary), and not to the linearized scaled values. _array_intensities.undefined_value specifies a value which indicates that the element value is not known. This may be due to missing data, e.g. a circular image stored in a square array, or to data values flagged as missing, e.g. behind a beam-stop. _array_intensities.overload_value indicates the intensity value at and above which values are considered unreliable. This is usually due to saturation.

"Array_intensities" Example

To define the characteristics of image_1 as linear, with a gain of 1.2, an undefined value of 0, and a saturated (overloaded) value of 65535, the following header section might be used:
# Define image intensity scaling
loop_
_array_intensities.array_id
_array_intensities.linearity
_array_intensities.gain
_array_intensities.undefined_value
_array_intensities.overload_value
image_1    linear   1.2    0   65535
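
As a minimal sketch of how a reading program might apply these items for the 'linear' case above (the function name and return codes are illustrative; the flag values are compared against the raw stored value, and dividing by the gain converts the linear intensity to counts):

/* Sketch: interpreting a raw element value when                    */
/* _array_intensities.linearity is 'linear'.  The flag values are   */
/* compared against the raw stored value; dividing by the gain      */
/* converts the (already linear) intensity to counts.               */
#define CBF_PIXEL_UNDEFINED  -1      /* illustrative return codes */
#define CBF_PIXEL_OVERLOADED -2
#define CBF_PIXEL_OK          0

static int cbf_linear_counts(double raw_value,
                             double undefined_value,
                             double overload_value,
                             double gain,
                             double *counts)
{
    if (raw_value == undefined_value) return CBF_PIXEL_UNDEFINED;
    if (raw_value >= overload_value)  return CBF_PIXEL_OVERLOADED;
    *counts = raw_value / gain;      /* e.g. gain = 1.2 in the example above */
    return CBF_PIXEL_OK;
}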

DATA COMPRESSION

One of the primary aims of imgCIF / CBF is to allow efficient storage, and efficient reading and writing of data, so data compression is of great interest. Despite the extra CPU overheads it can very often be faster to compress data prior to storage, as much smaller amounts of data need to be written to disk, and disk I/O is relatively slow. However, optimum data compression can result in complicated algorithms, and be highly data specific.

In Version 1 two types of lossless data compression algorithms are defined. In later versions other types including lossy algorithms may be added.

The first algorithm is referred to as byte_offsets and has been chosen for the following characteristics: it is very simple, may be easily implemented, and can easily lead to faster reading and writing to hard disk as the arithmetic involved is very simple. This algorithm can never achieve better than a factor of two compression relative to 16-bit raw data, but for most diffraction data the compression will indeed be very close to a factor of 2.

The second algorithm is referred to as predictor_huffman and has been chosen as it can achieve close to optimum compression on typical diffraction patterns, with a relatively fast algorithm, whilst avoiding patent problems and licensing fees. This will typically provide a compression ratio between 2.5 and 3 on well exposed diffraction images, and will achieve greater ratios on more weakly exposed data, e.g. 4 - 5 on "thin phi-slicing" images. Normally this would be a two-pass algorithm: a first pass to define symbol probabilities, and a second pass to entropy-encode the data symbols. However, the Huffman algorithm makes it possible to use a fixed table of symbol codes, so faster single-pass compression may be implemented with a small loss in compression ratio. With very fast CPUs this approach may provide faster hard disk reading and writing than the 'byte_offsets' algorithm owing to the smaller amounts of data to be stored.

There are practical disadvantages to data compression: the value of a particular element cannot be obtained without calculating the values of all previous elements, and there is no simple relationship between element position and stored bytes. If in general the whole array is required, this disadvantage does not apply. These disadvantages can be reduced by compressing different regions of the array separately, which is an approach available in TIFF, but this adds to the complexity of reading and writing images.

For simple predictor algorithms such as the byte_offsets algorithm a simple alternative is an optional data item which defines a look-up table of element addresses, values, and byte positions within the compressed data, and it is suggested that this approach be followed.

THE 'BYTE_OFFSETS' ALGORITHM

The byte_offsets algorithm will typically result in close to a factor of two reduction in data storage size relative to typical 2-byte diffraction images. It should give similar gains in disk I/O and network transfer. It also has the advantage that integer values up to 32 bits (31 bits unsigned) may be stored efficiently without the need for special over-load tables. It is a fixed algorithm which does not need to calculate any image statistics, so is fast.

The algorithm works because of the following property of almost all diffraction data and much other image data: the value of one element tends to be close to the value of the adjacent elements, and the vast majority of the differences use little of the full dynamic range. However, noise in experimental data means that run-length encoding is not useful (unless the image is separated into different bit-planes). If a variable length code is used to store the differences, with fewer bits used for the more probable differences, then compression ratios of 2.5 to 3.0 may be achieved. However, the optimum encoding then becomes dependent on the exact properties of the image, and in particular on the noise. Here a lower compression ratio is achieved, but the resulting algorithm is much simpler and more robust.

The byte_offsets algorithm is as follows:

  1. The first element of the array is stored as a 4-byte signed two's complement integer regardless of the raw array element type. The byte order for this and all subsequent multi-byte integers is little_endian regardless of the native computer architecture, i.e. the first byte is the least significant and the last byte the most significant. This value is the first reference value ("previous element") for calculating pixel to pixel differences.

  2. For each subsequent element, the value of the "previous element" is subtracted to produce the difference. For the first element on a line, the value to subtract is the value of the first element of the previous line. For the first element of a subsequent image (or plane), the value to subtract is the value of the first element of the previous image (or plane).

  3. If the difference lies within ±127, one byte is used to store the difference as a signed two's complement integer. Otherwise the byte is set to -128 (80 hex) and, if the difference lies within ±32767, the next two bytes are used to store the difference as a signed two-byte two's complement integer; otherwise -32768 (8000 hex, which will be output as 00 80 in little-endian format) is written into the two bytes and the following 4 bytes store the difference as a full signed 32-bit two's complement integer.

  4. The array element order follows the normal ordering as defined by the _array_structure_list entries index, precedence and direction.

It may be noted that one element value may require up to 7 bytes for storage; however, for almost all 16-bit experimental data the vast majority of element values will be within ±127 units of the previous element, so require only 1 byte of storage, and a compression factor of close to 2 is achieved.
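
A sketch in C of an encoder following steps 1 to 4 is given below, for a 2-D image held as 32-bit signed integers already in the stored raster order (the fastest-changing dimension first). The function names are ours; a decoder simply reverses the same steps.

/* Sketch: 'byte_offsets' compression of a 2-D image held as 32-bit      */
/* signed integers in stored raster order (nfast is the fast dimension). */
/* 'out' must be large enough (worst case 4 bytes plus 7 bytes per       */
/* further element); the number of bytes written is returned.  All       */
/* multi-byte values are written little-endian.                          */
#include <stddef.h>
#include <stdint.h>

static size_t put_le(unsigned char *out, int64_t value, int nbytes)
{
    uint64_t v = (uint64_t) value;          /* two's complement bit pattern */
    int i;
    for (i = 0; i < nbytes; i++)
        out[i] = (unsigned char) ((v >> (8 * i)) & 0xff);
    return (size_t) nbytes;
}

static size_t cbf_byte_offsets_encode(const int32_t *image,
                                      size_t nfast, size_t nslow,
                                      unsigned char *out)
{
    size_t x, y, n = 0;
    int32_t previous;
    int64_t diff;

    /* Step 1: the first element as a 4-byte signed little-endian integer. */
    n += put_le(out + n, image[0], 4);

    for (y = 0; y < nslow; y++) {
        for (x = 0; x < nfast; x++) {
            if (x == 0 && y == 0)
                continue;                            /* already stored in full */
            if (x == 0)
                previous = image[(y - 1) * nfast];   /* first element of previous line */
            else
                previous = image[y * nfast + x - 1]; /* previous element on this line  */

            /* Step 2: the difference (64-bit to avoid overflow). */
            diff = (int64_t) image[y * nfast + x] - (int64_t) previous;

            /* Step 3: 1, 1+2 or 1+2+4 bytes depending on the magnitude. */
            if (diff >= -127 && diff <= 127) {
                n += put_le(out + n, diff, 1);
            } else {
                n += put_le(out + n, -128, 1);
                if (diff >= -32767 && diff <= 32767) {
                    n += put_le(out + n, diff, 2);
                } else {
                    n += put_le(out + n, -32768, 2);
                    n += put_le(out + n, diff, 4);
                }
            }
        }
    }
    return n;
}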

THE 'PREDICTOR_HUFFMAN' ALGORITHM

Section to be added.

OTHER SECTIONS

Other sections will be added.

9.0 REFERENCES

1. S. R. Hall, F. H. Allen and I. D. Brown, "The Crystallographic Information File (CIF): a New Standard Archive File for Crystallography", Acta Cryst. A47, 655-685 (1991).

10.0 NOTES

(1) A pure CIF based format has been considered inappropriate given the enormous size of many raw experimental data-sets and the desire for efficient storage, and reading and writing.

(2) Some simple method of checking whether the file is a CBF or not is needed. Ideally this would be right at the start of the file. Thus, a program only needs to read in a small, fixed number of bytes and should then know immediately if the file is of the right type or not. I think this identifier should be some straightforward and clear ASCII string.

The underscore character has been used to avoid any ambiguity over spaces.

(Such an identifier should be long enough that it is highly unlikely to occur randomly, and if it is ASCII text, should be very slightly obscure, again to reduce the chance that it is found accidentally. Hence I added the three hashes, but some other form may be equally valid.)

(3) The format should maintain backward compatibility e.g. a version 1.0 file can be read in by a version 1.1, 3.0, etc. program, but to allow future extensions the reverse cannot be guaranteed to be true.


This page has been produced by Andy Hammersley (E-mail: hammersley@esrf.fr). Further modification is highly likely.