Improved BLOB Support in Flex2SQL v14.0

For many years, the Mertech drivers have offered a chunking BLOB interface for working with Binary Large Objects (BLOBs). This interface consisted of two basic methods: Load From File and chunked access. The Load From File method allowed a locally stored file to be written into a column using a Mertech command.


Chunked access allowed reading and writing a BLOB column 16KB at a time. These commands had limitations that drove us to rewrite our BLOB support for version 14: their syntax was complex, and when larger BLOB columns needed to be accessed (megabytes rather than kilobytes), they weren't as efficient as we wanted them to be. Our new BLOB interface addresses both of these issues and enables a couple of cool new use cases as well.

Single Command Interface

The simplest use case for our new BLOB interface is reading or writing a BLOB with a single command. Rather than exposing the BLOB directly through the file buffer, we created new commands for accessing BLOB columns. We took this approach to minimize the memory footprint of your application: with direct file-buffer access, you would have to tell us beforehand how large a value the BLOB column could hold, and your application's memory footprint would grow to match that predicted maximum.

Our solution requires minimal additional memory, and that memory is needed only while the BLOB is being accessed rather than all the time. Using the single-command interface to read from a BLOB is very simple:

Clear Customer
Move 100 to Customer.ID
Find EQ Customer By Index.1
if (Found) Begin
    Integer iValLength iOldArgSize
    String sPicture
    UChar[] ucPicAsArray
    Address pStr

    // Start by getting the size of the BLOB stored in this row
    SQL_GET_LOB_LENGTH Customer.Picture to iValLength

    // Make sure DataFlex is currently set to handle a string this large
    Get_Argument_Size to iOldArgSize
    if (iValLength > iOldArgSize) Set_Argument_Size iValLength

    SQL_GET_LOB Customer.Picture to sPicture

    // Most COM imaging controls that can accept an image from memory
    // want it stored in an unsigned char array, so allocate an array
    // of the proper size and then convert our string to a UChar[]
    Move (ResizeArray(ucPicAsArray, iValLength)) to ucPicAsArray
    Move (AddressOf(ucPicAsArray)) to pStr
    Move sPicture to pStr

    // Do something with it!!!!

    // Set the DataFlex argument size back to what it was so we aren't
    // hogging memory
    Set_Argument_Size iOldArgSize
End


Note how much of the code is making sure DataFlex can handle whatever is stored in the BLOB column. The actual reading of the BLOB is handled by just one command, operating on the valid record in the file buffer. If you are reading from a BLOB column that will never contain more data than the default argument size for your version of DataFlex, you can skip all the argument-size handling and read from the BLOB field with SQL_GET_LOB alone. Placing the contents into a UChar[] is not required; it is included here only for reference in case you need to do something similar with the contents of the BLOB.
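As a minimal sketch of that short form (same table and commands as above, assuming the column is known to fit within the default argument size):

Clear Customer
Move 100 to Customer.ID
Find EQ Customer By Index.1
if (Found) Begin
    String sPicture
    // No argument-size handling needed when the BLOB is known to fit
    // within the default argument size
    SQL_GET_LOB Customer.Picture to sPicture
End

Writing to a BLOB this way is also very simple: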

Clear Customer
Move 100 to Customer.ID
Find EQ Customer By Index.1
if (Found) Begin
    String sPicture

    // Code to load the sPicture variable from wherever. This might also
    // require dealing with Get_Argument_Size/Set_Argument_Size.

    Reread Customer
        SQL_SET_LOB Customer.Picture to sPicture
        Saverecord Customer
    Unlock
End


Often a BLOB column will either contain data or be NULL. You can of course set a column to NULL using embedded SQL, but since you may want a BLOB column to be cleared out as part of a DD operation, we also added a simple command that sets a BLOB column to NULL. It too is very easy to use:

Clear Customer
Move 100 to Customer.ID
Find EQ Customer By Index.1
if (Found) Begin
    Reread Customer
        SQL_SET_LOB_NULL Customer.Picture
        Saverecord Customer
    Unlock
End


Using just these four commands (SQL_GET_LOB_LENGTH, SQL_GET_LOB, SQL_SET_LOB, and SQL_SET_LOB_NULL), you can do most anything you need with BLOBs in DataFlex. This new interface is significantly faster, more logical, compatible with DDs, and easier to use. But there are some occasions where it may not quite suit your needs. To handle the remaining use cases, we have also provided a chunking version of this interface.

Chunking Command Interface

There are a few occasions where reading from or writing to a BLOB column in a single operation may be problematic. For instance, if the BLOB data is very large, it may be impractical to allocate a string big enough to hold the entire BLOB. Another use for the chunking interface relates to web applications, where a client might send chunks of data in separate calls to the server. Without a chunking interface, you would need to assemble these chunks using some clever programming or a third-party tool and then write the re-assembled data into your BLOB column in one operation. Using our chunking interface, you can commit the chunks of data one at a time as they are received, directly in a DataFlex function (see the sketch below). Each chunk can be arbitrarily large or small, based on your needs.
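Here is a minimal sketch of such a server-side function, which would live inside an object or web service class. The function name, parameters, and the bFirstChunk flag are hypothetical; the BLOB commands are the ones described in this post:

Function AppendPictureChunk Integer iCustomerID String sChunk Boolean bFirstChunk Returns Boolean
    Clear Customer
    Move iCustomerID to Customer.ID
    Find EQ Customer By Index.1
    If (Found) Begin
        Reread Customer
            If (bFirstChunk) Begin
                // The first chunk overwrites whatever the column holds
                SQL_SET_LOB Customer.Picture to sChunk
            End
            Else Begin
                // Subsequent chunks are appended to the BLOB
                SQL_ADD_LOB_CHUNK Customer.Picture to sChunk
            End
            Saverecord Customer
        Unlock
        Function_Return True
    End
    Function_Return False
End_Function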

Reading from a BLOB in chunks is very easy:

Integer iImageLen
Integer iOldVal iChunk iSoFar iChunkSize

Clear Customer
Move 100 to Customer.ID
Find EQ Customer By Index.1
if (Found) Begin
    // For this example we'll use a 2MB chunk size.
    Move 2097152 to iChunkSize

    SQL_GET_LOB_LENGTH Customer.Picture to iImageLen

    If (iImageLen > 0) Begin
        Get_Argument_Size to iOldVal
        Set_Argument_Size iChunkSize

        String sChunk

        Repeat
            // Read a chunk of data starting at offset X for length Y.
            // If this is the last chunk and it's smaller than iChunkSize,
            // the sChunk variable will be padded out to the Argument_Size
            // (which is also the chunk size in this case) with Character(0).
            // Since we know the total length of the BLOB, we can easily
            // calculate how much actual data the last chunk contains.
            SQL_GET_LOB_CHUNK Customer.Picture to sChunk OFFSET (iChunkSize * iChunk) Length iChunkSize

            // Do something with the chunk of data!!!

            Increment iChunk
            Add iChunkSize to iSoFar
        // (>= avoids reading an extra empty chunk when the BLOB length is
        // an exact multiple of the chunk size)
        Until (iSoFar >= iImageLen)

        Set_Argument_Size iOldVal
    End
End


Writing binary data from a file into a BLOB using chunks:

Clear Customer
Move 100 to Customer.ID
Find EQ Customer By Index.1
if (Found) Begin
    Integer iFileSize iChunkSize iOldVal

    // For this example, use a 1MB chunk size
    Move 1048576 to iChunkSize
    Get_Argument_Size to iOldVal
    Set_Argument_Size iChunkSize

    String sSaveFile sData

    // sSaveFile should already hold the path of the source file
    Direct_Input channel 9 ("BINARY:" + sSaveFile)

    // Determine the file size by seeking to the end, then rewind
    Set_Channel_Position 9 to -1
    Get_Channel_Position 9 to iFileSize
    Set_Channel_Position 9 to 0

    Read_Block sData iChunkSize

    Reread Customer
        // Write the first chunk; this forces an overwrite of whatever is
        // in the BLOB column right now.
        SQL_SET_LOB Customer.Picture to sData

        If (iFileSize > iChunkSize) Begin
            Repeat
                Read_Block sData iChunkSize
                // Append chunks of data to the BLOB
                SQL_ADD_LOB_CHUNK Customer.Picture to sData
            Until (SeqEof)
        End

        // In this case, we save after appending all the chunks. You could
        // alternatively do a Saverecord after each chunk is added. Note
        // that even though the BLOB may be much larger than argument_size,
        // you don't have to worry about this since the Mertech driver
        // handles the buffering internally.
        Saverecord Customer
    Unlock
    Close_Input

    Set_Argument_Size iOldVal
End


Note that in the above example, the ENTIRE BLOB is being buffered by the Mertech Driver before it is committed by the Saverecord command. In the case of a very large BLOB this could be problematic as well. You can further reduce the memory footprint of the above code by doing a Saverecord after each chunk is written. This will clear the internal buffer the driver allocates for the BLOB.
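For example, the append loop above could commit each chunk as it goes (a sketch using the same variables as the previous example):

Repeat
    Read_Block sData iChunkSize
    SQL_ADD_LOB_CHUNK Customer.Picture to sData
    // Saving after each chunk flushes the driver's internal BLOB buffer,
    // keeping memory use to roughly one chunk at a time
    Saverecord Customer
Until (SeqEof)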

Data Dictionary Support

One of the major advantages of this new BLOB interface is support for use inside of a Data Dictionary. With the older BLOB interface, the command to load a chunk of data into a BLOB column also forced a Saverecord. This made it difficult to use inside of a Data Dictionary. But with the new interface, you can use the commands to write directly into a BLOB during Save_Main_File and then when that procedure is forward sent, the Data Dictionary will issue a Saverecord internally and the BLOB data will be saved. This not only makes BLOB operations easy to do inside a Data Dictionary, it makes them transactional! So for instance, if you load up a BLOB during Save_Main_File and then a dependent operation on a parent fails for some reason, the entire transaction will be rolled back and the BLOB will not get written.
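A minimal sketch of this pattern in a DD subclass (the class name and the source of sPicture are hypothetical; the command and event flow are as described above):

Class Customer_DataDictionary is a DataDictionary
    Procedure Save_Main_File
        String sPicture
        // Load sPicture from wherever your image data comes from, then
        // write the BLOB; the Saverecord the DD issues when this is
        // forward sent commits it as part of the same transaction
        SQL_SET_LOB Customer.Picture to sPicture
        Forward Send Save_Main_File
    End_Procedure
End_Class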

Reading from BLOB columns is also easy to do right inside of a Data Dictionary, thanks to the OnPostFind event that the Data Dictionary triggers in newer versions of VDF. In older versions, the New_Current_Record method can be used to achieve the same effect.
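For instance, the read side might look like this (a sketch; the OnPostFind signature shown is assumed from newer DataFlex versions, and sPicture is a hypothetical local here):

Procedure OnPostFind Integer eMessage Boolean bFound
    Forward Send OnPostFind eMessage bFound
    If (bFound) Begin
        String sPicture
        // Pull the BLOB for the freshly found record
        SQL_GET_LOB Customer.Picture to sPicture
    End
End_Procedure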

Conclusion

The new BLOB interface introduced in v14 of the driver is not only easier to use, but much faster: compared to the older BLOB interface, we've seen speed improvements in the 20x range for reading and writing multi-megabyte BLOBs. Because this new interface works with transactions and the Saverecord command, it can be used very easily right inside a Data Dictionary to read a BLOB (during OnPostFind) or to write one (during Save_Main_File). This alone makes BLOBs a first-class feature in the Mertech drivers. We can't wait to see how you put this new interface to use!
