Archive for February, 2010

Encoding and decoding base64 in PL/SQL

I recently needed to decode large base64 payloads in Oracle 10g and save the resulting blobs in a table. Unfortunately the built-in utl_encode package works only on plain raw buffers, not on lobs. I searched a bit, and while I found a few pointers I found no complete code. I wrote a function solving the specific problem for the customer, but then decided to write a more complete solution at home.

The package contains two functions:

create or replace package utl_base64 is
  function decode_base64(p_clob_in in clob) return blob;

  function encode_base64(p_blob_in in blob) return clob;
end;

Pretty straightforward. The implementation is a bit more involved:

create or replace package body utl_base64 is
  function decode_base64(p_clob_in in clob) return blob is
    v_blob blob;
    v_result blob;
    v_offset integer;
    -- chunk size in characters; must be a multiple of 4 so that
    -- every chunk can be decoded on its own
    v_buffer_size binary_integer := 48;
    v_buffer_varchar varchar2(48);
    v_buffer_raw raw(48);
  begin
    if p_clob_in is null then
      return null;
    end if;
    dbms_lob.createtemporary(v_blob, true);
    v_offset := 1;
    for i in 1 .. ceil(dbms_lob.getlength(p_clob_in) / v_buffer_size) loop
      dbms_lob.read(p_clob_in, v_buffer_size, v_offset, v_buffer_varchar);
      v_buffer_raw := utl_raw.cast_to_raw(v_buffer_varchar);
      v_buffer_raw := utl_encode.base64_decode(v_buffer_raw);
      dbms_lob.writeappend(v_blob, utl_raw.length(v_buffer_raw), v_buffer_raw);
      v_offset := v_offset + v_buffer_size;
    end loop;
    -- assigning one temporary lob to another makes a deep copy,
    -- so the copy survives the freetemporary call below
    v_result := v_blob;
    dbms_lob.freetemporary(v_blob);
    return v_result;
  end decode_base64;

  function encode_base64(p_blob_in in blob) return clob is
    v_clob clob;
    v_result clob;
    v_offset integer;
    -- 36 input bytes encode to 48 base64 characters, safely below
    -- the point where the built-in package starts inserting newlines
    v_chunk_size binary_integer := (48 / 4) * 3;
    v_buffer_varchar varchar2(48);
    v_buffer_raw raw(48);
  begin
    if p_blob_in is null then
      return null;
    end if;
    dbms_lob.createtemporary(v_clob, true);
    v_offset := 1;
    for i in 1 .. ceil(dbms_lob.getlength(p_blob_in) / v_chunk_size) loop
      dbms_lob.read(p_blob_in, v_chunk_size, v_offset, v_buffer_raw);
      v_buffer_raw := utl_encode.base64_encode(v_buffer_raw);
      v_buffer_varchar := utl_raw.cast_to_varchar2(v_buffer_raw);
      dbms_lob.writeappend(v_clob, length(v_buffer_varchar), v_buffer_varchar);
      v_offset := v_offset + v_chunk_size;
    end loop;
    v_result := v_clob;
    dbms_lob.freetemporary(v_clob);
    return v_result;
  end encode_base64;
end;
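
A quick round-trip sanity check might look like this (run with serveroutput on; the sample text is arbitrary):

declare
  v_raw raw(32) := utl_raw.cast_to_raw('Hello, base64!');
  v_in  blob;
  v_b64 clob;
  v_out blob;
begin
  dbms_lob.createtemporary(v_in, true);
  dbms_lob.writeappend(v_in, utl_raw.length(v_raw), v_raw);
  v_b64 := utl_base64.encode_base64(v_in);
  v_out := utl_base64.decode_base64(v_b64);
  -- should print the original text
  dbms_output.put_line(
    utl_raw.cast_to_varchar2(dbms_lob.substr(v_out, 32, 1)));
  dbms_lob.freetemporary(v_in);
end;
/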

There are a few important points about the implementation. The functions expect the base64 payload to have no newlines, as that is how it is passed down from BPEL. If there are newlines present the result will most likely be garbage. In addition, the buffer size must be evenly divisible by 4 and should be less than 64 characters, since the built-in package starts inserting newlines beyond that point.
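
If you cannot guarantee the absence of newlines, one option is to strip them before decoding. A minimal sketch, assuming replace accepts clob arguments as it does from 10g onwards (decode_clean is just an illustrative name):

function decode_clean(p_clob_in in clob) return blob is
begin
  -- remove carriage returns and line feeds before decoding
  return utl_base64.decode_base64(
    replace(replace(p_clob_in, chr(13)), chr(10)));
end;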

Categories: Database, Oracle

Read consistency with RAC

High-end systems often use Oracle RAC as their backend. It is certainly powerful, but also a bit complex, and (as usual) it pays to know which knobs to turn. One parameter in particular has caused me some grief: max_commit_propagation_delay, which defaults to 700 hundredths of a second, that is, 7 seconds. The good news is that it was deprecated in 10gR2 in favour of _immediate_commit_propagation, which defaults to true.

I have no practical experience with the new option, but the old one determines how transactions committed on one instance are propagated to the others. A non-zero value basically means that a short delay before a commit becomes visible on all instances is considered acceptable. For a busy multi-threaded application working with a load-balancing connection pool that can spell disaster: the default of 7 seconds is an eternity in such an application. Even a human user who saves a record and then tries to find it again can run into problems if the two operations are sent to different instances.

Setting the parameter to zero affects performance and the cost increases with the number of instances. Still, correctness is usually not optional, so if you are using Oracle RAC with an old database version be sure to check the parameter and make an informed decision. Some applications can live with delayed propagation, but most probably require a commit to be global.
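
To check where a system stands, something along these lines should do. The alter system assumes an spfile, and since the parameter cannot be changed on a running instance a restart is needed afterwards:

-- the value is in hundredths of a second; 700 means 7 seconds
select name, value, isdefault
  from v$parameter
  where name = 'max_commit_propagation_delay';

-- request immediate propagation on all instances
alter system set max_commit_propagation_delay = 0 scope=spfile sid='*';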

Categories: Database, Oracle

VMWare networking disabled with McAfee

Our company recently switched from Norton to McAfee. I don’t know if it is the product itself or the corporate security policy it enforces, but since then the host-only and NAT networks in VMWare no longer work for me. I have tried everything without luck. The networks are listed as connected in Windows, but are not listed by ipconfig, and the adapters are disabled and cannot be enabled in VMWare. Attempts to repair them fail with a message that TCP/IP is disabled for them, although according to the registry and the configuration screens it is not. Very frustrating.

The bridged network still works, but it shuts down when there is no network cable attached. Not exactly optimal for a laptop.

Finally I have found a workaround, if not a solution. VMWare recognizes Microsoft’s loopback adapter as a physical adapter that can be bridged. Simply install it, give it a fixed IP address and configure VMWare to use it. On the road the bridged loopback adapter is as good as the host-only networking option, and when a real network (wired or wireless) is available the loopback adapter can simply be disabled and the bridge reconfigured to the physical device. Some extra work to be sure, but it gets the job done.
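
For reference, the loopback adapter can also be installed and configured from the command line. A sketch, assuming devcon.exe from the Windows support tools is available and that Windows names the new connection as below (both the connection name and the address are examples):

  devcon install %windir%\inf\netloop.inf *MSLOOP
  netsh interface ip set address name="Loopback Connection" static 192.168.10.1 255.255.255.0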

Categories: Virtualization

Minimal VMWare images

In my line of work I typically carry ten or more VMWare images on my laptop, each loaded with a different application server or database combination. That means disk space is at a premium. Unfortunately the images available for download are often wasteful, so I usually have to trim them myself. This is a short summary of how to do it. I once had to spend a whole week (!) waiting for a download of a 70G image to complete, only to trim it to a small fraction of the original size when done. Some time spent pruning before publishing can be a wise investment.

I’m using VMWare Server 1.0.8, but most of the steps are generic.

If you create the image from scratch, be sure not to preallocate disk space and to split the disks into 2G files. That will make your life easier. The goal here is to make a lean developer image, not a performance-critical production image.

The first step for a finished image is to remove unused files and programs. This should be obvious, but it is often missed. For example, the installation files for server software may be quite large and are often left behind. Temporary files in browser caches and the OS temp directory can also add up, not to mention old log files. Once I even found a full Office installation. Uninstall and delete!

Next it is time to deal with the swap files. If they are located on the main disk(s) they should be cleared before the image is compressed. Ideally I like to put them on a separate disk that can be excluded from the archive – the swap files will not be available, but can be recreated when the image is restored. To clear the swap file on shutdown in Windows, start regedit and set HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutDown to 1. With Linux I usually remove the swap partition and recreate it when I need it instead.
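
On Windows the registry change can also be scripted, for example as part of preparing the image (one line):

  reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f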

What next? Defragment the remaining files. This is done in two passes: first defragment inside the operating system, then shut down the guest and defragment with VMWare:

  vmware-vdiskmanager.exe -d [virtual disk].vmdk

When the disks have been defragmented they should be wiped clean. If the unused space is zeroed out it will compress better. For Windows I normally use sdelete:

  sdelete -c c:

For Linux, this should work:

  cat /dev/zero > largefile; sync; rm largefile

In both cases, repeat for all partitions. Finally, shut down the system again and shrink the disks:

  vmware-vdiskmanager.exe -k [virtual disk].vmdk

Create a rar-file with maximum compression (for example as shown below), delete the directory and you are done. Feel free to compare the result before and after tuning; there should be a significant gain in disk space. Any other suggestions? Please let me know.
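
The compression step could look like this, assuming the rar command line tool (-m5 selects maximum compression, -r recurses into the image directory; the names are examples):

  rar a -m5 -r my-image.rar "My Image"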

Categories: Virtualization

Oracle B2B bulk resubmit with SQL

In 11g it will be possible to resubmit messages from the user interface, but in 10g that feature was missing. In early 2008 some resubmit support was added, see http://www.b2bgurus.com/2008/02/resubmission-feature.html. It included automatic retries and manual resubmit using a command line tool, and it made it possible to write more sophisticated tools supporting both inbound and outbound resubmission.

However, if thousands of messages are to be resubmitted based on some criteria nothing beats SQL. Luckily it is not that difficult to write a SQL script that resubmits messages that match custom search criteria.

For outbound messages it has always been fairly easy. All the information needed in order to create a new identical message and put it on the internal delivery channel for processing can be found in the b2b_instancemessage view. See Tech Note #004 for details.

For inbound messages it is harder, but there the resubmit support in the product helps. We can do what the command line tool does. Note that this is in no way supported by me or by Oracle; be sure to try it out on a test system before using it in production! It will probably not work as is with 11g either.

The script below finds matching messages, flags them for resubmit and alerts B2B to process them using the event queue. The condition can be quite complex, selecting exactly the inbound messages that are to be resubmitted.

set serveroutput on;
declare
  msg                 b2b_event_type;
  enqueue_options     dbms_aq.enqueue_options_t;
  message_properties  dbms_aq.message_properties_t;
  subscribers         dbms_aq.aq$_recipient_list_t;
  msg_handle          raw(16);
  n                   number(10);
begin
  dbms_output.put_line ('Resending messages...');
  begin
    subscribers(1) := sys.aq$_agent('b2buser', null, null);
    message_properties.recipient_list := subscribers;
    n := 0;
    for rec in (select b2bwiremessageid from b2b_instancemessage
      where direction = 'Inbound'
      -- ENTER CONDITION HERE ------------------------------------
      and b2btimestamp between sysdate - 20 and sysdate - 1
      and documenttype like 'INVOIC%'
      -- END OF CONDITION ----------------------------------------
    ) loop
      -- flag the wire message for retry; the embedded newline after
      -- B2BRetry=true is intentional and required
      update tip_wiremessage_rt
        set protocoltransportbinding = 'B2BRetry=true
' || protocoltransportbinding
        where protocoltransportbinding not like '%B2BRetry=%'
        and b2bwiremessageid = rec.b2bwiremessageid;
      -- notify B2B through its event queue so the message is processed
      msg := b2b_event_type(event_type => 3, event_id => 24,
                            id => rec.b2bwiremessageid);
      dbms_aq.enqueue(queue_name  => 'B2B_EVENT_QUEUE',
                      enqueue_options => enqueue_options,
                      message_properties => message_properties,
                      payload => msg,
                      msgid => msg_handle);
      n := n + 1;
    end loop;
    dbms_output.put_line ('Processed ' || n || ' messages');
    dbms_output.put_line ('Commit or rollback now!');
  exception
    when others then
      dbms_output.put_line ('SQL Error: ' || SQLCODE ||
                            ' - ' || SQLERRM);
      rollback;
  end;
end;
/

Note that the newline after B2BRetry=true is intentional and required!
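
Before flagging anything it is a good idea to preview the scope of the condition, for example with a plain count over the same view:

select count(*)
  from b2b_instancemessage
  where direction = 'Inbound'
  and b2btimestamp between sysdate - 20 and sysdate - 1
  and documenttype like 'INVOIC%';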

Again, this is not officially supported in any way, but perhaps it can save your skin if something has gone wrong somewhere and you need to get those messages in fast. Just remember not to run it in production before trying it out in a safe environment.

Categories: B2B, Database, Oracle

Configuring the Oracle FTP adapter for SFTP with ssh-dsa

The FTP adapter in Oracle SOA Suite (10.1.3.3) supports secure transfers with SFTP, with both password authentication and public key authentication using rsa and dsa keys. The documentation describes in detail how to set up public key authentication with ssh-rsa:

<config-property name="useSftp" value="true"/>
<config-property name="authenticationType" value="publickey"/>
<config-property name="preferredKeyExchangeAlgorithm" value="diffie-hellman-group1-sha1"/>
<config-property name="preferredCompressionAlgorithm" value="none"/>
<config-property name="preferredDataIntegrityAlgorithm" value="hmac-md5"/>
<config-property name="preferredPKIAlgorithm" value="ssh-rsa"/>
<config-property name="privateKeyFile" value="C:\my-secured-folder\id_rsa"/>
<config-property name="preferredCipherSuite" value="blowfish-cbc"/>
<config-property name="transportProvider" value="socket"/>

Unfortunately it is not quite so easy to use a dsa key. At first glance it should be enough to change preferredPKIAlgorithm to ssh-dsa and point privateKeyFile to a private dsa key, but that is not the case. The correct value for preferredPKIAlgorithm is ssh-dss:

<config-property name="preferredPKIAlgorithm" value="ssh-dss"/>

That is not what the documentation says, but it works. Naturally that may change in future patches, but for now there you are!

Categories: SOA Suite