Archive for October, 2010

Interval partitioning with automated cleanup

For large databases partitioning is not only great, it is necessary. Unfortunately it adds new problems. One such problem is how to create new partitions so that they are available when they are needed. It often requires the use of an overflow partition, moving data to the real partitions when they come online. It also tends to require privileges not granted to developers.

With Oracle 11g this becomes much easier. With the new interval partitioning scheme the database automatically creates new partitions when they are needed. A template partition is created from the start; the system takes care of the rest.

For example, to create a new partition for each day:

create table test_tab (
  created_date   date default sysdate
 ,partition_key  date generated always as (trunc(created_date)) virtual
 ,test_col_1     number
) partition by range (partition_key)
    interval (NUMTODSINTERVAL(1,'DAY')) (
    partition test_tab_initial_part
      values less than (to_date('20100121', 'YYYYMMDD'))
  );

Note that a virtual column, another 11g feature, is used for the partitioning key. Try it out:

insert into test_tab (test_col_1) values (1);
select partition_name, high_value
  from user_tab_partitions where table_name = 'TEST_TAB';

PARTITION_NAME        HIGH_VALUE
--------------------- ----------------------------
SYS_P42               TO_DATE(' 2010-10-29 ...

A new partition has been created on the fly.

While this is great it solves only half the problem. The old partitions must be dropped eventually. How can we automate that?

First we need to find the partitions that can be dropped. This is a bit tricky as the high_value column in user_tab_partitions is a LONG that contains a SQL expression rather than a value we can use in comparisons. Our first task is to write a function that converts a high_value to a date:

create or replace function get_high_value_as_date(
  p_table_name     in varchar2,
  p_partition_name in varchar2
) return date as
  v_high_value varchar2(1024);
  v_date       date;
begin
  select high_value into v_high_value from user_tab_partitions
   where table_name = upper(p_table_name)
     and partition_name = upper(p_partition_name);
  execute immediate 'select ' || v_high_value || ' from dual' into v_date;
  return v_date;
end;
/
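A quick sanity check of the conversion function, listing each partition boundary as a real date (partition names such as SYS_P42 are system-generated and will differ on your system):

```sql
-- List every partition of TEST_TAB with its boundary converted to a DATE.
-- Note: this calls the function once per partition row, which is fine for
-- a modest number of partitions.
select partition_name,
       get_high_value_as_date(table_name, partition_name) as high_value_date
  from user_tab_partitions
 where table_name = 'TEST_TAB'
 order by 2;
```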

Armed with the conversion function we can loop over the partitions for our table and drop the obsolete ones:

declare
  c_days_to_keep   constant integer := 31;
  x_last_partition exception;
  pragma exception_init(x_last_partition, -14758);
begin
  for rec in (select table_name, partition_name
                from user_tab_partitions
               where table_name = 'TEST_TAB'
                 and get_high_value_as_date(table_name, partition_name) <
                     sysdate - c_days_to_keep) loop
    begin
      execute immediate 'alter table ' || rec.table_name ||
        ' drop partition ' || rec.partition_name;
    exception
      when x_last_partition then
        null; -- the template partition cannot be dropped; skip it
    end;
  end loop;
end;
/

Note that we cannot drop the template partition; attempting to do so raises ORA-14758, which is why that exception is caught and ignored.
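To make the cleanup fully automatic, the drop loop can be wrapped in a stored procedure and scheduled with DBMS_SCHEDULER. A minimal sketch, assuming the loop above has been placed in a procedure named purge_old_partitions (a hypothetical name for illustration):

```sql
-- Hypothetical scheduling sketch: runs the assumed purge_old_partitions
-- procedure every night at 02:00.
begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_TEST_TAB_PARTITIONS',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PURGE_OLD_PARTITIONS',
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => true);
end;
/
```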

Categories: Oracle

Map the Oracle ESB repository as a drive with Windows XP

The ESB in Oracle SOA Suite publishes its resources as a WebDAV repository. That means that it can be mapped as a Windows drive or mounted in Linux. This can be very useful as the ESB (intentionally) keeps all files when a service is deleted. They must be removed manually. With a network drive that becomes child’s play.

It should be possible to define a network drive as follows:

net use x: ""

That fails with:

System error 67 has occurred.

The same thing happens from the user interface in Windows Explorer. Fortunately there is a solution – add a network place instead. Select “Tools/Map Network Drive”. Click on the link “Sign up for online storage or connect to a network server”:

Click next until prompted for the address. Enter the host and port for the ESB server followed by “/esb/slide”:

Give the new network place a name. This will become the name of a folder where the repository can be found:

Finish the wizard. There will be a new folder present in “My Network Places”. It can be copied or moved somewhere else just like a normal folder. Expand it and voila – there are the ESB files.

Categories: SOA Suite

Pitfall with active-passive oc4j

We recently had some issues with SOA Suite using an active-passive oc4j configured in the usual way with service-failover and service-weight added and numprocs removed.

Extract from opmn.xml (service-failover and service-weight added, numprocs removed):

Failover worked, but sometimes opmn started BOTH instances. Only one should be running. To reproduce the error:

node 1: opmnctl shutdown
(wait until down)
node 2: opmnctl shutdown
(wait until down)
node 2: opmnctl startall
(wait until up)
node 1: opmnctl startall
(wait until up – running on node 2, stopped on node 1)
node 2: opmnctl stopproc process-type=oc4j_apsa
(wait until down – down on node 2, running on node 1)
node 1: opmnctl stopproc process-type=oc4j_apsa
(wait until down – stopped on both)
node 2: opmnctl startproc process-type=oc4j_apsa
(wait until up – started on BOTH nodes!!!)

The solution was simple, though not obvious. It is not enough to specify process-type or gid; ias-component must be supplied as well! To start:

opmnctl startproc ias-component=apsa_group process-type=oc4j_apsa

To stop:

opmnctl stopproc ias-component=apsa_group process-type=oc4j_apsa

To debug the odd behaviour opmnctl debug was invaluable.

Categories: SOA Suite

AQ bulk enqueue/dequeue

Bulk operations can improve performance significantly and (starting with Oracle 10g) it is possible to use them with AQ. An excellent guide can be found here.
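As a rough sketch of what a bulk dequeue can look like with dbms_aq.dequeue_array (the queue name TEST_QUEUE and the payload collection type test_payload_arr_t are assumptions for illustration; error handling omitted):

```sql
-- Sketch: dequeue up to 100 messages in a single round trip.
-- Assumes a queue TEST_QUEUE whose payload type has a matching
-- varray/nested table type test_payload_arr_t defined in the schema.
declare
  v_dequeue_opts dbms_aq.dequeue_options_t;
  v_msg_props    dbms_aq.message_properties_array_t
                   := dbms_aq.message_properties_array_t();
  v_msgids       dbms_aq.msgid_array_t := dbms_aq.msgid_array_t();
  v_payloads     test_payload_arr_t := test_payload_arr_t();
  v_count        pls_integer;
begin
  v_dequeue_opts.wait := dbms_aq.no_wait;  -- do not block if the queue is empty
  v_count := dbms_aq.dequeue_array(
    queue_name               => 'TEST_QUEUE',
    dequeue_options          => v_dequeue_opts,
    array_size               => 100,
    message_properties_array => v_msg_props,
    payload_array            => v_payloads,
    msgid_array              => v_msgids);
  dbms_output.put_line(v_count || ' messages dequeued');
end;
/
```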

Categories: Database, Oracle