Built-in timestamp for row changes in Oracle

I tend to include create and last-change timestamps in most tables when I build something, but how can you see when a record was changed if those columns are missing or cannot be trusted (for example, when they are maintained by application code rather than triggers and someone has updated the database directly)? It turns out that Oracle has a nifty feature for this: the ora_rowscn pseudo-column. It reports the system change number (SCN) for a row or block, and that can be converted into a timestamp. For example:

create table test_row_scn (
  t_id number,
  constraint pk_test_row_scn primary key (t_id)
) rowdependencies;
insert into test_row_scn values (1);
commit; -- ora_rowscn reflects the commit SCN, so commit before querying it

select scn_to_timestamp(ora_rowscn) from test_row_scn where t_id = 1;

The rowdependencies option makes the table track the SCN for each individual row, at a cost of 6 bytes per row. Without it the query still works, but it returns the SCN for the block rather than for the row, and that may be a bit misleading.
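To see the difference rowdependencies makes, here is a quick sketch (the table name is just an example). Without the option, every row in a block reports the SCN of the last commit that touched the block:

```sql
-- No rowdependencies: SCN is tracked per block, not per row
create table test_block_scn (t_id number);
insert into test_block_scn values (1);
insert into test_block_scn values (2);
commit;

-- Update only one of the rows
update test_block_scn set t_id = 20 where t_id = 2;
commit;

-- Both rows now report the SCN of the second commit, since they
-- share a block; the untouched row looks newer than it really is
select t_id, ora_rowscn from test_block_scn;
```

Also note that scn_to_timestamp only works for reasonably recent SCNs; for rows that have not changed in a long time the conversion fails with ORA-08181, even though ora_rowscn itself is still available.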

Categories: Oracle

Java 9 adds Yet Another Logging framework (YAL)

Java 9 and JEP 264 address a gaping hole in the Java ecosystem: the lack of logging frameworks and abstractions. Finally there is a logging API built into the platform for those who don’t like the other logging API already built into the platform!

The java.util.logging option is commonly referred to as JUL. Why not call the new one YAL?
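For the curious, the new API lives in java.lang.System.Logger. A minimal sketch (class and logger names are just examples); by default it delegates to JUL unless a System.LoggerFinder provider is installed:

```java
public class YalDemo {
    public static void main(String[] args) {
        // Obtain a platform logger; without a custom LoggerFinder
        // provider this is backed by java.util.logging under the hood.
        System.Logger log = System.getLogger("demo");

        // Levels are System.Logger.Level, not java.util.logging.Level;
        // the format string uses MessageFormat-style placeholders.
        log.log(System.Logger.Level.INFO, "Logging with {0}", "YAL");
    }
}
```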

Most Java EE applications these days ship with third-party components that use JUL, SLF4J, Commons Logging, Log4j and quite probably several additional frameworks for good measure. Unified logging is by no means trivial; it is a mess, and with YAL we get one more framework and probably several additional adapters.

I can see the need to isolate the core classes from JUL, but I’m still not convinced that this is what Java needs. Why not spend more time on my pet feature, value types, instead?

Categories: Java

Enable JCA resource adapter statistics in JBoss EAP 6

Performance tuning is guesswork without good instrumentation. Knowing the bottleneck is at least half the battle. Thus it is always nice to graph connection pool statistics for databases and resource adapters. With EAP 6, statistics can be enabled in the connection-definition element, for example:

      <connection-definition jndi-name="java:/QCF" enabled="true"
                             pool-name="QCF" statistics-enabled="true">
        ...
      </connection-definition>

Unfortunately it doesn’t work: most metrics (such as InUseCount) are reported as 0. Why? Apparently this is a bug that has been fixed in EAP 7. For 6.4.8 and later, however, statistics can be enabled by setting the system property -Dorg.jboss.as.connector.deployers.ra.enablePoolStatistics=true. Do it now…
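A minimal sketch of the workaround, assuming the standard bin/standalone.conf layout (the file path may differ in domain mode):

```shell
# Append the workaround property to JAVA_OPTS in bin/standalone.conf
# so every server start picks it up:
JAVA_OPTS="$JAVA_OPTS -Dorg.jboss.as.connector.deployers.ra.enablePoolStatistics=true"
echo "$JAVA_OPTS"
```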

Categories: Java, Performance

Read the Oracle alert log using SQL

In 11g (yes, that was ages ago) Oracle added x$dbgalertext, making it possible to query the alert log using SQL. For people like me who work close to the database, but often without true DBA access and seldom with access to the physical servers, that is a godsend. Being able to see the alerts without asking really helps. For example, to get all the reported ORA errors for the last hour:

select originating_timestamp, message_text
  from x$dbgalertext
  where originating_timestamp > sysdate - 1/24
  and message_text like '%ORA-%'
  order by originating_timestamp desc;

Talking a DBA into granting select is much easier than getting access to the server.
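One wrinkle: privileges on X$ fixed tables cannot be granted directly, so the usual approach is to ask the DBA to wrap the table in a view and grant select on that instead (the names below are just examples):

```sql
-- Run as SYS: expose the interesting columns through a view, since
-- X$ tables themselves are only visible to SYS
create or replace view alert_log_v as
  select originating_timestamp, message_text
    from x$dbgalertext;

grant select on alert_log_v to monitoring_user;
```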

Categories: Oracle

Negative queue depths in ActiveMQ

For quite some time we battled negative and sometimes spuriously inflated queue depths in ActiveMQ and its commercial A-MQ equivalent. Apparently there is a race condition somewhere in the broker. It is usually easy to see when it has happened in the graphs, as the number of messages on a queue suddenly jumps up or down dramatically in an instant. It does play havoc with monitoring, though: how can you set alarms if you can’t trust the reported queue depth? Restarting the broker fixes the counter, but that is not a viable solution.

Fortunately there is a configuration option that solves this. Define a policyEntry for the affected queues/topics and set useCache="false". That is a fairly large change, but it works.
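A sketch of what that looks like in activemq.xml, assuming the fix should apply to all queues (the `>` wildcard matches everything; surrounding broker attributes omitted):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- Disable the cursor cache to avoid the drifting counters -->
        <policyEntry queue=">" useCache="false"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
```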

Categories: Java

VMWare vs VirtualBox vs Parallels for DevOps

What is the best virtualization engine for Mac OS X? The answer depends on who you are. As always these days, I started by checking what Google has to say. After some reading I decided on VMWare: solid, with a long track record, the best support for uncommon guest platforms, and the best performance. However, after a short time I realized it was not for me.

Most comparisons focus on users with limited IT experience who want to run Windows as seamlessly as possible. They want Windows programs to work as if they were Mac programs. VMWare and Parallels both beat VirtualBox with ease on that score, but I really don’t care.

What I’m looking for in a DevOps role (not part of my job description, but in practice that’s what I’ve been doing for as long as I can remember) is good support for running multiple servers with other operating systems (mostly Linux), with full control over the finer details. There VirtualBox shines.

A simple example is port forwarding for NAT. With VirtualBox I can open a new tunnel into a machine with a few clicks, without restarting. With VMWare on Mac that is not possible at all through the user interface. It may be supported by editing a hidden configuration file, but then the forwarding appears to be global rather than limited to a specific machine. I’m not sure; I didn’t go that route. Options for hard disks are also much richer in VirtualBox.
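For reference, the same thing can be scripted with VBoxManage (the VM and rule names below are just examples); the controlvm variant applies to a running VM without a restart:

```shell
# While the VM is powered off, add an SSH forwarding rule on NAT adapter 1
# (host port 2222 -> guest port 22):
VBoxManage modifyvm "devbox" --natpf1 "guest-ssh,tcp,,2222,,22"

# Or do the same on a running VM, no restart needed:
VBoxManage controlvm "devbox" natpf1 "guest-ssh,tcp,,2222,,22"
```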

In short I found that the free VirtualBox engine beats the commercial alternatives for my use cases. What do you think?

Categories: Virtualization

Native memory leaks in Java

By request, here is the presentation I gave at JavaForum on native memory leaks in Java.

Categories: Java