
Delete StatefulSet in Kubernetes fails with NotFound

I have created a StatefulSet running MongoDB in Kubernetes on AKS. In order to test the persistent volumes I tried to delete it, but to my surprise that failed:

Error from server (NotFound): the server could not find the requested resource

What? Getting the resource worked, but kubectl describe failed too. What to do? I finally found out that my kubectl client was incompatible with the server. Apparently Kubernetes supports a client one minor version higher or lower than the server, but my kubectl was at 1.10 while the server was at 1.8, a skew of two.
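A quick way to rule this out before chasing stranger explanations is to compare the client and server minor versions. A minimal sketch; minor_skew is a hypothetical helper, and with a live cluster you would feed it the two GitVersion strings printed by kubectl version:

```shell
# Compute the minor-version skew between two Kubernetes version strings.
# Anything above 1 is outside the supported client/server skew.
minor_skew() {
  # $1 and $2 are version strings like "v1.10.0" and "v1.8.4"
  m1=$(echo "$1" | cut -d. -f2)
  m2=$(echo "$2" | cut -d. -f2)
  if [ "$m1" -gt "$m2" ]; then echo $((m1 - m2)); else echo $((m2 - m1)); fi
}

minor_skew v1.10.0 v1.8.4   # prints 2: outside the supported +/-1 skew
```

If the skew is more than one, download a kubectl matching the server version before debugging anything else.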

Categories: Kubernetes

Terminal in docker exec a mess

Fairly recently the terminal for docker exec has started to misbehave. When I enter a running WildFly container to view the logs, the output from less, vi and other tools becomes garbled. The reason appears to be that the terminal size is not detected (#33794). The solution on Linux/Mac is to pass in the size:


docker exec -e COLUMNS="`tput cols`" -e LINES="`tput lines`" -it containername bash

Unfortunately that doesn’t work on Windows. The mode command can provide the columns, but not the number of lines, which must be hard-coded.
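A small wrapper keeps this from getting tedious. A sketch, assuming a POSIX shell; the helper names are mine, and the fallback of 80x24 is the hard-coded size you would use where detection fails:

```shell
# Detect the terminal size via tput, falling back to a hard-coded 80x24
# when tput is unavailable or there is no TTY (as in the Windows case).
term_cols() { echo "${COLUMNS:-$(tput cols 2>/dev/null || echo 80)}"; }
term_lines() { echo "${LINES:-$(tput lines 2>/dev/null || echo 24)}"; }

# Usage (not run here; container name and shell are placeholders):
#   docker exec -e COLUMNS="$(term_cols)" -e LINES="$(term_lines)" \
#     -it mycontainer bash
echo "$(term_cols)x$(term_lines)"
```

Exported COLUMNS/LINES take precedence, so on Windows you can simply set those two variables by hand.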

Categories: Docker

Azure SQL Server VM backup failures

SQL Server VMs in Azure can be backed up using the SQL Server configuration blade in the portal. Once this has been enabled, the logs can be swamped by errors similar to:

2018-06-04 11:44:59.88 Backup BACKUP failed to complete the command BACKUP LOG master. Check the backup application log for detailed messages.

Fear not, it is only temporary. A SQL Server backup consists of two parts: full backups of the database(s) and transaction log backups. Until a full backup has been taken, the transaction log backups fail with the error above. The errors should go away once the first full backup has completed successfully.

As a side note, there is no “application log”. The message refers to the calling application, in this case the Azure SQL Server extension, and it doesn’t log anything useful.
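To see whether that first full backup has landed yet, you can query the backup history in msdb. A sketch; backup_check_sql is a hypothetical helper that just prints the T-SQL, which you would pipe into sqlcmd on the VM (server and credentials are placeholders), e.g. backup_check_sql | sqlcmd -S localhost -E:

```shell
# Print T-SQL listing the latest finished backup per database and type.
# In msdb.dbo.backupset, type 'D' is a full database backup and 'L' is a
# transaction log backup, so a 'D' row means the errors should stop.
backup_check_sql() {
  cat <<'SQL'
SELECT database_name, type, MAX(backup_finish_date) AS last_finished
FROM msdb.dbo.backupset
GROUP BY database_name, type
ORDER BY database_name;
SQL
}

backup_check_sql
```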

Categories: Database

BizTalk 2016 wants to disable private key protection

BizTalk 2016 failed to receive AS2 messages with this error:

The MIME encoder failed to sign the message because the certificate has private key protection turned on or the private key does not exist. Please disable private key protection to allow BizTalk to use a certificate for signing.

Sounds straightforward, except that private key protection was already disabled. And the user profile for the BizTalk user was loaded, so nothing wrong there. I went through the documentation several times and found nothing. Finally I discovered that the culprit was the cryptographic provider.

Basically the problem is that BizTalk 2016 still relies on the ancient .NET Framework 3.5, which lacks support for CNG key storage providers (KSP). Check the certificate:


certutil -dump -p password cert.pfx

If it says “Provider = Microsoft Software Key Storage Provider”, BizTalk will fail and complain about private key protection. Fix it by re-exporting the certificate with openssl:


openssl pkcs12 -in my-original-cert.pfx -out temp.pem
openssl pkcs12 -export -in temp.pem -out my-fixed-cert.pfx

Import my-fixed-cert.pfx into the personal certificate store (and, if the certificate is self-signed, also into the trusted root store). Update BizTalk to use the updated certificate and the problem should hopefully be solved. If you are starting from scratch, specify the legacy provider instead:


New-SelfSignedCertificate -Provider "Microsoft Strong Cryptographic Provider" ...
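If you want to try the openssl round trip end to end without risking a production certificate, a throwaway self-signed one works. A self-contained sketch; the file names, subject and password are all placeholders, and the point is that openssl writes a plain PFX without KSP metadata, so Windows imports the key under a legacy CSP that .NET 3.5 can use:

```shell
set -e
# Generate a throwaway self-signed certificate and key (placeholder names)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=biztalk-signing-test" \
  -keyout key.pem -out cert.pem 2>/dev/null
# Package it as a PFX, standing in for the problematic original
openssl pkcs12 -export -inkey key.pem -in cert.pem \
  -out my-original-cert.pfx -passout pass:secret
# The fix from the post: unpack to PEM and repack as a fresh PFX
openssl pkcs12 -in my-original-cert.pfx -out temp.pem -nodes -passin pass:secret
openssl pkcs12 -export -in temp.pem -out my-fixed-cert.pfx -passout pass:secret
echo "created my-fixed-cert.pfx"
```

Note that temp.pem contains the unencrypted private key, so delete it once the new PFX is verified.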

Categories: Windows

Install .NET 3.5 on Windows Server 2016 for BizTalk

Why would Windows Server 2016 need .NET 3.5? Well, ask Microsoft: it is required by BizTalk 2016, a product that also requires an older SQL Server version (it will not install against SQL Server 2017) with Windows authentication. Anyway, put that aside. We need the feature, and adding it should be a simple thing. Unfortunately it is not: the wizard complains that it can’t find the files.

After some digging I managed to get it to work. First of all, open the group policy editor (gpedit.msc). Navigate to Local Computer Policy -> Computer Configuration -> Administrative Templates -> System -> Specify settings for optional component installation and component repair. Set the policy to Enabled with “Download repair content…” checked. Close the editor and run the following in a command prompt as administrator:


dism /online /enable-feature /featurename:NetFX3 /all /LimitAccess

Hopefully that should do the job. Phew!

Categories: Windows

Azure AKS default VM size cannot be changed

Kubernetes is pushed by all the major cloud providers, and Microsoft recently (well, at the end of 2017) rolled out managed Kubernetes, AKS. It is a great offering, but there are problems. That is to be expected from a product still in preview, but take care and expect a somewhat bumpy ride if you take the leap!

Create a new cluster, for example with:


az aks create --resource-group rgakstwe \
 --name akstwe \
 --node-count 2 \
 --ssh-key-value ~/.ssh/id_rsa.pub \
 --location westeurope

This creates two nodes with the default AKS VM size, at this point Standard_D1_v2. All is well, but what happens when you want to add persistent volumes? Unfortunately premium storage disks are only available for some VM sizes. In particular they are NOT available for Standard_D1_v2, so while it is possible to create a persistent volume claim, it is not possible to run a deployment that uses it.

What to do? There is no az aks update command for changing the VM size. According to Microsoft the only way forward is to manually change the size of each individual VM. That works, but there is a catch: the default size is still used when the cluster is scaled, so if it scales down and then up again the manually tweaked nodes are replaced with nodes of the default VM size.

At present that is the way it is. When AKS exits preview and is rolled out for real there will presumably be a way to change the default VM size, but for now it is important to think ahead, as the only way to change the default used for scaling is to recreate the cluster.

In other words, plan ahead and don’t start too small.
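Concretely, that means choosing a premium-storage-capable size at creation time with --node-vm-size. A sketch; aks_create_cmd is a hypothetical helper that just prints the command so you can eyeball it before running, and Standard_DS2_v2 is an example size that supports premium disks (check availability in your region):

```shell
# Build an az aks create command with an explicit node VM size,
# defaulting to a DS-series size that supports premium storage.
aks_create_cmd() {
  printf 'az aks create --resource-group %s --name %s --node-count 2 --node-vm-size %s\n' \
    "$1" "$2" "${3:-Standard_DS2_v2}"
}

aks_create_cmd rgakstwe akstwe
```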

Categories: Kubernetes

Built-in timestamp for row changes in Oracle

I tend to include created and last-changed timestamps in most tables I build. But how can you see when a record was changed if those columns are missing or cannot be trusted (for example, maintained by application code rather than triggers, and someone has updated the database directly)? It turns out that Oracle has a nifty feature for this: the ora_rowscn pseudo-column. It reports the system change number (SCN) for a row or block, and that can be converted into a timestamp. For example:


create table test_row_scn (
  t_id number,
  constraint pk_test_row_scn primary key (t_id)
) rowdependencies;

insert into test_row_scn values (1);
commit;

select scn_to_timestamp(ora_rowscn) from test_row_scn where t_id = 1;

The rowdependencies option makes the table track the SCN for each individual row, at a cost of 6 bytes per row. Without it the query still works, but it returns the SCN for the block rather than the row, which may be a bit misleading. Note also that scn_to_timestamp only works for fairly recent changes; for older SCNs it fails with ORA-08181.

Categories: Oracle