Performing LDAP Binds With NTLM

With either a Windows Active Directory server or a Samba4 Active Directory server, very little information is exposed to anonymous LDAP searches.  This means every search has to bind.  Simple binds are hopelessly insecure and in many cases not supported, for good reason.  But often one doesn't want to bother with Kerberos (great as Kerberos is) due to ticket expiration, keytab paths, etc...
One alternative that works well is to bind using NTLM.  All modern distributions should have the SASL NTLM mechanism available, although perhaps not installed by default.  To specify a bind mechanism use the "-Y" option, and if your username on the local machine differs from your domain username, specify the domain username with the upper-case "-U" option.  Specification of SASL bind parameters is entirely separate from the specification of the bind parameters for SIMPLE binds (which use "-D", etc...).

ldapsearch -Y ntlm -U fred -h dc1.example.com -b 'dc=example,dc=com' uid=fred
SASL/NTLM authentication started
Please enter your password: *****************

Text 1: An NTLM bind with ldapsearch, as user "fred"
If you see a message like:
ldap_sasl_interactive_bind_s: Unknown authentication method (-6)
    additional info: SASL(-4): no mechanism available: No worthy mechs found
 - then your SASL installation does not support NTLM.  Most likely you do not have the correct package installed.  On openSUSE the required package is cyrus-sasl-ntlm.  Distributions tend to package support for each SASL mechanism separately.
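The failure can also be on the server side: a directory server advertises the SASL mechanisms it supports in its rootDSE, which is readable anonymously even on Active Directory.  A quick way to check (the host name here is a placeholder for your own domain controller):

```shell
# Anonymously (-x) read the rootDSE and list the SASL mechanisms
# the server offers; NTLM must appear for the bind above to work.
ldapsearch -x -h dc1.example.com -b '' -s base supportedSASLMechanisms
```

If "supportedSASLMechanisms: NTLM" does not appear in the output, no amount of client-side package installation will help.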
It should be noted that NTLM is not a perfect, or perfectly secure, authentication solution; but it fills a gap in cases where Kerberos isn't applicable.  It is certainly superior to SIMPLE binding, if only because cumbersome LDAP DNs are not involved.


Deindexing A Table (PostgreSQL)

One of the key steps in tuning a database is to test indexing strategies and their effect on performance.  It is a cycle: 1. drop the indexes, 2. create new indexes, 3. update statistics, and 4. EXPLAIN the important queries.  Rinse and repeat.
The most annoying step in testing different sets of indexes is "1. drop the indexes" - as you need to know the name of every index to drop.  This generally means a large amount of tedious, RSI-inducing cut-n-paste, which also tends to rapidly become error prone.
There has got to be some way to expedite that process, right?  After all, the database engine knows the indexes that exist on a table.
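Indeed it does, and the catalog makes that information easy to see.  For example, the standard pg_indexes system view lists every index and its definition; the table name "job" here is just an illustration:

```sql
-- List every index defined on the table "job", with its definition.
SELECT indexname, indexdef
  FROM pg_indexes
 WHERE tablename = 'job';
```

This is the same information the procedure below walks through programmatically, via the lower-level pg_index and pg_class catalogs.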
Yep!  At least for PostgreSQL a stored procedure can be constructed that drops the indexes defined on a named table.
CREATE OR REPLACE FUNCTION deindex_table(table_name VARCHAR(255))
  RETURNS VOID AS $$
  DECLARE
      objname character varying;
  BEGIN
      FOR objname IN SELECT ci.relname
                       FROM pg_index i, pg_class ci, pg_class ct
                      WHERE i.indexrelid = ci.oid AND i.indrelid = ct.oid
                        AND i.indisprimary = false
                        AND ct.relname = table_name
      LOOP
          EXECUTE 'DROP INDEX ' || objname;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql
  COST 100;
With that stored procedure in place, preparing a table for a new set of indexes is as simple as:
SELECT deindex_table('job');
One caveat is that indexes automatically created by the engine to enforce constraints cannot be dropped; in particular this procedure skips any index involved in enforcing the table's primary key - but you almost certainly don't intend to change the primary key, so that may be a feature rather than a bug.

Note: Due to PostgreSQL restrictions it is not possible to execute a VACUUM or an ANALYZE statement from within a stored procedure.
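So step 3 of the cycle has to run as a separate top-level statement after the function returns.  A full round of the loop might look like the following - the index name and column here are hypothetical, stand-ins for whatever strategy is being tested:

```sql
SELECT deindex_table('job');                      -- 1. drop the old indexes
CREATE INDEX job_owner_idx ON job (owner);        -- 2. create a candidate index
ANALYZE job;                                      -- 3. update statistics (outside the function)
EXPLAIN SELECT * FROM job WHERE owner = 'fred';   -- 4. check the plan
```

Rinse and repeat, swapping out step 2 each time.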