New in Apache HTTP Server 2.4 – Authorization, FCGI Proxy, and Mod_SSL

New Authorization Containers

The authorization container directives <RequireAll>, <RequireAny> and <RequireNone> may be combined with each other and with the Require directive to express complex authorization logic.

The example below expresses the following authorization logic. In order to access the resource, the user must either be the superadmin user, or belong to both the admins group and the Administrators LDAP group and either belong to the sales group or have the LDAP dept attribute sales. Furthermore, in order to access the resource, the user must not belong to either the temps group or the LDAP group Temporary Employees.

<Directory /www/mydocs>
    <RequireAll>
        <RequireAny>
            Require user superadmin
            <RequireAll>
                Require group admins
                Require ldap-group cn=Administrators,o=Airius
                <RequireAny>
                    Require group sales
                    Require ldap-attribute dept="sales"
                </RequireAny>
            </RequireAll>
        </RequireAny>
        <RequireNone>
            Require group temps
            Require ldap-group cn=Temporary Employees,o=Airius
        </RequireNone>
    </RequireAll>
</Directory>

This is gonna be BIG! You can read the whole story in the Apache HTTP Server 2.4 documentation.

Core Enhancements

KeepAliveTimeout in milliseconds
It is now possible to specify KeepAliveTimeout in milliseconds.
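For example (value illustrative), the timeout can now be given with a ms suffix:

```apache
# Keep idle keep-alive connections open for 100 milliseconds
KeepAliveTimeout 100ms
```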
Simple MPM
Cleanroom MPM implementation with advanced thread pool management
Loadable MPMs
Multiple MPMs can now be built as loadable modules at compile time. The MPM of choice can be configured at run time.
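With MPMs built as loadable modules (assuming httpd was configured with --enable-mpms-shared; paths illustrative), the choice becomes a one-line config change:

```apache
# Load exactly one MPM; swap the line to change MPMs at run time
LoadModule mpm_event_module modules/mod_mpm_event.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so
```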

Module Enhancements

mod_ssl can now be configured to use an OCSP server to check the validation status of a client certificate. The default responder is configurable, along with the decision on whether to prefer the responder designated in the client certificate itself.
mod_ssl now also supports OCSP stapling, where the server pro-actively obtains an OCSP verification of its certificate and transmits that to the client during the handshake.
mod_ssl can now be configured to share SSL session data between servers through memcached.
mod_lua: Embeds the Lua language into httpd, for configuration and small business logic functions.
mod_proxy_fcgi: FastCGI protocol backend for mod_proxy.
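As a sketch of the new FastCGI backend (address and path illustrative), mod_proxy can now speak the fcgi:// scheme directly:

```apache
# Forward requests under /myapp/ to a FastCGI daemon on port 9000
ProxyPass /myapp/ fcgi://127.0.0.1:9000/var/www/myapp/
```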

Program Enhancements

fcgistarter – FastCGI daemon starter utility

Module Developer Changes

Check Configuration Hook Added
A new hook, check_config, has been added which runs between the pre_config and open_logs hooks. It also runs before the test_config hook when the -t option is passed to httpd. The check_config hook allows modules to review interdependent configuration directive values and adjust them while messages can still be logged to the console. The user can thus be alerted to misconfiguration problems before the core open_logs hook function redirects console output to the error log.
Expression Parser Added
We now have a general-purpose expression parser, whose API is exposed in ap_expr.h. This is adapted from the expression parser previously implemented in mod_include.
Authorization Logic Containers
Advanced authorization logic may now be specified using the Require directive and the related container directives, such as <RequireAll>, all provided by the mod_authz_core module.
Small-Object Caching Interface
The ap_socache.h header exposes a provider-based interface for caching small data objects, based on the previous implementation of the mod_ssl session cache. Providers using a shared-memory cyclic buffer, disk-based dbm files, and a memcache distributed cache are currently supported.
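mod_ssl's session cache is one consumer of this interface; the provider is selected in the configuration (path and size illustrative):

```apache
# shmcb: the shared-memory cyclic buffer provider
SSLSessionCache shmcb:/usr/local/apache2/logs/ssl_scache(512000)
```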

Full List of Security / Code Changes

Changes with Apache 2.2.15

  *) SECURITY: CVE-2009-3555
     mod_ssl: Comprehensive fix of the TLS renegotiation prefix injection
     attack when compiled against OpenSSL version 0.9.8m or later. Introduces
     the 'SSLInsecureRenegotiation' directive to reopen this vulnerability
     and offer unsafe legacy renegotiation with clients which do not yet
     support the new secure renegotiation protocol, RFC 5746.
     [Joe Orton, and with thanks to the OpenSSL Team]

  *) SECURITY: CVE-2009-3555
     mod_ssl: A partial fix for the TLS renegotiation prefix injection attack
     by rejecting any client-initiated renegotiations. Forcibly disable
     keepalive for the connection if there is any buffered data readable. Any
     configuration which requires renegotiation for per-directory/location
     access control is still vulnerable, unless using OpenSSL >= 0.9.8l.
     [Joe Orton, Ruediger Pluem, Hartmut Keil ]

  *) SECURITY: CVE-2010-0408
     mod_proxy_ajp: Respond with HTTP_BAD_REQUEST when the body is not sent
     when request headers indicate a request body is incoming.

  *) SECURITY: CVE-2010-0425
     mod_isapi: Do not unload an isapi .dll module until the request
     processing is completed, avoiding orphaned callback pointers.
     [Brett Gervasoni , Jeff Trawick]

  *) SECURITY: CVE-2010-0434
     Ensure each subrequest has a shallow copy of headers_in so that the
     parent request headers are not corrupted.  Elimiates a problematic
     optimization in the case of no request body.  PR 48359
     [Jake Scott, William Rowe, Ruediger Pluem]

  *) mod_reqtimeout: New module to set timeouts and minimum data rates for
     receiving requests from the client. [Stefan Fritsch]

  *) mod_proxy_ajp: Really regard the operation a success, when the client
     aborted the connection. In addition adjust the log message if the client
     aborted the connection. [Ruediger Pluem]

  *) mod_negotiation: Preserve query string over multiviews negotiation.
     This buglet was fixed for type maps in 2.2.6, but the same issue
     affected multiviews and was overlooked.
     PR 33112 [Joergen Thomsen ]

  *) mod_cache: Introduce the thundering herd lock, a mechanism to keep
     the flood of requests at bay that strike a backend webserver as
     a cached entity goes stale. [Graham Leggett]

  *) mod_proxy_http: Make sure that when an ErrorDocument is served
     from a reverse proxied URL, that the subrequest respects the status
     of the original request. This brings the behaviour of proxy_handler
     in line with default_handler. PR 47106. [Graham Leggett]

  *) mod_log_config: Add the R option to log the handler used within the
     request. [Christian Folini ]

  *) mod_include: Allow fine control over the removal of Last-Modified and
     ETag headers within the INCLUDES filter, making it possible to cache
     responses if desired. Fix the default value of the SSIAccessEnable
     directive. [Graham Leggett]

  *) mod_ssl: Fix a potential I/O hang if a long list of trusted CAs
     is configured for client cert auth. PR 46952.  [Joe Orton]

  *) core: Fix potential memory leaks by making sure to not destroy
     bucket brigades that have been created by earlier filters.
     [Stefan Fritsch]

  *) mod_authnz_ldap: Add AuthLDAPBindAuthoritative to allow Authentication to
     try other providers in the case of an LDAP bind failure.
     PR 46608 [Justin Erenkrantz, Joe Schaefer, Tony Stevenson]

  *) mod_proxy, mod_proxy_http: Support remote https proxies
     by using HTTP CONNECT.
     PR 19188.  [Philippe Dutrueux , Rainer Jung]

  *) worker: Don't report server has reached MaxClients until it has.
     Add message when server gets within MinSpareThreads of MaxClients.
     PR 46996.  [Dan Poirier]

  *) mod_ssl: When extracting certificate subject/issuer names to the
     SSL_*_DN_* variables, handle RDNs with duplicate tags by
     exporting multiple variables with an "_n" integer suffix.
     PR 45875.  [Joe Orton, Peter Sylvester ]

  *) mod_authnz_ldap: Failures to map a username to a DN, or to check a user
     password now result in an informational level log entry instead of
     warning level.  [Eric Covener]

  *) core: Preserve Port information over internal redirects
     PR 35999 [Jonas Ringh ]

  *) mod_filter: fix FilterProvider matching where "dispatch" string
     doesn't exist.
     PR 48054 []

  *) Build: fix --with-module to work as documented
     PR 43881 [Gez Saunders ]

  *) mod_mime: Make RemoveType override the info from TypesConfig.
     PR 38330. [Stefan Fritsch]

  *) mod_proxy: unable to connect to a backend is SERVICE_UNAVAILABLE,
     rather than BAD_GATEWAY or (especially) NOT_FOUND.
     PR 46971 [evanc]

  *) mod_charset_lite: Honor 'CharsetOptions NoImplicitAdd'.
     [Eric Covener]

  *) mod_ldap: If LDAPSharedCacheSize is too small, try harder to purge
     some cache entries and log a warning. Also increase the default
     LDAPSharedCacheSize to 500000. This is a more realistic size suitable
     for the default values of 1024 for LdapCacheEntries/LdapOpCacheEntries.
     PR 46749. [Stefan Fritsch]

  *) mod_disk_cache, mod_mem_cache: don't cache incomplete responses,
     per RFC 2616, 13.8.  PR15866.  [Dan Poirier]

  *) mod_rewrite: Make sure that a hostname:port isn't fully qualified if
     the request is a CONNECT request. PR 47928
     [Bill Zajac ]

  *) mod_cache: correctly consider s-maxage in cacheability
     decisions.  [Dan Poirier]

  *) core: Return APR_EOF if request body is shorter than the length announced
     by the client. PR 33098 [ Stefan Fritsch ]

  *) mod_rewrite: Add scgi scheme detection.  [André Malo]

  *) mod_mime: Detect invalid use of MultiviewsMatch inside Location and
     LocationMatch sections.  PR 47754.  [Dan Poirier]

  *) ab, mod_ssl: Restore compatibility with OpenSSL < 0.9.7g.
     [Guenter Knauf]

PHP Caching and Acceleration with XCache

Anyone who runs a dedicated server for web hosting will tell you that a great way to decrease the load on your server and the page load time is to use a PHP cache such as APC or eAccelerator. While the largest noticeable improvements are for sites that receive a lot of traffic or are under heavy load, any site, large or small, can benefit from a PHP cache. That said, in addition to the two caches mentioned above, a new player has recently entered the market: XCache.

I first started using APC about 2 years ago when the load on one of my servers was high enough that it was affecting load times and costing me user traffic. I chose APC over eAccelerator because it was a bit easier to install (at the time) and because APC had a reputation for being a bit faster than eAccelerator. Shortly thereafter I noticed my httpd processes segfaulting, and a bit of research showed that APC had a bit of a record for instability under heavy load. With that in mind, I took the slight performance hit and installed eAccelerator (which is still way faster than using nothing at all).

Up until today, I was still using eAccelerator on all of my servers. However, a post on the forums prompted me to give XCache, the new PHP accelerator from the maker of lighttpd, a try. I’ve got to say, while I’ve only been using it for about 6 hours at this point, it blows eAccelerator out of the water, especially once you enable multiple caches (which benefits SMP systems).


If you’re interested in some benchmarks of XCache, eAccelerator, APC, etc., then check out the Five Opcode Cache Comparison on PHP on Fire.


Accessing a HostGator SVN repository via SVN+SSH on Windows

This information should be helpful to anyone trying to access an svn repository stored on a remote (shared) server which does not expose an svn server.

My host is HostGator (good speeds, reliable ssh, cgi-only, MyISAM-only, decent support, non-existent knowledgebase). HostGator runs SSH on port 2222, which presents a few problems when trying to use traditional methods to connect to an SVN repository via SSH.

For these steps you will need PuTTY. Just get the whole suite.
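One way to cope with the non-standard port (an untested sketch; the tunnel name, username, host, and paths are illustrative) is to define a custom tunnel scheme in Subversion's config file (%APPDATA%\Subversion\config on Windows) that invokes PuTTY's plink with port 2222:

```ini
[tunnels]
; "svn+hg://" URLs will now tunnel through plink on port 2222
hg = plink.exe -batch -P 2222 -l your_username
```

A working copy can then be checked out with: svn checkout svn+hg://yourdomain.com/home/your_username/svnrepo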

X-Content-Type-Options: nosniff header

Over the past two months, we’ve received significant community feedback that using a new attribute on the Content-Type header would create a deployment headache for server operators. To that end, we have converted this option into a full-fledged HTTP response header.  Sending the new X-Content-Type-Options response header with the value nosniff will prevent Internet Explorer from MIME-sniffing a response away from the declared content-type.

For example, given the following HTTP-response:

HTTP/1.1 200 OK
Content-Length: 108
Date: Thu, 26 Jun 2008 22:06:28 GMT
Content-Type: text/plain;
X-Content-Type-Options: nosniff

<body bgcolor="#AA0000">
This page renders as HTML source code (text) in IE8.

Browsers sniff mime types of HTTP responses, initially because page authors frequently don’t get them right* and now because browsers have done it historically.

The worst instance related to mime sniffing is an old IE bug. As I understand it their sniffer tried some image formats and then HTML; then when they added PNG sniffing it was added to the sniff list after HTML, either by mistake or to maintain compatibility with pages that were currently being sniffed as HTML. The result of this is that even valid PNG images can be sniffed as HTML, converting a user-uploadable image into a Javascript (XSS) vector. The Chromium mime sniffer's comments (which are quite readable, and tabulate various browsers’ behaviors) describe this as a “dangerous mime type”.

But there are plenty of other ways that sniffing can screw you as a site author. Your only defenses if you’re building a site are:

  • either make sure user-uploaded images are on a different origin than your site’s cookies;
  • or set the Content-disposition: attachment header, preventing people from displaying the image in their browser.

I believe this bug is why you cannot view images attached to gmail messages — if you click “view image” in gmail you instead get an HTML page with an <img> tag, and if you right-click on that image and pick “view image” you’ll get it served with the attachment header.

To solve this mess, IE introduced the X-Content-Type-Options: nosniff header, which means “don’t sniff the mime type”. It looks like a reasonable workaround to me: it lets new pages opt into sane behavior without breaking old ones. Chromium added support for it.

It sounded good to developers of a Google-internal HTTP server as well; they added it by default to all responses. And then the bug reports started coming in: “Why does my page render in all browsers but Chromium?” It turned out many of these sites were sending no Content-type header, which, when coupled with the nosniff header, meant Chromium would pick the default of application/octet-stream, triggering a download box.

The fix is to match IE (r8559) for this corner case, which is to instead default to text/plain; I made wisecracks about adding an X-Content-Type-Options-Options: no-really-none-of-these-mime-shenanigans header. Adam (master of content-type sniffing, and I believe editor of the HTML5 sniffing spec) also wrote r8257. This collects stats (aggregated anonymized and only from users who have opted in) on what fraction of pages that we normally would’ve sniffed but were instead blocked by the header.

* In fairness, the greater problem is that page authors sometimes don’t control HTTP headers. They’re frequently defined by server configuration, which often requires root on the server or at least a lot more technical know-how than “click on the upload button in your website creator program”

Proxy Authentication with Squid

How does Proxy Authentication work in Squid?

Users will be authenticated if squid is configured to use proxy_auth ACLs.

Browsers send the user’s authentication credentials in the Authorization request header.

If Squid gets a request and the http_access rule list gets to a proxy_auth ACL, Squid looks for the Authorization header. If the header is present, Squid decodes it and extracts a username and password.

If the header is missing, Squid returns an HTTP reply with status 407 (Proxy Authentication Required). The user agent (browser) receives the 407 reply and then prompts the user to enter a name and password. The name and password are encoded, and sent in the Authorization header for subsequent requests to the proxy. Also see this example Authorization Header from .htaccess files.

NOTE: The name and password are encoded using “base64” (see section 11.1 of RFC 2616). However, base64 is a binary-to-text encoding only; it does NOT encrypt the information it encodes. This means that the username and password are essentially “cleartext” between the browser and the proxy. Therefore, you probably should not use the same username and password that you would use for your account login.
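You can see for yourself that base64 offers no secrecy (the credentials here are made up):

```shell
# Encode credentials as a browser would for "Proxy-Authorization: Basic ..."
printf 'user:pass' | base64           # → dXNlcjpwYXNz
# Anyone on the wire can reverse it just as easily
printf 'dXNlcjpwYXNz' | base64 -d     # → user:pass
```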

Authentication is actually performed outside of the main Squid process. When Squid starts, it spawns a number of authentication subprocesses. These processes read usernames and passwords on stdin, and reply with “OK” or “ERR” on stdout. This technique allows you to use a number of different authentication protocols (called “schemes” in this context). When multiple authentication schemes are offered by the server (Squid in this case), it is up to the User-Agent to choose one and authenticate using it. By RFC it should choose the safest one it can handle; in practice Microsoft Internet Explorer usually chooses the first one it has been offered that it can handle, and Mozilla browsers are bug-compatible with the Microsoft behaviour in this field.

The Squid source code comes with a few authentication backends (“helpers“) for Basic authentication. These include:

  • LDAP: Uses the Lightweight Directory Access Protocol
  • NCSA: Uses an NCSA-style username and password file.
  • MSNT: Uses a Windows NT authentication domain.
  • PAM: Uses the Unix Pluggable Authentication Modules scheme.
  • SMB: Uses a SMB server like Windows NT or Samba.
  • getpwam: Uses the old-fashioned Unix password file.
  • SASL: Uses SASL libraries.
  • mswin_sspi: Windows native authenticator
  • YP: Uses the NIS database
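The stdin/stdout protocol these helpers implement is simple enough to sketch. This toy check (hard-coded, hypothetical credentials; real deployments use the bundled helpers above) shows the OK/ERR contract:

```shell
# Decide one "username password" line the way a Basic helper must:
# print OK to accept the credentials, ERR to reject them.
check_credentials() {
  if [ "$1" = "alice" ] && [ "$2" = "secret" ]; then
    echo OK
  else
    echo ERR
  fi
}

# A real helper would loop over the lines Squid writes to its stdin:
#   while read -r user pass; do check_credentials "$user" "$pass"; done
```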

In addition Squid also supports the NTLM, Negotiate and Digest authentication schemes, which provide more secure authentication methods in that the password is not exchanged in plain text over the wire. Each scheme has its own set of helpers and auth_param settings. Note that helpers for different authentication schemes use different protocols to talk to Squid, so they can’t be mixed.

For information on how to set up NTLM authentication see NTLM config examples.

In order to authenticate users, you need to compile and install one of the supplied authentication modules found in the helpers/basic_auth/ directory, one of the others, or supply your own.

You tell Squid which authentication program to use with the auth_param option in squid.conf. You specify the name of the program, plus any command line options if necessary. For example:

auth_param basic program /usr/local/squid/bin/ncsa_auth /usr/local/squid/etc/passwd

How do I use authentication in access controls?

Make sure that your authentication program is installed and working correctly. You can test it by hand.

Add some proxy_auth ACL entries to your squid configuration. For example:

acl foo proxy_auth REQUIRED
http_access allow foo
http_access deny all

The REQUIRED term means that any authenticated user will match the ACL named foo.

Squid allows you to provide fine-grained controls by specifying individual user names. For example:

acl foo proxy_auth REQUIRED
acl bar proxy_auth lisa sarah frank joe
acl daytime time 08:00-17:00
http_access allow bar
http_access allow foo daytime
http_access deny all

In this example, users named lisa, sarah, joe, and frank are allowed to use the proxy at all times. Other users are allowed only during daytime hours.

How do I ask for authentication of an already authenticated user?

If a user is authenticated at the proxy you cannot “log out” and re-authenticate. The user usually has to close and re-open the browser windows to be able to re-login at the proxy. A simple configuration will probably look like this:

acl my_auth proxy_auth REQUIRED
http_access allow my_auth
http_access deny all

But there is a trick which can force the user to authenticate with a different account in certain situations. This happens if you deny access with an authentication related ACL last in the http_access deny statement. Example configuration:

acl my_auth proxy_auth REQUIRED
acl google_users proxy_auth user1 user2 user3
acl google dstdomain
http_access deny google !google_users
http_access allow my_auth
http_access deny all

In this case, if the user requests a site matching the google ACL, the first http_access deny line matches and triggers re-authentication unless the user is one of the listed users. Remember: it’s always the last ACL on an http_access line that “matches”. If the matching ACL deals with authentication, a re-authentication is triggered. If you didn’t want that, you would need to switch the order of ACLs so that you get http_access deny !google_users google.

You might also run into an authentication loop if you are not careful. Assume that you use LDAP group lookups and want to deny access based on an LDAP group (e.g. only members of a certain LDAP group are allowed to reach certain web sites). In this case you may trigger re-authentication although you don’t intend to. This config is likely wrong for you:

acl ldapgroup-allowed external LDAP_group PROXY_ALLOWED

http_access deny !ldapgroup-allowed
http_access allow all

The http_access deny line would force the user to re-authenticate time and again if he/she is not a member of the PROXY_ALLOWED group. This is perhaps not what you want; you wanted to deny access to non-members. So you need to rewrite this http_access line so that the last ACL matched has nothing to do with authentication. This is the correct example:

acl ldapgroup-allowed external LDAP_group PROXY_ALLOWED

http_access deny !ldapgroup-allowed all
http_access allow all

This way the http_access line still matches. But it’s the all ACL which is now last in the line. Since all is a static ACL (that always matches) and has nothing to do with authentication you will find that the access is just denied.

More Info

Example .htaccess

Send Custom Headers

Header set P3P "policyref=\"\""
Header set X-Pingback ""
Header set Content-Language "en-US"
Header set Vary "Accept-Encoding"

Blocking based on User-Agent Header

SetEnvIfNoCase ^User-Agent$ .*(craftbot|download|extract|stripper|sucker|ninja|clshttp|webspider|leacher|collector|grabber|webpictures) HTTP_SAFE_BADBOT
SetEnvIfNoCase ^User-Agent$ .*(libwww-perl|aesop_com_spiderman) HTTP_SAFE_BADBOT
Deny from env=HTTP_SAFE_BADBOT

proxy_auth acl causing challenge loop
> Well, I really prefer the old behaviour, so I hope the behaviour is not
> hardcoded, but configurable.

It’s not hardcoded, instead it is dependent on how your http_access rules
are constructed.

Squid prompts for login credentials if the user is denied access by an
authentication related acl (proxy_auth, proxy_auth_regex, or an external
acl that uses login information).
http_access deny someacl authacl
prompts for new credentials if matched (denied by authacl)
http_access deny authacl someacl
does not prompt for new credentials (denied by someacl)

Further Resources

  1. smb.conf man page
  2. smbclient man page
  3. ntlm_auth man page
  4. Configuring Squid Proxy To Authenticate With Active Directory
  5. Samba & Active Directory
  6. The Linux-PAM System Administrators’ Guide

Original Source: ProxyAuthentication © Creative Commons Attribution Sharealike 2.5 License

The Camping Server for Apache + FastCGI

  1. Install Apache 2.
  2. Install mod_fastcgi.
  3. Add to Apache’s httpd.conf:
     AddHandler fastcgi-script rb
     ScriptAlias / /usr/local/www/data/dispatch.rb/
  4. In dispatch.rb:
     require 'rubygems'
     require 'camping/fastcgi'
     Camping::Models::Base.establish_connection :adapter => 'sqlite3',
       :database => "/tmp/camping.db"

Serving One File

The above setup will serve a whole directory, just like TheCampingServer. If you only want to serve one app (at the root) change the last line in dispatch.rb to point to a single file.


Mounting at a Subdirectory

You can certainly use ScriptAlias to attach the Camping app to a subdirectory, rather than root. If you are using URL() and R() in your code, the paths will change accordingly.

 ScriptAlias /myapp /usr/local/www/data/dispatch.rb/

FastCGI .htaccess

This is a basic FastCGI .htaccess file. The last line is the most important.

AddHandler fastcgi-script .fcgi 

Options +FollowSymLinks +ExecCGI  

RewriteEngine On  
RewriteRule ^$ index.html [QSA] 
RewriteRule ^([^.]+)$ $1.html [QSA] 
RewriteCond %{REQUEST_FILENAME} !-f 
RewriteRule ^(.*)$ dispatch.fcgi/$1 [QSA,L]


  • Make sure your dispatch.fcgi is marked as executable! Run “chmod 755 dispatch.fcgi” if you’re not sure.
  • The second part of GEM_PATH should be your host’s installed gems location; the example below is taken from Dreamhost.


ENV['GEM_PATH'] = '/path/to/my/gems:/usr/lib/ruby/gems/1.8'
ENV['GEM_HOME'] = '/path/to/my/gems'

Dir.chdir '/path/to/my_app'

require 'my_app'

class ApacheFixer
  def initialize(app); @app = app; end

  def call(env)
    # Strip the dispatcher from the path so the app sees clean URLs;
    # if there is no "?", index returns nil and the whole URI is kept.
    env['SCRIPT_NAME'] = '/'
    env['PATH_INFO'] = env['REQUEST_URI'][0..(env['REQUEST_URI'].index('?') || 0) - 1]
    @app.call(env)
  end
end

Using CGI

If you’re having issues with FastCGI, try to get it working with CGI first. To do this, change the examples above:

  • In dispatch.fcgi, change “Rack::Handler::FastCGI” to “Rack::Handler::CGI”.
  • Rename dispatch.fcgi to dispatch.cgi.
  • Update the last line of .htaccess to point to dispatch.cgi instead of dispatch.fcgi.

Notes for Dreamhost

  • If you’re having trouble with timeouts, try getting this to work for CGI first. If CGI works, then FastCGI should work, and Dreamhost is just being stupid. Change it back to use FastCGI, and come back later. This worked for me a couple times, and I place the blame on Dreamhost.
  • Set up your own gem path that you can install to and edit manually.
  • If you followed the Dreamhost guide to making your own gem path, your gem path would be /home/username/.gems.
  • If you’re trying to install gems remotely, Dreamhost will probably kill the process before it finishes. For me, using the ‘nice’ command didn’t help. Get the gem files, scp them to your server, and install them locally (i.e. “gem install activesupport-2.1.0.gem”). This means installing dependencies in turn (activesupport, markaby, and metaid before camping).

Setup Zope behind Apache with SSL

Accessing CGI environment variables created by mod_ssl from within Plone

This way you will get HTTP_SSL_CLIENT_VERIFY, HTTP_SSL_CLIENT_S_DN_CN and HTTP_SSL_CLIENT_S_DN_Email environment variables in the request object.

Posted by mustapha


You need to set up Zope behind Apache with SSL, and you need to access some or all of the CGI environment variables set by mod_ssl from within Plone. How do you do it?

Setting up Zope behind Apache with SSL is not the hard part. I’ll give an example anyway of setting up an Apache virtual host with SSL.

Apache doesn’t forward the mod_ssl CGI environment variables to Zope. Why? Because Zope itself doesn’t support SSL.

When you setup apache with SSL as proxy for your Plone site, it (apache) receives HTTPS-requests from the outside but it sends HTTP-requests to Zope. That’s why you don’t get the SSL headers through to the proxied Plone site.


How to generate your certificate authority, the server certificate and a client certificate to test the setup is out of the scope of this post. There are plenty of guides that can help with that; just copy/paste the commands if you don’t understand them, and you will end up with all the certificates you need.

Apache VirtualHost:

Here is an example of setting a VirtualHost with SSL:

<VirtualHost *:443>
  <LocationMatch "^[^/]">
      Deny from all
  </LocationMatch>

  SSLEngine on
  SSLProtocol all -SSLv2
  SSLCertificateFile       /etc/apache2/conf.d/server.cert
  SSLCertificateKeyFile    /etc/apache2/conf.d/server.key
  SSLCertificateChainFile  /etc/apache2/conf.d/authority.crt
  SSLCACertificateFile     /etc/apache2/conf.d/authority.crt

  SSLVerifyClient optional
  SSLVerifyDepth 1
  SSLOptions +stdEnvVars

  SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown 

  RewriteEngine on
  RewriteRule ^/(.*)$1 [P,L]
</VirtualHost>

The most important directive for our problem is SSLOptions +stdEnvVars. This mod_ssl directive creates the standard set of SSL-related CGI/SSI environment variables. Now, how do we forward these variables over HTTP to Zope?

Forwarding the SSL variables:

1. The mod_headers way:

The easiest, though neither flexible nor secure, way is to use mod_headers directives.
Be sure that mod_headers is installed and you have something like this line in your httpd.conf file:

LoadModule headers_module /usr/lib/apache2/modules/

Now, just forward all the variables you need:

<VirtualHost *:443>
 <LocationMatch "^[^/]">
       Deny from all
 </LocationMatch>

  SSLEngine on
  SSLProtocol all -SSLv2
  SSLCertificateFile       /etc/apache2/conf.d/server.cert
  SSLCertificateKeyFile    /etc/apache2/conf.d/server.key
  SSLCertificateChainFile  /etc/apache2/conf.d/authority.crt
  SSLCACertificateFile     /etc/apache2/conf.d/authority.crt
  SSLVerifyClient optional
  SSLVerifyDepth 1
  SSLOptions +stdEnvVars

  SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown 
  RequestHeader set SSL_CLIENT_S_DN_CN %{SSL_CLIENT_S_DN_CN}e
  RequestHeader set SSL_CLIENT_S_DN_Email %{SSL_CLIENT_S_DN_Email}e

  RewriteEngine on
  RewriteRule ^/(.*)$1 [P,L]
</VirtualHost>


Generate and Use your own SSL Key in Apache

Do It Yourself SSL Guide

By Stephen Philbin

There are many people who want or need to have the connection between the browser and the Web server encrypted, but haven’t been able to set it up. This guide is intended to help people with the typical Apache on Linux setup to make encrypted connections available with a minimum of fuss, and if the encrypted connection isn’t for a commercial purpose, to do so without spending a penny.


Sometimes hosting providers block the user from setting it up because the user needs to upgrade (pay more money for) the hosting account. Another possibility is that the hosting provider doesn’t want users to have any hands-on control, regardless of which hosting package you have with them. If you have a package that allows full root access or something similar, you’re unlikely to have any problems; however, it’s not always necessary to have full root access to be able to set it up. This article includes alternatives to the hands-on approach you would use when logged in as root, but the best I can offer are general pointers, because most hosting providers offer some sort of control panel for administrative tasks, and that access can vary widely from one hosting provider to another.

Key Generation

As some of you might already know, a certificate is needed to enable an encrypted connection. The connection can be encrypted using the Secure Sockets Layer (SSL) or the Transport Layer Security (TLS) mechanism, but you don’t need to worry about which will be used because that will be agreed upon by the browser and Apache. Before we can obtain a certificate file, we must first generate a key file.

To generate a key file whilst logged in as root via Secure Shell (SSH), you need to enter the following command (or a variant of it that’s tailored to your preferences.):

openssl genrsa -out keyfilename.pem 2048

I’ll explain each part of the command so that, if you’re forced to use a control panel provided by your hosting provider, you’ll have a decent idea of what to look for and what to do. Some control panels hide this step from the user and combine key generation with certificate generation, so if you can’t find any option for generating a key but do have an option for generating certificates, don’t panic. If you logged in as root, issued this command over SSH, and got a message back saying something like bash: openssl: command not found, then the OpenSSL program isn’t installed and you either need to install it yourself or have your hosting provider install it for you.

The first part of the command is the name of the program we’re using: OpenSSL. This could be something else to look for on a control panel if you can’t find keys or certificates. Either OpenSSL or perhaps just SSL. TLS might be another possibility, but an unlikely one.

Next up is the genrsa command, which tells OpenSSL which type of key to generate. The two most popular types of keys are DSA and RSA. DSA keys are used for digital signatures and aren’t used for encryption; RSA keys can be used for both digital signatures and encryption. Here, we need to generate an RSA key, and you should look out for this option if you’re using a control panel to generate the key.

The next two parts are actually a single instruction to OpenSSL. The -out parameter indicates where the file should be placed and what its name should be. When issuing this command via SSH, I recommend using an absolute pathname such as /usr/local/apache2/apache_key.pem so you know exactly where the key is once it’s been created. If you’re generating the key through a control panel, look for an option for specifying where it should be placed, or for a note telling you where it will be placed. Regardless of which method you use, make sure that it isn’t placed in a directory from which Apache serves Web pages.

The last option of the command is the size of the key in bits. I use 2048 because it’s the recommended size based on current technology. You can increase the number to make the key more secure if you prefer, but this means you might take a performance hit when using SSL. A Certificate Authority (CA) might also require that you use a size specified by them, but you don’t need to worry about CAs unless you intend to use SSL for commercial (or similar) purposes.

Another noteworthy option that’s not used in the command given above is the -des3 option. It’s used to add a protecting password to the key. This might sound like a good thing, but for the purposes of SSL in Apache, it’s not. If you were to use this option, someone would have to input this password every time Apache is started, rather than the server being able to read the key unattended. If you see an option for this in a control panel section for making keys, don’t use it.
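
As a concrete sketch of the whole key step (the file path is a placeholder; on a real server pick a location outside the web tree), the following generates a passphrase-free key and then asks OpenSSL to sanity-check it:

```shell
# Generate a 2048-bit RSA key with no -des3 passphrase.
# /tmp/apache_key.pem is a placeholder path for this illustration.
openssl genrsa -out /tmp/apache_key.pem 2048

# Ask OpenSSL to verify the key's internal consistency.
openssl rsa -in /tmp/apache_key.pem -check -noout
```

If the key is well formed, the second command reports "RSA key ok".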

Obtaining The Certificate

Depending on what you want to use SSL for, and whether or not you’re going to pay for your certificate, you’ll use one of two different methods to obtain it. If you don’t want to pay for your certificate and you’re not bothered about a user’s browser presenting them with a warning that the certificate is untrustworthy until they tell their browser otherwise, you can create your own self-signed certificate. Such a certificate isn’t of much use for online transactions because your customers won’t have any confidence in your security, but it’s perfectly fine for personal use. If you want a certificate that your customers can use without warnings, you need to have a widely trusted CA sign your certificate for you. After you issue one of the commands to begin the process of obtaining either type of certificate, you also need to provide the information that will be contained in the certificate. I’ll explain each question that might be asked later on, but for now, you’ll learn about the commands first.

Obtaining A Normal Certificate

Obtaining a certificate similar to those seen on most commercial sites (where they are automatically trusted by browsers) requires two steps. The first step will be performed by you, but you’re not able to perform the second step; a CA (such as Verisign or GoDaddy) performs that one. The first step is to create what’s called a Certificate Signing Request (CSR). A CSR is a file that, once signed by a CA, will become your certificate. Here’s the command to create it:

openssl req -new -key keyfilename.pem -out certfilename.csr

Again, I’ll give you some information on each part of the command so you can translate this into actions in your control panel or just modify it if you want to change something, but it’s unlikely you’ll want to change anything other than the file names.

The first new part of the command is req. This indicates that we intend to use CSR management. If you’re using a control panel and you can’t find anything about a certificate (request) option, try looking for something like CSR (management) instead. The -new option is an obvious one: it simply means that we’re creating a new CSR rather than doing something to an existing one. The -key option specifies the location of the key file used with the certificate. You must alter this option to point to the key file that you generated earlier in this guide.

If you’re using a control panel, you might be given a field in a form in which to specify the location of the key; if so, do that. However, some popular control panels ask you to copy and paste the key into a text area. How you get the key into your clipboard for pasting will depend on what you can or cannot do with your host. In my experience, the method most likely to be available is to copy the key file to your computer and then open it in a text editor. You should use the most secure transfer method available to you, but if you’re having to do things this way your options are probably quite limited. Double-clicking the key file on a Windows PC will almost certainly cause Windows to tell you that it doesn’t know what to do with the file; instead, open the file with Notepad. When I opened a test key in Notepad on Windows XP, the key was presented in text form, but spread over just two lines. Depending on your control panel, it might be OK to paste in the key as Notepad presents it, but you might have to make some changes after you paste it in to make it display correctly. The following is a demonstration of how a 2048 bit key is often represented in text form:

    -----BEGIN RSA PRIVATE KEY-----
    (25 lines of base64-encoded text, 64 characters per line except the last)
    -----END RSA PRIVATE KEY-----
As you can see, it’s a block of text with start and end markers on lines of their own at the beginning and end of the key. The main key text appears as 25 lines that, with the exception of the last line, are 64 characters long. A key that has fewer bits will have fewer lines and a key with more bits will have more lines, but the line length stays the same.
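
If you want to confirm that wrapping for yourself, here is a quick check (the path is a placeholder, and the exact number of body lines varies with the key size and the PEM format your OpenSSL version writes, but the 64-character wrapping holds):

```shell
# Generate a throwaway key, then print the distinct line lengths of
# its base64 body, excluding the BEGIN/END marker lines. Every line
# except the last should be 64 characters long.
openssl genrsa -out /tmp/demo_key.pem 2048
grep -v -- '-----' /tmp/demo_key.pem | awk '{ print length($0) }' | sort -n -u
```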

The final part of the command, -out, serves the same purpose here as it did with the command we used to generate the key. Follow the same guideline of giving an absolute pathname (if possible) so you know exactly where it’s going to be placed.

Once you have your CSR file you need to find a CA to hand it over to for the second step: signing it. After they’ve done whatever checks they deem necessary, they’ll then sign your CSR and give you your new certificate.
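
If you would rather not answer the certificate questions interactively, OpenSSL’s -subj option lets you supply them on the command line. A sketch, with placeholder file names and subject details:

```shell
# Generate a key, then create a CSR without any interactive prompts;
# -subj supplies the details that would otherwise be asked for.
openssl genrsa -out /tmp/apache_key.pem 2048
openssl req -new -key /tmp/apache_key.pem -out /tmp/apache_cert.csr \
    -subj "/C=GB/O=Example Ltd/CN=www.example.com"

# Display the subject recorded in the request, to confirm it worked.
openssl req -in /tmp/apache_cert.csr -noout -subject
```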

Obtaining A Free Certificate

If you want to create a certificate of your own without having to involve a CA, you can perform both steps by yourself. This means that the user’s browser will present them with a huge “This certificate is self-signed!” warning, but if this doesn’t concern you, then it doesn’t matter. Self-signed certificates can be a cheap alternative to CA-signed certificates when you’re testing things out and experimenting, or if you’re the only person that needs a secure connection to your host. They can also be good for allowing regular users to use secured connections if they know they can trust you and you warn them about the certificate warnings in advance.

Here, the process of creating the CSR and having it signed are merged into one so you don’t create the CSR file. Instead, you just generate the certificate file directly. The following is a command to generate a self-signed certificate:

openssl req -new -x509 -key keyfilename.pem -out certfilename.pem -days 365

As you can see, it’s similar to the other command for creating a CSR that you would have signed by a CA, but it has two more options than the previous one. The first of the extra options is the -x509 option. This is the option that tells OpenSSL to output a self-signed certificate instead of a CSR. If you’re using a control panel to create a self-signed certificate be sure to look for, and use, an x509 option. The second of the extra options is the -days option. This option simply specifies how long (in days) the certificate is valid. Once the number of days has passed, you should generate a new certificate file and dispose of the old one.
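
Putting the pieces together, here is a sketch (file names and subject details are placeholders) that creates a self-signed certificate without prompts and then displays its subject and validity dates:

```shell
# Key, then self-signed certificate; -x509 makes this a certificate
# rather than a CSR, and -days 365 makes it valid for one year.
openssl genrsa -out /tmp/apache_key.pem 2048
openssl req -new -x509 -key /tmp/apache_key.pem -out /tmp/apache_cert.pem \
    -days 365 -subj "/CN=www.example.com"

# Inspect the result: subject plus notBefore/notAfter dates.
openssl x509 -in /tmp/apache_cert.pem -noout -subject -dates
```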

.htaccess tutorial

Table of Contents

- Using Dynamic Control Files
- Global Dynamic Configuration File
- Local Dynamic Configuration Files
- Macro Expansion
- Directive Ordering
- Section Directives
- Wildcards and Extended Regular Expressions
- Nesting Section Directives
- The Section Directives
- Directory
- Location
- Files
- Limits
- RemoteIP
- RemoteHost
- Header Directives
- Header
- AddType
- ForceType
- Redirect
- Directory Listing Control Directives
- IndexIgnore
- Error Document Control
- ErrorDocument
- Access Control Directives
- User and Domain Access Control
- Host Based Access Control
- Domain Names and IP Numbers
- Specifying Hosts
- The “order” Directive
- Creating Usable Policies
- Authentication Access Control
- How Authenticated Access Control works
- User Databases – includes a Win95 and NT version to download for free
- Group Files
- Authenticated Dynamic Control Directives
- AuthType BASIC
- Using Authentication and Access Control Together
- Restriction Directives
- Directory Directives
- Miscellaneous Directives

Using Dynamic Control Files

Dynamic control files allow you to change the behaviour of the server while it is running, by altering simple text files stored within the document tree. This allows server configuration to be tailored for each directory, and can be used to delegate server control to those responsible for the content.

Dynamic control files are plain text files which store the configuration details for segments of the document tree. The files contain server directives, followed by optional parameters. White space characters are used as separators and each directive should start a new line. Everything which follows a “#” character is regarded as a comment and is ignored by the server. Directives can be nested unless otherwise stated.

By default, dynamic configuration files use the name “.htaccess” although this is configurable.

Global Dynamic Configuration File

The global dynamic configuration file is treated as a special case. It can specify the settings for any file, directory or URL associated with the server. The server will apply the configuration modifications defined in the global file to all virtual server requests. These settings can either be absolute or can be overridden by local dynamic configuration files, depending on the configuration. The global dynamic control file should reside in a location outside of the document root; its location can be set from the administration server.

Local Dynamic Configuration Files

Local dynamic configuration files can be placed in any directory within the document root. The server will apply the modifications defined in the file to all requests made to the directory and to any sub directory. This allows the behaviour of the server to be changed for whole segments of the document tree by editing a single file. On a request for the document /products/toys/index.html from a site with a docroot of /pages the server will search:

    /pages/.htaccess
    /pages/products/.htaccess
    /pages/products/toys/.htaccess
It should be noted that the effect of access files is not cumulative, so if access restrictions are relaxed in a protected sub directory, a client will be able to access information there if they know the full URL. Be aware that by default all of the directories above the document root on the file-system are also searched. See the section on AllowOverride for more information on the search path and how to restrict it.
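
The lookup order described above can be sketched in shell; this is only an illustration of the search using the example’s paths, not server code:

```shell
# Build the list of .htaccess files consulted for a request to
# /products/toys/index.html when the docroot is /pages.
docroot=/pages
dir=/products/toys            # directory part of the requested URL
searched="$docroot/.htaccess"
path=$docroot
oldIFS=$IFS; IFS=/
for part in $dir; do
    [ -n "$part" ] || continue    # skip the empty leading component
    path="$path/$part"
    searched="$searched $path/.htaccess"
done
IFS=$oldIFS
echo "$searched"
```

This prints /pages/.htaccess, /pages/products/.htaccess and /pages/products/toys/.htaccess, in the order the server consults them.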

Macro Expansion

When an htaccess file is dynamically parsed, any instance of the following ‘%’ macro is replaced by its substitution:
    Macro          Substitution
    %docroot%      Server document root

Directive Ordering

The ordering of directives in an htaccess file can affect the functionality of the directives and can, under certain circumstances, cause directives to become voided. For example, two AuthUserFile directives will not combine to use two files; rather, the latter will override the former, resulting in only one of the files being used. It should also be noted that some directives are global, and therefore will not be affected by <Files> or <Directory> tags. Where you need (for example) different authentication settings for different directories, try placing the authentication directives in htaccess files in those directories.
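
For example (the file paths are hypothetical), only the second of these two directives takes effect:

```apache
AuthUserFile /usr/local/web/private/passwd-a
# The next line overrides the previous one; the two files are not merged.
AuthUserFile /usr/local/web/private/passwd-b
```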

Section Directives

Section directives allow the behaviour of the server to be changed based on external variables and for particular files. Section directives are used in pairs, similar in concept to HTML tags. Each directive has an opening tag, and a closing tag. Any configuration directives placed inside the tags are applied only if the section directive’s criteria are satisfied. The criteria for various section directives differ. A section directive can be negated by prefixing a “!” to the directive name.

Example

    <Section_Directive criterion criterion criterion …>
    Configuration Directive
    </Section_Directive>

Example negated

    <!Section_Directive criterion criterion criterion …>
    Configuration Directive
    </Section_Directive>

Wildcards and Extended Regular Expressions

All the section directives except <limit> can make use of wildcards and regular expressions to define their criteria. Wildcards allow simple pattern matching while regular expressions allow more powerful and complex pattern matching. For more details of using wildcards and extended regular expressions see later sections.

Nesting Section Directives

You can include pairs of section directives within other sections. This allows even finer control over which documents the server sends where. Each section can be nested in any other section, but a section can not be nested in itself (after all, it would be a little pointless). If you do nest section directives you should be aware of the following.

The inner section directive will only be evaluated if the outer directive evaluates as true. This might appear obvious, but should be considered when ordering directives. The directive with the least processing overhead and the one that matches most often should be placed first.
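
For instance, using the document’s own placeholder style, a cheap and frequently matching host test can wrap a more specific file test:

```apache
<RemoteIP 10.*>
    <files *.gif>
    directive parameter
    </files>
</RemoteIP>
```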

The Section Directives

Directory

The <Directory> directive allows the server behaviour to be set for individual directories on the server. The <Directory> directive can only be used in the global dynamic configuration file.

Syntax : <Directory pathspec>

The pathspec argument should be a valid directory name accessible by the server, a valid wild card string or regular expression.

Location

The <Location> directive allows the server behaviour to be set for individual server URLs. The <Location> directive can only be used in the global dynamic configuration file.

Syntax : <Location pathspec>

The pathspec argument should be a valid URL for the server, specified without the “http://servername” or “:port” parts but starting with a “/”, a valid wild card string, or a valid regular expression.

Files

The <Files> directive allows the server behaviour to be set for individual files.

Syntax : <Files filespec>

The filespec argument should be a valid file name within the document tree. If a leading “/” character is not present, the server will prefix the current directory to the filename. Alternatively, it may be a valid wild card string or a valid regular expression.

Limit

The <limit> directive allows the server behaviour to be set for individual HTTP methods.

Syntax : <Limit method method method …>

The method argument should be a valid HTTP/1.1 method, e.g. GET or POST, in capital letters. Note that “HEAD” requests will be limited in the same way as GETs: HEADs cannot be limited separately.

RemoteIP

The <RemoteIP> directive allows the server behaviour to be set depending on the remote machine’s IP address.

Syntax : <RemoteIP ipspec>

The ipspec argument should be a valid IP address or subnet, a valid wildcard string or a valid regular expression.

RemoteHost

The <RemoteHost> directive allows the server behaviour to be set depending on the remote machine’s host name.

Syntax : <RemoteHost hostspec>

The hostspec argument should be a valid host name or domain name, a valid wildcard string or a valid regular expression.

This example shows the structure of dynamic configuration files. Indenting the section directives is not required, but it makes the files easier to read and is therefore good practice.

    # comments go here
    directive parameter
    directive parameter

    <limit GET POST>
    # more comments here perhaps
    directive parameter
    directive parameter
    </limit>

    <!files *.txt>
    directive parameter
    directive parameter
    directive parameter
    </files>

    directive parameter
    directive parameter

Header Directives

Header directives allow HTTP header values to be tailored, modified or overwritten.

Header

The Header directive allows the HTTP response headers returned by the server to be modified. Additional response headers can be inserted, existing response headers modified, deleted and extended.

Syntax : Header action name value
Parameters :
    action :
        Set – The header is set, replacing any previous header with this name.
        Append – The header is appended onto the existing header. The appended value is separated from the existing value by a comma.
        Add – The header is added, even if a header of the same name already exists. Can result in multiple headers of the same name.
        Unset – The header value set in parent dynamic configuration files will be unset. No value parameter is required.
    name – The HTTP header name to insert.
    value :
        Text – Literal text value to be associated with the header.
        NOW (-/+) offset – Inserts the current server time plus or minus an optional offset value in seconds.


    Header Append Author MyName
    Header set Expires Tue, 17 Jun 1997 18:27:52 GMT

    <Location /foo>
    # All /foo pages expire in 4 hours time
    Header set Expires NOW+14400
    </Location>

AddType

The AddType directive allows additional media types (a.k.a. MIME types) to be set. For the client to correctly handle the document returned by the server, it requires the server to place the correct media type in the HTTP headers. AddType will force any files with a given file extension to return the new media type.

Syntax : AddType mime-type extension
Parameters :
    mime-type – new media type
    extension – file extension to associate with the new media type


    AddType image/jpeg jpg
    AddType audio/x-wav wav
    AddType video/x-sgi-movie movie

ForceType

The ForceType directive allows the media type to be set for every file returned. This is useful when you have a number of files without a file extension; under normal circumstances the server would not know which media type to associate with them.

ForceType will return the specified media type for every file, even ones which may have a valid file extension and associated mime-type.

Syntax : ForceType mime-type
Parameters :
    mime-type – new media type


    # All files in this directory are jpegs
    ForceType image/jpeg

    # All files in specified directory are gifs
    <directory /usr/local/web/pictures/landscape/gifs>
    ForceType image/gif
    </directory>

Please be aware that this will not work with PHP files.

Redirect

The Redirect directive allows you to inform the client that the resource it has requested is no longer present at the requested location. This may be because it has moved or been deleted.

The directive supplies a URL to use in place of the redirected one. This allows changes in the document tree to be made without leaving broken links or “404 not found” errors.

Syntax : Redirect [status] url-path[*] url
Parameters :
    status :
        “permanent” – 301 Moved Permanently. The resource has moved permanently; the client should always use the new URL.
        “temp” – 302 Moved Temporarily. The resource has moved temporarily; the client should use the new URL for this request only.
        “see other” – 303 See Other. The resource has been replaced; the client should use the new URL, but still reference the old resource.
        “gone” – 410 Gone. The resource has been permanently removed. The url parameter should be omitted in this case.
    url-path – absolute path to the requested resource
    url – absolute URL to the replacement resource (if any)

When receiving an HTTP 301, 302 or 303 status code, the client will automatically request the new URL, usually without prompting the user. If the status parameter is omitted, the server will return 302 Moved Temporarily. For a full explanation of these codes refer to the full HTTP/1.1 specification.


By default, any trailing URL info will be appended onto the destination string; however, appending a “*” onto the url-path will force all requests beginning with that path to redirect to exactly the destination, without the trailing URL info appended.
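
A few illustrative redirects (all URLs and paths are placeholders):

```apache
# Permanent move: clients should always use the new URL.
Redirect permanent /oldproducts http://www.example.com/products/

# Send every request under /beta to exactly one page, ignoring any
# trailing URL info, by marking the url-path with "*".
Redirect temp /beta* http://www.example.com/beta-closed.html

# The resource is gone for good; no replacement URL is given.
Redirect gone /retired
```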

Directory Listing Control Directives

These directives allow fine grain control over the output of the directory listing module.

IndexIgnore

The IndexIgnore directive adds to the list of files to hide when listing a directory. File is a leafname, wildcard expression or full filename for files to ignore. Multiple IndexIgnore directives add to the list, rather than replacing the list of ignored files. By default, the list contains ‘.’.


    IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

Error Document Control

These directives allow fine grain control over the error pages generated by the web server in the event of a problem or error. The customisable error module needs to be enabled for these settings to take effect.

ErrorDocument

The ErrorDocument directive allows you to redirect the client to a local or external URL on certain status codes.

Syntax : ErrorDocument error-code document
The document can begin with a slash (/) for a local URL, or it can be a full URL which the client can resolve.
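
For example (both documents are hypothetical):

```apache
# Local page for missing documents.
ErrorDocument 404 /notfound.html

# Full URL for server errors.
ErrorDocument 500 http://www.example.com/server-error.html
```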


Access Control Directives

Access control allows you to restrict access to information on your web server. This is useful in a number of situations:

  • Portions of your Website may be for customers or internal use only.
  • Portions may require a subscription to be paid.
  • Portions may need to be approved by management before going public.

All these circumstances require some method for the server to validate the client. Zeus Server provides a number of access control methods, offering a balance between ease of administration and performance.

By embedding the access control information within the document structure, whole sections of the Website can be easily locked and unlocked by editing a single file. As most Websites already use directories as a means of organising information into logical hierarchies, dynamic configuration files are ideally suited to the task. By using dynamic control files the access integrity is also increased. If the document structure should change at any time, such as moving or renaming a directory, the access restrictions will still remain in place.

User and Domain Access Control

Access control can be based on two criteria, the user requesting the resource or the machine from which the user sends the request. This allows you to specify access restrictions which are based on person and on organisation. Host based access control can validate clients using IP number or domain name. Authenticated access control is based on a User ID and Password challenge. The two approaches can be used together to provide extra security.

Host Based Access Control

Host based access control enables pages to be protected by client location. The server will restrict pages based on either the IP numbers or DNS names listed in the dynamic configuration file. You may want some pages on your server to be accessed only by individuals within your organisation, or customer pages which shouldn’t be accessed by the general public. Host based authentication provides a reliable method of restricting access, while keeping administration simple, and the system easy to use.

If the server determines a client request should not be fulfilled, the page requested will not be sent and a “HTTP 403 Forbidden” error will be returned.

Domain Names and IP Numbers

Domain names provide the human readable machine addresses which we are used to seeing in URLs and email addresses. These names are usually allocated per organisation, so they provide a simple means of identifying who is connecting to your Website. In dynamic configuration files, domain names are specified either absolutely, by including the machine name, or as a sub domain, by prefixing a “.” to the domain name.

IP numbers are the machine addresses which DNS names are mapped onto. They are usually allocated as blocks or subnets as required, and one organisation may have a number of IP subnet blocks. This can make restricting pages based on IP number a little more difficult than by DNS name. IP numbers should be listed, as is Internet convention, by converting each of the four bytes to a decimal representation and putting a “.” between each byte’s number and the next. IP subnets can be specified in one of three ways:

1.   A partial IP-address
For simple class A or B or C subnetting, specify the partial IP address, plus a trailing “.”, e.g. 10. to specify the class A network.
2.   A network/netmask pair
A.B.C.D/X.Y.W.Z where A.B.C.D is a network, and X.Y.W.Z is a netmask, e.g. 10.0.0.0/255.0.0.0.
3.   A network/n CIDR specification
A.B.C.D/n where A.B.C.D is a network, and n is a number between 1 and 32 specifying the number of high-order 1 bits in the netmask, i.e. 10.0.0.0/8 is the same as 10.0.0.0/255.0.0.0.
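
The /n form is pure arithmetic: n high-order 1 bits in the mask. The following sketch derives the dotted netmask for a prefix length (here /8, matching the class A example):

```shell
# Convert a CIDR prefix length into a dotted-decimal netmask.
n=8
mask=$(( (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF ))
printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8) & 255 ))  $(( mask & 255 ))
```

For n=8 this prints 255.0.0.0.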

Alternatively IP numbers and DNS names can be specified using extended regular expressions. Regular expressions allow sophisticated pattern matching to occur in the dynamic configuration file, but need to be constructed carefully to avoid security holes.

See the Zeus document “Using Regular Expression In Dynamic Configuration Files” for more information.


    Absolute Names : www.example.com
    Sub Domains : .example.com
    Absolute IP Numbers : 194.33.68.1
    IP Subnets : 194.33.68.
    IP network/netmask pair : 194.33.68.0/255.255.255.0
    IP network/CIDR representation : 194.33.68.0/24

It should be noted that any individual who uses a machine within the specified domain will be permitted to view the restricted pages. Additionally the DNS system has vulnerabilities which allow malicious but technically adept Internet users to “spoof” their DNS names, giving the impression they are within a different domain. Such instances are rare, but they may represent an unacceptable risk for some organisations, in which case host based access control should use IP numbers or be augmented by user access control.

Specifying Hosts

Restrictions are specified using the “deny”, “allow” and “order” directives within the dynamic configuration file. The “deny” and “allow” directives take the string “from” followed by either a DNS name or IP address against which to authenticate the client; alternatively the “all” parameter can be used to describe all hosts on the network. You may specify multiple “deny” and “allow” lines within the same dynamic configuration file.


    deny from untrusted.example.org
    deny from .example.net
    allow from www.example.com
    allow from .edu
    allow from all
    deny from 10.1.2.3
    deny from 192.241.244.

The “order” Directive

To avoid any ambiguity within the “deny” and “allow” lists, the “order” directive specifies which to process first. The order in which the lists are processed has a considerable effect on the restrictions defined. Order can take three values :

    deny,allow – Process the “deny” list followed by the “allow” list. Initial state is to allow all. Default behaviour.
    allow,deny – Process the “allow” list followed by the “deny” list. Initial state is to deny all.
    mutual-failure – Only allow if specifically listed in the “allow” list and not listed in the “deny” list.

Creating Usable Policies

By combining “deny” and “allow” lists, sophisticated host authentication can be achieved. Example .htaccess files are listed below for a number of common Website requirements. For the purpose of these examples, the 10.0.0. network is considered local.

    Deny all access, except internal machines (10.0.0. subnet)

    order deny,allow
    deny from all
    allow from 10.0.0.

    Deny all access, except internal machines and our partner company.

    order deny,allow
    deny from all
    allow from 10.0.0.
    allow from .partner.example.com

    Allow all access, except those of our rival.

    order allow,deny
    allow from all
    deny from .rival.example.com

Authentication Access Control

Host based access control works well in most situations, particularly when the access to the information should not be made public, but is not sensitive or commercially valuable. Authenticated access control can offer a greater degree of security by requiring the client to supply a valid username and password before sending the information. It can also be more flexible, allowing authorised users to connect from any machine on the network. User and host based access control can be used together to provide the maximum security option. Authenticated access control has a number of additional administrative overheads, which make it a little more complicated than host based access control.

How Authenticated Access Control works

When a client tries to connect to a resource which is protected with authenticated access control, the server will return the HTTP status code “401 Unauthorised” to the client. The client should then display a dialogue box asking for a username and password. The resource is then requested again; this time the client will include the “Authorization” HTTP header, which carries the username and password. The server then compares them against its user lists and, if both fields are valid, returns the resource. If the username or password is incorrect, the server will again return the “401 Unauthorised” status code, whereupon the client can ask again for the login details.

To access resources which are protected with authenticated access control, the client must provide the login details with each request. If the user were prompted for login details for every file transferred, the process would be slow, tedious and inconvenient. To solve this problem the client automatically sends the “Authorization” HTTP header with each subsequent request to the site.

Authenticated access control is configured in the same manner as host based access control. Directives are included in the dynamic configuration files which specify which users to allow and which resources to protect. This set of authenticated dynamic configuration directives defines a realm, which the server applies to the protected resources.

User Databases

The username and password information for Authenticated access control are stored as plain text files. For security reasons it is important that these files are not stored under the document root. The format for these files is similar to the standard UNIX /etc/passwd file.


    username:encryptedpassword
The password is encrypted in the same manner used in /etc/passwd, enabling easy manipulation of the user files by third parties. Additional information may be stored in the file by appending an additional “:” to the password field. Any fields following the password entry will be ignored by the server.


Here is a free Windows 95 and NT version available to download. If there are problems with the download, please visit the originating website for assistance. This program will create the necessary encrypted password entry for the user. htpasswd takes an optional parameter, followed by the location of the user database file and the new user name. The optional parameters are -c to create a new database file, and -d to delete the named user from the specified user database file.


    c:\ htpasswd
    Usage: htpasswd [-(c|d)] <passwdfile> <username>

    c:\ htpasswd -c %docroot%/webpasswd fred
    New Password: *******
    Re-type new password: *******

    c:\ htpasswd %docroot%/webpasswd barney
    New Password: *******
    Re-type new password: *******

    c:\ htpasswd -d %docroot%/webpasswd betty

The password “*” characters are shown for clarity, and do not actually appear when using the program.

Group Files

Group files allow users in the user database files to be logically grouped together. Group files are plain text files; each unique group is stored on an individual line in the file. This is the same format as traditional UNIX group files. Users may be members of multiple groups.


    groupname:user1 user2 user3
    crew:kirk spock bones
    command:kirk pike

Authenticated Dynamic Control Directives

The Authenticated access control directives provide the server with the necessary information to authenticate client requests.

Authenticated access control is applied to a realm. Realms are used to describe the areas on a site which are protected. A realm will usually take the form of a directory containing the protected resources and an appropriate dynamic configuration file. Once a user has supplied a valid username and password combination for the realm, any other areas of the site which are protected with the same realm details can be accessed with the same username and password. This allows different portions of the document tree to be protected, while only requiring the user to log in once. It also allows the client to cache the username and password details if the user should return to the same realm at the same site.

AuthName : Each realm is identified by the AuthName directive. The directive takes a text string parameter which is used as an identifier for the realm. The string is usually displayed by the browser when prompting the user for a username and password.

    AuthName Realm
    AuthName Company Protected Pages
    AuthName Subscription Service

AuthType : Specifies the method by which the client should transmit the username and password to the server. Currently the only supported option is “basic”; however, another method, called “digest”, which offers additional security, is currently undergoing approval by the Internet Engineering Task Force.

AuthType BASIC

Basic authentication instructs the browser to send the password to the server base64-encoded. This is not plain text, but it is also not encrypted. Basic authentication should be treated with the same security considerations as telnet, but as HTTP requests are far more frequent than telnet logins, the chance of having your password “snooped” is increased. It is for this reason that we recommend that you do not use your /etc/passwd file as your user database.
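To see how little protection this offers, note that the credentials are only base64-encoded, so anyone who captures the request can decode them. (The username fred and password secret below are made up for illustration.)

```shell
# Encode a username:password pair the way Basic authentication does.
encoded=$(printf 'fred:secret' | base64)
echo "$encoded"    # → ZnJlZDpzZWNyZXQ=

# Anyone who captures the request can trivially reverse it:
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"    # → fred:secret
```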

AuthUserFile : Gives the location of the user database file which the server is to use to authenticate the client. This should be readable by the server and created by the htpasswd program. If the filename begins with a ‘/’ it is considered an absolute pathname; otherwise it is relative to the directory in which the .htaccess file lives. In the latter case, for extra security you could also prevent people from downloading the password file by using the <files> and deny from all directives.

    AuthUserFile %docroot%/webusers
    AuthUserFile %docroot%/passwords
    AuthUserFile mypasswds

AuthGroupFile : Gives the location of the group file which the server is to apply to the user database. If the filename begins with a ‘/’ it is considered an absolute pathname; otherwise it is relative to the directory in which the .htaccess file lives. In the latter case, for extra security you could also prevent people from downloading the group file by using the <files> and deny from all directives.

    AuthGroupFile %docroot%/webgroups
    AuthGroupFile %docroot%/departments

AuthDBMUserFile : Gives the location of the user database stored in DBM format. The user file is keyed on the username. The value for a user is the crypt()-encrypted password, optionally followed by a colon and arbitrary data. The colon, and data following it, will be ignored by the web server.

AuthDBMGroupFile : Gives the location of the group database stored in DBM format. The group file is keyed on the username. The value for a user is either a comma-separated list of groups that user is in, or a value of:

UNIX crypt()ed password : Comma-separated list of groups [ : (ignored) ]

A file in the latter format can be used for both the user/password database and the group database.
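Putting the pieces together, a single DBM entry in this combined format might look like the following sketch (the crypt() hash and group names are invented for illustration):

```
key:   kirk
value: aX9eP1mQ7rT2k:crew,command:anything after the second colon is ignored
```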

Require : Takes a list of names, of either users or groups: allows the named users, or users in the named groups, to access the protected resource. The first word after Require should be either group or user to indicate that the subsequent words are group-names or user-names, respectively. If several Require directives are given, the combination allows access by any user allowed by any of the directives.

Require user admin
Require group flintstones jetsons

Additionally the valid-user parameter can be used to include any valid username/password combination: it effectively stands for user followed by a full list of all the usernames listed in the password file.

Require valid-user

The following examples will give you an idea of what Zeus dynamic configuration files should look like, and what can be achieved using them.

Pages locked to all but valid users.

    AuthName Locked Web Pages
    AuthType basic
    AuthUserFile %docroot%/userdata
    Require valid-user

Pages locked to all but users in the webmaster group.

    AuthName New set of pages
    AuthType basic
    AuthUserFile %docroot%/userdata
    AuthGroupFile %docroot%/groupdata
    Require group webmaster

Pages locked to all but users in the management group and the admin user.

    AuthName Quarterly Sales Figures
    AuthType basic
    AuthUserFile %docroot%/users
    AuthGroupFile %docroot%/groupdata
    Require group management
    Require user admin

Using Authentication and Access Control Together

You can mix and match different security policies, using access control where needed, authentication where necessary, and both together where required. The more sophisticated policies make use of the <remoteip> and <remotehost> section directives. These section directives are more flexible than the allow and deny directives as they can enclose other directives.

For the purpose of these examples, the network is considered local.

Require all access to be from a local machine and be authenticated:

    Order deny,allow
    Deny from all
    Allow from 10.0.0.
    AuthName Protected
    AuthType basic
    AuthUserFile %docroot%/users
    AuthGroupFile %docroot%/groupdata
    Require valid-user

Allow access from all local machines but require authentication for all external access:

    <!RemoteIP 10.0.0.*>
    AuthName Protected
    AuthType basic
    AuthUserFile %docroot%/users
    AuthGroupFile %docroot%/groupdata
    Require valid-user

Allow unauthenticated access from local hosts, but require authentication for all downloads from elsewhere:

    <!RemoteHost *>
    AuthName Password for Download
    AuthType basic
    AuthUserFile %docroot%/users
    Require valid-user

Limit PUT publishing to members of the publishers group, and only from local machines

    <limit PUT>
    order deny,allow
    deny from all
    allow from 10.0.0.
    AuthName Password for upload
    AuthType basic
    AuthUserFile %docroot%/users
    AuthGroupFile %docroot%/groupdata
    Require group publishers

Allow publishing from local machines with password, and from remote machines with a different password.

    <limit PUT>
    <remoteip 10.0.0.*>
    AuthName Password for upload
    AuthType basic
    AuthUserFile /etc/passwd
    AuthGroupFile %docroot%/groupdata
    Require group publishers
    <!remoteip 10.0.0.*>
    AuthName Secure Password for upload
    AuthType basic
    AuthUserFile %docroot%/securepasswds
    AuthGroupFile %docroot%/groupdata
    Require group publishers

Restriction Directives

Restriction directives can be used to limit the server’s functionality. It may often be desirable for some facilities to be disabled for portions of the document tree, such as those which contain user home directories.

Options The options directive allows CGI facilities to be disabled.
Syntax : Options option
Parameters : all, execcgi Allow CGI programs stored in the directory to be run
none Disallow CGI programs stored in the directory from being run

Default behaviour is : options all


    #turn on cgi programs in docroot
    options execcgi
    #disable them in user home directories
    <directory /home>
    options none

AllowOverride The allow override directive sets the extent to which dynamic configuration files can change higher level settings. The AllowOverride directive can only be used in the global dynamic configuration file.

Syntax : AllowOverride override override …
Parameters : all Allow dynamic configuration files to override all directives.
options Allow use of the options directive
fileinfo Allow the addtype and forcetype directives
authconfig Allow the user authentication directives, authuserfile, authgroupfile, authtype, authname
limit Allow the <limit> section directive
none Disallow all dynamic configuration directives: .htaccess files in affected directories will be ignored.

Default behaviour is : AllowOverride all

Setting AllowOverride none prevents the server from looking for .htaccess files in affected directories. This can offer a substantial performance advantage, and on busy sites you should stop the server from looking for .htaccess files in commonly accessed directories when those directories will never contain them. For example, if your document root is /disk2/web/, the web server will look for .htaccess files at:

    /.htaccess
    /disk2/.htaccess
    /disk2/web/.htaccess

Please note: NetRegistry clients should use %docroot% as their document root.

as well as the global htaccess file if set. Most webmasters don’t put .htaccess files above the document directory, so as a performance improvement, you can let the server know that it shouldn’t look for them outside the document root. E.g.

    <Directory />
    # Prevent the server from looking at .htaccess files anywhere …
    AllowOverride none
    <Directory /disk2/web/>
    # … but the subtree which might contain some.
    AllowOverride all

Directory Directives

Directory directives can be used to map directories elsewhere on the file system to locations within the document tree. Directory directives are also used to specify the location on the server of extension programs, such as CGI programs or ISAPI modules. Because extension programs can be a potential security hazard, it is often desirable to limit where they can reside. Allowing all users to run extension programs should be discouraged.

Directory directives can only be used in the global configuration file.

Alias The Alias directive maps a location on the file system to a virtual location in the document root.

Syntax : Alias virtual-dir logical-dir
Parameters : virtual-dir The virtual path name used by the web server
logical-dir The absolute path name to the directory on the local file system


    # Maps FTP images directory to /web/images
    alias /web/images /home/ftp/pub/images

ScriptAlias The ScriptAlias directive allows CGI programs to reside outside the document root. Any file in a ScriptAlias directory is regarded as a CGI program and when accessed will be run by the server. CGI programs in a ScriptAlias directory do not need a mime type of application/x-httpd-cgi as they would in the docroot.

ScriptAlias directories allow you to manage your site more securely. CGI programs run on the same machine as the web server and consume its resources. It is possible that a badly written CGI program could bring down the whole machine by starving it of resources. For this reason it is advised that you only run approved CGI programs on your system.

Syntax : scriptalias virtual-dir logical-dir
Parameters : virtual-dir The virtual path name used by the web server
logical-dir The absolute path name to the directory on the local file system


    # Maps customers' approved CGI programs to /cgibin
    ScriptAlias /cgibin /usr/local/web/cgi/approved
    # Add local cgi programs to /cgibin
    ScriptAlias /cgibin /usr/local/web/cgi/localscripts

ISAPIAlias The isapialias directive allows ISAPI modules to reside outside of the document root. Any file in an isapialias directory is regarded as an ISAPI module and when accessed will be run by the server. ISAPI modules in an isapialias directory do not need a mime type of application/x-httpd-isapi as they would in the docroot.

Given that a badly written ISAPI module can crash the web server process itself – if run in-process – or the ISAPI-runner process if run out-of-process, you should take care to check any isapi modules before allowing them to run on your system.

Syntax : isapialias virtual-dir logical-dir
Parameters : virtual-dir The virtual path name used by the web server
logical-dir The absolute path name to the directory on the local file system


    # Maps customers' approved ISAPI modules to /isapi
    isapialias /isapi /usr/local/web/isapi/approved
    # Add local ISAPI modules to /isapi
    isapialias /isapi /usr/local/web/isapi/localscripts

Please note: to install Windows compiled ISAPI modules, please contact NetRegistry Support

Miscellaneous Directives

PassEnvAuthorization on|off (default off)
Only available in a global htaccess file. Can be embedded in any sectioning directive. When set to ‘on’, the environment data for dynamic applications (CGIs/JServ/FastCGI/NSAPI etc) will contain the client’s ‘Authorization:’ header as ‘HTTP_AUTHORIZATION’. This allows the application to perform its own access control. e.g.

    <Location /distributed/>
    PassEnvAuthorization on

Obviously only ‘trusted’ applications should be given the client’s password information for their access control purposes.
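As a sketch of what a trusted application then does with that variable, a shell CGI program can strip the Basic scheme and decode the credentials itself (the example header value, for the made-up user fred with password secret, is an assumption for illustration):

```shell
# HTTP_AUTHORIZATION as the server would pass it through when
# PassEnvAuthorization is on (example value: base64 of "fred:secret").
HTTP_AUTHORIZATION="Basic ZnJlZDpzZWNyZXQ="

# Strip the "Basic " scheme prefix and decode the credentials.
creds=$(printf '%s' "${HTTP_AUTHORIZATION#Basic }" | base64 -d)
user=${creds%%:*}
pass=${creds#*:}

echo "authenticated user: $user"
```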

Application servers such as Zope and some Java servlets require this information. PassEnvAuthorization is a Zeus extension to the Apache specification.

Using Wildcards and Regular Expressions

All the section directives except <limit> can make use of wildcards and regular expressions to define their criteria. Wildcards allow simple pattern matching, while regular expressions allow more powerful and complex pattern matching.
* Any sequence of characters
? Any single character

Using wildcards in the <files> section directive, in an example directory, the following patterns match the files shown:

    <files *> footer.html
    <files *.html> footer.html
    <files logo.*> logo.gif
    <files logo*> logo.gif
    <files logo?.gif> logo2.gif
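These wildcards behave like shell filename globs, so the matching rules can be sketched with a shell case statement (the match helper below is just an illustration, not part of the server):

```shell
# match PATTERN NAME: prints yes if NAME matches the glob PATTERN.
match() {
  case "$2" in
    $1) echo yes ;;
    *)  echo no ;;
  esac
}

match '*.html'    footer.html   # yes: * matches any sequence of characters
match 'logo.*'    logo.gif      # yes
match 'logo?.gif' logo2.gif     # yes: ? matches exactly one character
match 'logo?.gif' logo.gif      # no: nothing between "logo" and ".gif"
```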

For information on regular expression syntax and usage, please see the Zeus tutorial on using regular expressions, which contains examples and hints on writing expressions.

Prefetching Hints – Helping Firefox and Google speed up your site

The Prefetching Problem

Wouldn’t it be better to download the next page we’ll want to click while we’re reading the one before? That’s the thinking behind prefetching, whether it’s done by the Firefox browser or the Google Web Accelerator. There’s been a lot of controversy about whether browsers should do this kind of thing. If a site is on fast enough hardware and has a lot of bandwidth to spare, it makes sense to let users download pages they’re likely to want in advance. On the other hand, for a site with limited resources, a bunch of clients downloading pages they may not even look at will only slow things down for everybody.

Clearly the problem is not with prefetching itself, it’s with deciding which pages to prefetch. The browser has no idea how busy the server is or how much spare bandwidth it has. Not only that, it also has no reliable way of telling a link the user is likely to click from a link that nobody cares about.

As web server administrators, on the other hand, we know about all these things. We have data about how much bandwidth our sites are allowed and how much they are using, which pages are cheap to deliver and which ones involve expensive database queries, how much memory we’re using, how much strain the CPU is under – everything we need to judge whether prefetching our pages will make things better or worse for our readers. Not only that, we also have our web server logs, giving us real data from real people about which pages our users like to click, and where they are likely to go next.

I will suggest a couple of things we can do to take control of the prefetching process, discourage badly-behaved clients from prefetching too much, and give the browser the information it needs to make our users’ experience better.

Providing prefetching hints

Now we’ve dealt with browsers trying to do things the wrong way, let’s provide some hints to help the ones that are trying to do it right.

The conventional way to tell the browser to prefetch something is to put a <link> tag in the body of our page. For example, if I think there’s a good chance someone reading this page will want to go and look at my top page as well, I can stick something like this in the head of my HTML document:

<link rel="prefetch" href="/index.htm">

That’s fine if we know what we’ll want people to prefetch when we make the page. But we probably don’t. People won’t necessarily click what we think they’re going to click, and we want to be able to adjust how much is prefetched according to how much spare capacity we’ve got on our server.

So instead, let’s keep our prefetching rules separate from our website content. Rather than putting <link> tags in every page on our site, we’ll inject prefetching hints into the headers of the responses that our server sends to the browser. That way we can easily regenerate the rules to keep up with changes in usage patterns, and scale back or turn off prefetching altogether if our server gets too busy. (Many thanks to Darin Fisher for his help with this.)

If we haven’t already done so, we’ll need to turn on apache’s mod_headers.

In apache2, we can do that like this:

a2enmod headers

…then get apache to reload itself with:

/etc/init.d/apache2 force-reload

…or similar.

Now let’s try making a prefetching hint for Firefox. When we’re done, the following will tell Firefox to prefetch the top page of my website when it’s finished downloading this page:

<IfModule mod_headers.c>
   <Location /programming/pf.htm>
      Header append Link "</index.htm>; rel=prefetch"
   </Location>
</IfModule>

I’ve called this file prefetch.conf and stuck it in my apache2 configuration directory (/etc/apache2). To tell Apache to read it, we need an Include statement in the configuration file like this:

Include /etc/apache2/prefetch.conf

Once we’ve reloaded apache, we should be able to check our logs and find that requests for /programming/pf.htm are immediately followed by prefetch requests for /index.htm.

If this doesn’t seem to be working, you may want to check whether the Link header is really being set. You can either use Firefox’s Live HTTP Headers Extension or do it the old-fashioned way with wget -S. When testing, bear in mind that the file we’re pre-fetching may already have been cached by the browser. It might be easier to test this by telling Firefox to pre-fetch a non-existent file, then checking for the resulting 404 in the server logs.

