
HOWTO: Apache 2 SSL Name-Based Virtual Hosting

I recently reported being stuck trying to set up multiple SSL name-based virtual hosts on the same IP address with non-SSL name-based virtual hosts. Soon after, I figured it out. Shortly after that, one of my students suggested the same solution to me…

The key idea in achieving SSL name-based virtual hosts in Apache 2 is this: Apache's decision as to whether or not the connection is SSL is made by port number early in the process, just as one would hope. If Apache decides that the connection is SSL, it presents the site certificate.

For a self-signed cert, the visitor's browser will put up the normal "this cert is no good" warning. Since Apache can present only one cert per IP and port (it has no idea which site's cert to present, as the connection hasn't yet been established), the visitor's browser may also put up a "this cert doesn't match the site" warning. Big whoop.

Anyway, the only slightly tricky part in all this is understanding the rules Apache 2 uses to select a name-based virtual host. Recall that for a non-SSL host, Apache 2 will default to the first VirtualHost it finds in its config if it can't match the incoming name to a specific virtual host. It turns out that for an SSL virtual host, the rules are the same, except that it only considers the VirtualHosts on the SSL port.

So here's how:

  1. Find the "Listen 80" directive in your Apache 2 config. Add a "Listen 443" directive on the following line.
  2. Find the "NameVirtualHost *" directive in your config. Change it to say "NameVirtualHost *:80". Then add "NameVirtualHost *:443" on the following line.
  3. All your VirtualHosts currently are probably set up as "<VirtualHost *>". Change this to "<VirtualHost *:80>".
  4. Add a default SSL virtual host early in your config by using "<VirtualHost *:443>". You can decide whether this should return a 404 or do something useful on random SSL connections.
  5. Go nuts adding more SSL name-based virtual hosts.
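Put together, the whole arrangement might look something like this sketch (the hostnames, document roots, and cert paths are invented for illustration; it assumes Apache 2.x with mod_ssl loaded):

```apache
# Step 1: listen on both ports.
Listen 80
Listen 443

# Step 2: name-based vhosts on both ports.
NameVirtualHost *:80
NameVirtualHost *:443

# Step 4: default SSL vhost; catches https:// requests for unknown names.
<VirtualHost *:443>
    DocumentRoot /var/www/default-ssl
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key
</VirtualHost>

# Step 5: additional SSL name-based vhosts, sharing the same cert.
<VirtualHost *:443>
    ServerName todo.example.com
    DocumentRoot /var/www/todo
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key
</VirtualHost>

# Step 3: existing non-SSL vhosts, now pinned to port 80.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/www
</VirtualHost>
```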

The word on the street is that the Apache folks don't want you to know about this because it's "wrong"; presumably they're concerned about getting users used to clicking "OK" on host/cert mismatches. There's a proposal underway to modify SSL so that the web server can find out the host being requested before setting up an SSL connection and present the appropriate cert; I hear that many of the Apache folks don't like that either for some mysterious reason. For myself, I'm unwilling to pay for a commercial cert (and unsure why SSL can't have a mode that merely provides encryption without authentication), so I'm happy to have my stuff just work.

Hopefully I'm the only one who's going to hit my personal to-do list site anyhow. Fob



Thanks a lot! I agree on the encryption without authentication (encrypted, not trusted). I also just ran into this problem and got 80% of the way, realising the need for *:443 on the default site, but didn't realise you need *:80 on the others, and reverted because things were getting messy. I will try completing this now. Thanks again, and you can be sure that the reason for the lack of support for multiple SSL vhosts is something to do with money.

Yes, you're a certified genius; it worked. As soon as I replaced 'VirtualHost *' with 'VirtualHost *:80' on my normal hosts, added the default SSL directives (for random https) on a '*:443', and then just the one other SSL config on another '*:443', it worked! No complaints, and with a few redirects it's tidy. Thanks a lot, man!

While there's nothing you can do about the self-signed warning without installing your own CA certificate (which only works for browsers within your control), you may be able to eliminate the site mismatch IF you're dealing with subdomains.

You can have, for example, and as named virtual hosts on port 443 if you have a wildcard certificate of * This won't help with completely different domains, which will require different port numbers as you discovered, but it's a great help for subdomain hosting. Most documentation on name-based virtual hosts with SSL seems to neglect this.
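For experimenting, a wildcard cert can be self-signed with openssl along these lines (the domain here is hypothetical, and a modern openssl is assumed):

```shell
# Self-sign a wildcard cert for *.example.com (a hypothetical domain).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout wild.key -out wild.crt -days 365 \
  -subj "/CN=*.example.com"
# Confirm the subject:
openssl x509 -in wild.crt -noout -subject
```

Browsers within your control can then be told to trust wild.crt once, covering every subdomain at a stroke.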

I'll need to think about what games one can play with wildcard certificates. In particular, can one generate a self-signed certificate for *.org, or for just *? Fob

One of my students suggested the right way to handle this one: certify the IP address rather than the name. This should get rid of the host mismatch dialog for whatever hosts are attached. I haven't tried it yet, though. Fob

This was exactly what I was looking for: a site that does SSL and non-SSL, and another site that was strictly non-SSL.


Works a treat. I knew it was possible because a Plesk server install does it with no problem allowing multiple sites serving SSL on the same IP, and that's just a hacked Apache config surely?

Anyway, this works a treat. I used my server IP instead and it still works great. Many thanks.

I totally agree about the encrypted-but-not-trusted issue. I need to offer my users SSL control panel and webmail access to mitigate against wi-fi session hijacking. To do so I either buy one cert for a common domain or force SSL on https://webmail.[domain.tld]

The latter is much more desirable. Users, bless 'em, can remember their own domain but are hard pressed to remember anything else. The simpler the better, and as long as they know that it's their domain that they are accessing, they don't care about the cert not being trusted.

Although I completely understand Apache's concern regarding users getting too used to just clicking through on any old cert, the issue here is entirely to do with keeping access encrypted.

So I'm all for keeping this way open and free.

Glad it worked for you.

I wish the browser developers would support the Diffie-Hellman Ephemeral (DHE) mode of TLS. This would be the right way to handle this kind of case. It looks like Firefox just might, but I'm not sure how to test it, since I don't see how to get Apache to do the other end offhand. Bleah. Fob

Cheers, needed to set up a test PC and couldn't add extra IPs due to a company-locked-down XP box. This solution was sweet, although I still get told off by Apache: "[warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!" It's only a warning, though, and all 3 test domains work just fine...

Thanks again!

Thanks for the read. This info is hard to come by for some reason.

My question is:

Why won't Apache support multiple identities on the same IP:PORT?

My guess is either, like the above poster said, something about money, or a "legitimate reason". Ever notice how ISPs whore out IP addresses given to customers' VDS/dedicated servers?

The answer may also be that some developers think having multiple identities on the same IP is absurd. I personally can't figure out the connection between unified web servers (having one ip for who knows how many sites/services) and authentication.

My argument is that even though 2 sites with 2 unique certs have 2 unique IPs, they are not guaranteed to be on 2 physical machines. Technically there is no limit to how many IPs a server may be allotted.

Having an HTTPD act in this way is IMHO ridiculous.

Doesn't IIS have this kind of support?
Why do we even have to put up with this kind of crap?

I'm tempted to make my own damn httpd just to spite this nonsense.

The problem with supplying multiple SSL certificates (one per virtual host) is this:

The SSL session is initialised before the HTTP protocol commands are sent. This ensures that all HTTP commands are protected by SSL.

The command that tells Apache which host has been requested is an HTTP command; this means that Apache must select the correct SSL certificate before it knows which host to use to choose the certificate.

One solution is to use one SSL certificate with a SubjectAltName attribute, but this only really works for internal use, since no CA will generate a certificate with a number of unrelated SubjectAltName records.
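For internal use, generating such a multi-name self-signed cert is a one-liner these days; a sketch with openssl (the hostnames are hypothetical, and the -addext flag needs OpenSSL 1.1.1 or newer):

```shell
# Self-sign one cert carrying several names in its SubjectAltName list.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout san.key -out san.crt -days 365 \
  -subj "/CN=www.example.com" \
  -addext "subjectAltName=DNS:www.example.com,DNS:mail.example.com,DNS:wiki.example.com"
# Show the names baked into the cert:
openssl x509 -in san.crt -noout -text | grep -A1 'Subject Alternative Name'
```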

If you do want to find out more, check out these two pages:

easy-rsa with subjectAltName support

easy-rsa with subjectAltName support (update)

It's good to know the state of SubjectAltName; it would sure be nice if cert vendors supported it, but then again it would be nice if there was no such thing as a cert vendor. Smile

At any rate, modern browsers support the Server Name Indication (SNI) extension to the TLS standard, which permits the server to find out what virtual hostname is being requested before TLS negotiates its certificates. Apache's mod_gnutls can handle the server side, and I believe there's native Apache support for it now also. Fob
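With SNI in play, each *:443 vhost can carry its own cert; a sketch (hostnames and paths invented, and an SNI-capable mod_ssl or mod_gnutls assumed):

```apache
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/www.crt
    SSLCertificateKeyFile /etc/apache2/ssl/www.key
</VirtualHost>

<VirtualHost *:443>
    ServerName mail.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/mail.crt
    SSLCertificateKeyFile /etc/apache2/ssl/mail.key
</VirtualHost>
```

An SNI-aware browser sends the hostname in its ClientHello, so Apache can pick the matching vhost and cert before the handshake completes.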

Session caching somehow fixes this problem (I think)

mod_gnutls allows Apache to communicate with the browser using TLS:

  • Support for SSL 3.0, TLS 1.0 and TLS 1.1.
  • Support for client certificates.

I think it may be part of the evolution of these sorts of things.

Thanks much for the pointer to mod_gnutls. The mod_gnutls Server Name Indication (SNI) support solves the fundamental problem that prevents the solution I propose in this blog post from being ideal. Without SNI Apache can't decide which name-based host certificate to present until after it has already negotiated the TLS connection, at which point it is too late. With SNI Apache can find the appropriate certificate before TLS initialization, and present that certificate during the TLS negotiation process.

At least in theory. I'll try it out RSN.

It looks like mod_gnutls also has support for SRP passwords, which I'm not sure I would use, and for DHE key negotiation, which I'm sure I would use if browsers support it yet. I need to check this soon too.

I'm also curious about the OpenPGP support, although I don't understand it very well yet. Soon.

Interesting times. Thanks again. Fob


Just stumbled upon this page. This 'dummy' VirtualHost, what does it look like?

Just put this in my config?

(the dummy)

ServerName of all the webstuff ('real' SSL site) Regards.

I'm not quite understanding your question. There's a default VirtualHost: that's just the first one listed in your config files. There's nothing terribly special about it, except that it's the one that people will get if Apache can't find any information (such as a ServerName directive that matches the URL) that says they should get some different one.

With this plugin for Firefox there is now a very good reason to do configurations like this.

I have been testing it on a few websites, and it works quite well.

PS: The CAPTCHA is ridiculous. It has taken me over 20 tries to find one I can actually read.

Thanks for the pointer to the Perspectives project. That looks really interesting, and fits well with some ideas I've had over the last couple of years.

Sorry about the graphic CAPTCHA problem. I had easy text CAPTCHAs for a long time, but finally folks seemed to be either auto-cracking them or just plain cracking them. Either way, I felt forced to return to graphics, which are admittedly bad. I'll try to tune the parameters to make them a bit easier to solve, though. Fob

I just looked at the generated CAPTCHAs, and the config had got screwed up at some point. Those were really unreadable! My apologies, and thanks much for your patience in pointing out the problem to me. Fob

Thanks for this it worked for me too. Now I run two virtual hosts on port 80 and one on 443, and the secure host doesn't accept unsecure connections at all.

I am going to try to use this in combination with a Wildcard SSL certificate...

You can ship SSL certificates for free with some authorities like

... your captcha is very very secure even for a human ... Sad

I don't have a Windows install handy to test on, so I'd be curious whether SmartCom got a root cert into IE. They seem to have gotten one into Iceweasel, which is pretty cool.

I'll try signing up for a SmartCom cert and see how it goes. Thanks much for the pointer! Fob

You cannot have encryption without authentication. We use encryption first of all to fight man-in-the-middle attacks. But if the man in the middle installs a reverse proxy and redirects all the traffic for your site to his proxy, you will have an encrypted connection between the browser and the reverse proxy, and an encrypted connection between the reverse proxy and your site, but the traffic will be unencrypted on the proxy itself. Everyone (the client and the server) will falsely think that the data is unaltered and private.

The conclusion is that a self-signed certificate on a web site is insecure. The only way to make it secure is, when you accept the self-signed certificate, to check by other means that the certificate is valid for that site (e.g. calling the webmaster of the site and asking for the fingerprint of the certificate).
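The out-of-band fingerprint check is easy to script on both ends with openssl (filenames and CN here are hypothetical):

```shell
# Generate a throwaway self-signed cert (hypothetical CN) ...
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=www.example.com"
# ... and print the fingerprint the webmaster would read out over the phone:
openssl x509 -in server.crt -noout -fingerprint -sha256
```

The visitor compares that string against the one their browser shows in the certificate-details dialog before accepting.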

It is true that a self-signed cert or a standard Diffie-Hellman key exchange by itself will not protect you from a man-in-the-middle attack. However, this is not the principal threat to web clients and servers today.

Consider the normal workflow of a typical web user. The user types or clicks a URL, which causes their local DNS client to query the DNS system for a corresponding IP address. If this lookup is not performed safely and correctly, there are opportunities for a host of security problems; for the moment, let's assume that this lookup is performed properly, either through some kind of DNSSEC or just because the user's trust path hasn't been subverted. To the best of my knowledge, in 20 years of converting hostnames to IP addresses I have never been maliciously given a bogus IP by an attacker.

Once the user has a valid IP address for the service they are trying to reach, mounting a man-in-the-middle attack against a self-signed cert or EDHE becomes quite difficult. In particular, the proxy server approach you describe seems pretty much infeasible in practice; it would require either quite broad access by the attacker to protected Internet infrastructure or some particularly fortuitous and clever set of TCP/IP tricks of which I am unaware.

So, if you believe that there is no grey area between "secure" and "insecure", which is certainly a valid position for you to take, then I agree with you that a self-signed cert or an EDHE exchange is insecure.

I, on the other hand, believe that there is such a thing as relative security. For most kinds of transactions in which I participate, I am willing to assume the small risk of MITM in exchange for the ease of creating an SSL connection using self-certs or EDHE.

Remember that typically the only alternative is to run in-clear except for transmitting passwords using the fairly flimsy HTTP digest auth protocol. In the worst case, you'll be faced with using the HTTP basic auth, which doesn't even protect the passwords. Don't even get me started on OpenID security. (I'm serious. Don't get me started.)

It would be nice to at least have a secure way of validating a DNS mapping. With that and ubiquitous EDHE, I think we'd be ready to secure all but the most critical transactions on the web. Fob

Hi Bart,

Just to be sure: are you "just" trying to create several SSL/TLS-enabled virtual hosts on the same box and do you put up with the browser warning that the single site certificate does not match the URL host name?

I spent some time understanding exactly what you tried to do, and did not assume that your trick would bypass the common "one SSL site per IP address" limitation without (importantly) setting off any browser alert.

After some experimentation I found out that I can indeed get more than one SSL site functioning, if I do accept the browser security exception for certificate mismatch.

This is perfect for limited deployment but of course not good enough for anything production. No problem, as long as this limitation is clear.


Yes, the way I described setting things up here, you will still get cert mismatch warnings in your browser. As I explain in some of the other comments, this is caused by a fundamental limitation of the current TLS protocol (the cert is checked by the browser before any transaction occurs, so the server has no way to know which URL the browser is trying to access) that is hard to work around. There's an alternate protocol floating around out there that removes this limitation, but I'm not sure what the state of it is currently.

We've talked in the past about some tricks like signing the IP or signing with DNS wildcards, but I never got around to playing with these. I would be curious if others have. Fob

Just wanted to say thanks for giving me the final piece: Setting up your first vhost (I use localhost as my first vhost) with both port 80 and 443. After that they all fall into place.

Just to reiterate one particular point some may not have gotten: this isn't a way of cheating ('cause you will still be warned that the certificate is NOT authentic), but for a web developer who needs to reproduce a production environment as closely as possible on his development machine, this is the trick.

I have hundreds of sites I've worked on or currently maintain; some are commerce solutions with shopping carts (for example), and I need them to switch from http to httpS. This is the way to go. Since only very few people ever see the existence of this particular network, the authentication is not needed, 'cause we all know who we are.

Support for SSL Vhosts has nothing to do with money.

The problem is that the HTTP request header that indicates which hostname you're visiting is part of what's encrypted by SSL. Since every hostname (usually) has a different certificate, the web server can't know which cert to present until after the connection has been established (which requires a certificate).

A chicken/egg problem...

The solution is to invent a protocol where the browser indicates what hostname it's accessing before the SSL handshake occurs, but that has security implications too... and you can't encrypt the hostname securely without using a certificate (to prevent data leakage from "man in the middle" attacks)... So that's another chicken/egg problem.

The only real exception is when wildcard certs are used, and the method documented above works just fine for that.

Lack of support for TLS vhosts may have a little to do with money. As I keep saying, EDHE support would be sufficient for most TLS applications; however, then the revenue obtained in exchange for placing root certs in the shipped browser would presumably decrease.

I don't understand the "security implications" of the client presenting the hostname before the TLS negotiation? It seems to me that this isn't really going to be much of a secret anyhow; how could an adversary use it to advantage? Fob

I think the security issue wouldn't be in knowing the target website (I agree with you on that), but rather in the INTEGRITY: if you don't protect that part with a certificate, you can't be sure that the first unprotected part gets to the server as it is. It's a minor issue, since no content disclosure is implied, but it can result in a DoS.


I suppose the DOS is a bit worse than with a separately-certified link, since more of the transaction will happen before any intruder is booted—is this what you're thinking? Fob

I'd like to set up a development server for two applications in such a way that both applications serve both secure (SSL) pages and non-secure pages. Is it possible to use your solution for this? I've tried a bunch of things and I can't seem to get it to work.

Right now, one of the vhosts works fine in SSL and non-SSL mode, but the second site will only work in non-SSL mode - all requests on the SSL channel go to the first site. Any advice would be greatly appreciated.


Actually, I think I must be missing something in your description (like what the default ssl vhost should look like). Any chance of a small example config?

Here is a gzipped tarball containing a four-host config—two secure, two unsecure—that I set up as a sample behind my firewall to verify that it all works. It does. Apache 2.2 on Debian unstable.

It's really easy to make a mistake in the complex setup of Apache's SSL. If you do screw up, Apache typically does something other than give you an error message, much less a clear one.

Note that Iceweasel 3 tries hard to give the scariest messages in the world before letting you browse against a site with a self-signed cert, but it eventually does give up if you keep clicking through. If they'd just support DHE, I'd have a lot more sympathy with this bogus behavior. "No legitimate site" my behind. In any case, the security is still "better" than an unencrypted http connection. Fob

I have a fairly simple solution to hosting several secure hosts on one ip address without having to enter the port number in the address. The certs will work without complaining. see

Yes, your technique of rewriting a URL to contain a port number works, and people use it. There are quite a few niggling little issues with serving http on a different port than port 80: firewalls sometimes get in the way, proxies must be properly configured, etc. But as you point out it does avoid the browser certificate whininess stuff.

Thanks much for the note. Fob

I don't understand how it works; could you please explain step by step?

I understand now after reading this website:

Glad you got it to work. Thanks much for the nice link. Fob

I've been toying with the ideas presented here:

The basic idea is that you can have multiple names present in a single certificate. This seems to work right now (06MAY2009) in more browsers than the up-and-coming SNI support.

All this is a good exercise for test/dev/staging servers, but I'm thinking that multiple IPs (one per vhost/app hosted), each with its own real cert, is going to work best and give the cleanest experience to the user.


Thanks much for the really informative link!

Obviously, if you can afford and pull off a separate IP and commercially-signed cert for each vhost, that's the way to go. I'm putting up a lot of stuff for free in environments with limited IP addresses, so I'm happy to find alternatives. Fob

Someone above pointed out that wildcard certificates can be used to obtain a fully trusted SSL connection between your users and multiple Apache SSL vhosts. This is true; however:
1. Wildcard SSL certs are quite expensive.
2. They don't work that well on mobile devices (phones).
You could get an SSL certificate provider that lets you use Subject Alternative Names (SANs) in the SSL certificate. That gives you a somewhat limited wildcard certificate, i.e. the cert will work for all SANs that you have specified in it. Most providers have it now, and the number of SANs that they let you have in the certs varies from 5-40 from what I can tell.

Apache 2.2.8's mod_ssl (compiled against OpenSSL 0.9.9) now supports SNI, which then lets you specify ONE CERTIFICATE PER VHOST, which of course is optimal. Browser-wise, SNI is supported in:
* Opera 8+
* IE 7+
* Firefox 2+

Just my $.02


A much-belated thanks for your information. Great news that IE and Firefox now support SNI! I assume that Safari does too.

The SAN / multi-CN stuff is good to be reminded of also. Does it really address the problem of multiple VHOSTs per IP, though? I'm guessing it would turn off some of the whining machine in the browser, but that the initial check would still fail? Fob

What about a site that is on a network behind nat that is published on the internet and also accessed from the intranet?

It will have two distinct names but we can only present one certificate....

Sorry if this does not make much sense as I am trying to learn about apache.

If your website is behind NAT, it's normally not reachable by external IP, so there's not much you can do certs or otherwise. Fob

Your solution only makes sense to people who do not care at all about security, because it is totally insecure. So I wonder, why you want to use SSL at all, when you do not care about security?

Second: your solution never was a "secret"; it is just not the way someone should go.

The way to go is using multiple IP addresses. This is dead simple. If your provider does not support multiple IPs for the same machine, you should change the provider.

Third: there is nothing bad about self-signed certs. The only problem is that you have to give your users a secure way to trust your cert. You could do this by letting them install your CA cert into their browser, getting it via a trusted connection. The problem with doing this trustfully: it takes some effort on your side for each and every user, which makes this solution usable only with a very small number of users.

Fourth: Never ever tell your users they should trust a certificate by clicking "Accept", when the browser tells them, they should not. Security is not only a technical issue but also a social issue.

What you suggest is the same as telling everybody, they should hide their door keys under the doormat because no thief will suspect it there.

Wondering if you really know what you're talking about…

Security systems in the real world always represent a tradeoff between degree of security and degree of convenience. There are some websites, for example, where the security is almost nonexistent because convenience is quite important to the providers and users. There are other sites, such as (hopefully) your bank's, where it is acceptable to make the site quite difficult to provide and use in order to achieve a high level of security.

The claim that name-based vhosts sharing an IP address is "totally insecure" is either ignorant or disingenuous. Having multiple SSL/TLS name-based virtual hosts share the same certificate still protects the traffic against third-party eavesdroppers—by far the easiest and most common kind of attack against a website. This approach is also about as resistant as separate IP-based vhosts to all man-in-the-middle attacks that I am aware of. In fact, I would be quite interested to hear of a realistic scenario in which there is a big problem here. Once browsers and servers dependably implement DHE (do they already by now?) we can dispense with the fiction of certs altogether and just provide cheap encryption as protection against eavesdropping directly.

You claim that "Your solution never was a 'secret', it is just not the way someone should go." On the contrary, at the time I originally posted not only was it not common knowledge how to set this up, I was explicitly told by several people in person and by written documentation that it was not a possible configuration. See my notes here for all the details. I believe that the method I taught has since become more widely known, but a lot of people still seem to find the security tradeoff it offers useful to them.

You claim that "The way to go is using multiple IP addresses. This is dead simple. If your provider does not support multiple IPs for the same machine, you should change the provider." As I'm sure you're aware, the limitation of multiple static IPs is not typically some defect in the provider. It is that additional static IPs represent a significant extra expense—one that folks may not be willing to pay for some kinds of websites. Again, this is an engineering tradeoff that folks can make for themselves.

I have no idea why convincing clients to click a link that installs a self-signed cert they supposedly are getting off my website into a browser is supposed to be more secure than just having them accept whatever cert the browser presents. Either way they run a number of risks, but the second way at least they won't be carrying around a cert that can silently be accepted by other malicious websites.

You suggest that I should "Never ever tell your users they should trust a certificate by clicking 'Accept', when the browser tells them they should not. Security is not only a technical issue but also a social issue." This advice is good only as long as the browser is giving good advice about what is risky. In this situation, the advice is not good, and telling users they should blindly trust it is doing them a disservice. If you want me to trust your security, don't lie to me. If the browser's advice was so reliable, they wouldn't provide an "Accept" button—but in the current environment users would rightly move to one that did.

"What you suggest is the same as telling everybody, they should hide their door keys under the doormat because no thief will suspect it there." No. No it is not. That is an incredibly bad analogy. What is the "door key" here? What is the "doormat"? The analogy doesn't even make sense, and I have to believe that you're quoting it from some other context without understanding it.

I hate argument by analogy, especially the "burglars and houses" analogy that has served us so very, very poorly in the computing community. However, if you were to force me to go there, I would say that my advice is something more like "If you own a number of storage lockers that store things of little value, it is OK to give clients of any of them a key that works for all of them. This is especially true if you have an individualized lock of some kind on an inner door of each locker."

My colleagues and I are university professors with strong industry backgrounds and a solid grounding in computer security. We have spent years and years thinking carefully about the security consequences of our systems actions. I see no evidence of similar thought from you. To be blunt, you seem to me to be the kind of person that gets confused, then confuses the less-knowledgable into using all kinds of cargo-cult computer "security" measures that are minimally secure while being maximally inconvenient.

Please stop. Fob

My question is: why not configure a default virtual host on 443, redirect the others each to a relevant free port, and give each port its own dir and cert etc. configs?

(sorry for my poor english)

I don't know of any way to do as you suggest except to include an explicit port number as part of the URL given to users. This has a number of problems; it's hard to remember, it's confusing to users, and most importantly it doesn't work through a lot of firewalls, routers and such that "know" that HTTPS traffic travels only on port 443.

I don't think there's any reason to redirect at all in the scenario you suggest? Just configure Apache to offer an HTTPS connection on a different port for each service, and you're done. You can give each its own cert and such.
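That per-port arrangement might be sketched like this (the ports, hostnames, and paths are invented):

```apache
Listen 443
Listen 8443

<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/www.crt
    SSLCertificateKeyFile /etc/apache2/ssl/www.key
</VirtualHost>

<VirtualHost *:8443>
    ServerName webmail.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/webmail.crt
    SSLCertificateKeyFile /etc/apache2/ssl/webmail.key
</VirtualHost>
```

Users of the second service would then browse to https://webmail.example.com:8443/, with the firewall caveats noted below.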

It's not a bad setup in some situations, but like I say there are some issues with it for general deployment. YMMV. Fob

I'm using self-signed certs for a site run out of my house. I wanted a way to get at pages/info and school assignments. Why should I pay for a cert? The SSL redirects and multiple VirtualHosts were a problem, though:

  • I do web development, so I need multiple virtual hosts.
  • I have a default site that I generally want people to go to.
  • Anything having to do with school I want redirected to a secured and passworded-off location. I'm pretty sure my profs would not be happy about their assignments being open to the Googles...

So, after a look around and some head banging, here's how I did it:

  1. Use name-based Virtual Hosts.
  2. Do the *:80 and *:443 as described above.
  3. Default sites go at the top of the list!
     3a. The first VirtualHost is the site I want as the "default" secured site.
     3b. Directly below the first default site is the default insecure site.
  4. Each VirtualHost then follows.
  5. In each VirtualHost, use DocumentRoot, ServerName, and ServerAlias. Use ServerAlias liberally.
  6. For the site that you want to *always* be secure:
     6a. List the secure site first (this is important!)
     6b. List the insecure site immediately below.

In the insecure VirtualHost, take out the DocumentRoot and replace it with this: Redirect permanent / https://(name of site)/

Here's an example:

  NameVirtualHost *:80
  NameVirtualHost *:443

  # ############################################################
  # foobar: my own sandbox for education and web development
  #
  <VirtualHost *:443>
      DocumentRoot /some/directory/foo
      ServerName
      ServerAlias foo
      ServerAlias
      ServerAdmin
      SSLEngine on
      SSLCertificateFile /path/to/the/server.crt
      SSLCertificateKeyFile /path/to/the/server.key
      # Require https:// to get in here
      SSLRequire %{SSL_CIPHER_USEKEYSIZE} >= 128
      <Directory /some/directory/foo>
          Options Includes FollowSymLinks
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>
  </VirtualHost>

  #
  # handles unsecure requests to foobar
  #
  # sends port 80 requests to https://
  <VirtualHost *:80>
      # DocumentRoot /some/directory/foo
      ServerName
      ServerAlias foo
      ServerAlias
      ServerAdmin
      Redirect permanent /
  </VirtualHost>

This is really helpful. Thanks much for posting it. Fob