May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

WEB Servers


The smallest web server available is probably nweb  which is only 200 lines of C code (static pages only).

HTTP::Server::Simple is a simple web server written in Perl. It has no non-core module dependencies, which makes it suitable for building a standalone HTTP-based UI to your existing tools.

For Python there is Python Community Server

Windows XP Professional contains Internet Information Services (IIS) version 5.1. IIS 5.1 includes Web and FTP server support, as well as support for Microsoft FrontPage transactions, Active Server Pages, and database connections. Available as an optional component, IIS 5.1 is installed automatically if you upgrade from versions of Windows with PWS installed.

Apache can be installed with Cygwin.



Old News

[Jun 12, 2021] Nginx is Now the World's #1 Web Server, Overtaking Apache by Bobby Borisov

Jun 11, 2021

W3Techs announced that after many years of steady growth in market share, Nginx is now the most popular web server in the world, edging out Apache HTTP Server.

Back in 2009, Nginx had a market share of 3.7%, Apache had over 73%, and Microsoft-IIS had around 20%, but the web server field today has changed significantly. According to Netcraft's statistics, Nginx now leads with just over one third of the market, at 33.8%. Apache is basically head-to-head with it at the moment, but declining; the gap between Apache and Nginx was still 6.6% one year ago.

In addition, according to W3Techs' statistics, the top three web servers are Nginx (34.1%), Apache (33.2%), and Cloudflare Server (18.7%). Cloudflare Server at rank 3 is particularly interesting in that context, as it is derived from Nginx.

Nginx has dominated the high-traffic part of the market for a long time. It became the most used web server among the top 1000 sites in 2013, and that hasn't changed since then. It is now used by 47.1% of the top 1000 sites and by 44.6% of the top 10k sites, clearly ahead of the competition. It is gaining market share at the moment mostly from Apache and Microsoft-IIS, but at the same time it is losing sites to Cloudflare Server and to LiteSpeed Web Server.

Congratulations to Nginx on reaching this milestone. With so many websites and companies relying on its performance and stability, it has certainly become a very significant part of the Internet infrastructure.

The History of Nginx

Nginx was originally developed in Russia, and the original motivation for creating it wasn't nearly so grand. Back in 2001, Nginx's creator Igor Sysoev was trying to solve a problem at work: his web servers were having trouble keeping up with ever-increasing numbers of requests. The challenge was referred to at the time as the C10K problem: handling 10,000 simultaneous client connections.

Inspired by the design of Unix and other classic distributed systems, Igor developed an event-driven architecture that is so lightweight, scalable, and powerful it's still at the heart of Nginx today.

Nginx is built to offer low memory usage and high concurrency. Rather than creating new processes for each web request, it uses an asynchronous, event-driven approach where requests are handled in a single thread.
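As a rough illustration of this single-threaded, event-driven model, here is a generic sketch in Python using the standard selectors module (this is an illustration of the general technique, not Nginx's actual code; names like serve_one_event are invented for the example):

```python
import selectors
import socket

# One thread multiplexes many connections instead of forking a process
# (or spawning a thread) per request -- the core idea behind Nginx's design.
sel = selectors.DefaultSelector()
srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # ephemeral port, for the sketch only
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ, data=None)

def serve_one_event():
    # Wait for any registered socket to become ready, then handle it
    # without blocking the single thread.
    for key, _ in sel.select(timeout=1):
        if key.data is None:                 # listening socket: accept
            conn, _addr = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data=b"client")
        else:                                # client socket: read and reply
            key.fileobj.recv(4096)           # request bytes (ignored here)
            key.fileobj.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
            sel.unregister(key.fileobj)
            key.fileobj.close()
```

A real event loop would also track partial reads and writes per connection; the point here is only that one thread services every socket as events arrive.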

Use Cases

Though Nginx became famous as the fastest web server, the scalable underlying architecture has proved ideal for many web tasks beyond serving content. Because it can handle a high volume of connections, Nginx is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to slower upstream servers: anything from legacy database servers to microservices.

Nginx is also frequently placed between clients and a second web server, to serve as an SSL/TLS terminator or web accelerator.

Dynamic sites, built using anything from Node.js to PHP, commonly deploy Nginx as a content cache and reverse proxy to reduce load on application servers and make the most effective use of the underlying hardware. One popular combination, for example, is to route requests to FastCGI servers that run applications built with various frameworks and programming languages such as PHP, with Nginx talking to PHP via PHP-FPM.
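A minimal sketch of that combination (document root and socket path are assumptions; adjust them to your PHP-FPM setup):

```nginx
server {
    listen 80;
    root /var/www/html;          # placeholder document root
    index index.php;

    # Hand .php requests to a local PHP-FPM pool over FastCGI
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed socket path
    }
}
```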

[Jun 12, 2021] A beginner's guide to creating redirects in an .htaccess file - Enable Sysadmin

Jun 10, 2021

A beginner's guide to creating redirects in an .htaccess file

Use the .htaccess file to manage web sites on shared web hosting platforms.

Posted: June 9, 2021 | by Abdul Rehman


Have you ever felt a need to change the configuration of your website running on an Apache webserver without having root access to the server configuration files (httpd.conf)? This is what the .htaccess file is for.

The .htaccess file provides a way to make configuration changes to your website on a per-directory basis. The file is created in a specific directory that contains one or more configuration directives that are applied to that directory and its subdirectories. In shared hosting, you will need to use a .htaccess file to make configuration changes to your server.


Common uses of .htaccess file

The .htaccess file has several use cases. The most common examples include:

  1. Redirecting URLs
  2. Loading a custom error page (such as a 404 page)
  3. Forcing the use of HTTPS instead of HTTP

When not to use .htaccess?

The .htaccess file is commonly used when you don't have access to the main server configuration file httpd.conf or to the virtual host configuration, which typically happens only on shared hosting. You can achieve all of the above-mentioned use cases by editing the main server configuration file(s) (e.g., httpd.conf) or virtual host configuration files, so you should not use .htaccess when you have access to those files. Any configuration that you need to put in a .htaccess file can just as effectively be added in a <Directory> section in your main server or virtual host configuration files.

Reasons to avoid using .htaccess

There are two reasons to avoid the use of .htaccess files. Let's take a closer look at them.

First: Performance. When AllowOverride is set to allow the use of .htaccess files, httpd will look for .htaccess files in every directory starting from the parent directory. This causes a performance impact whether you're actually using .htaccess files or not, because the .htaccess file is loaded every time a document is requested from a directory.

To have a full view of the directives that it must apply, httpd will always look for .htaccess files starting with the parent directory until it reaches the target sub-directory. If a file is requested from directory /public_html/test_web/content, httpd must look for the following files:

  1. /.htaccess
  2. /public_html/.htaccess
  3. /public_html/test_web/.htaccess
  4. /public_html/test_web/content/.htaccess

So, four file-system accesses were performed for each file access from the sub-directory content, even if none of those files are present.

Second: Security. Granting users permission to make changes in .htaccess files gives them full control over the server configuration of that particular website or virtual host. Any directive in the .htaccess file has the same effect as one placed in the httpd configuration file itself, and changes made to this file are live instantly without a need to restart the server. This can become risky in terms of the security of a webserver and a website.

Enable the .htaccess file

To enable the .htaccess file, you need to have sudo/root privileges on the server.

Open the httpd configuration file of your website:


You should add the following configuration directive in the server's virtual host file to allow the .htaccess file in the DocumentRoot directory. If the following lines are not added, the .htaccess file will not work:

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>

In the case of shared hosting, this is already allowed by the hosting service providers. All you need to do is to create a .htaccess file in the public_html directory to which the service provider has given you access and to which you will upload your website files.

Redirect URLs

If your goal is to simply redirect one URL to another, the Redirect directive is the best option you can use. Whenever a request comes from a client on an old URL, it forwards it to a new URL at a new location.

If you want to do a complete redirect to a different domain, you can set the following:

# Redirect to a different domain (the target URL was lost from the original
# article; example.com below is a placeholder for your own domain)
Redirect 301 "/service" "https://example.com/service"

If you just want to redirect an old URL to a new URL on the same host:

# Redirect to a URL on the same domain or host
Redirect 301 "/old_url.html" "/new_url.html"
Load a custom 404 Error page

For a better user experience, load a custom error page when any of the links on your website point to the wrong location or the document has been deleted.

To create a custom 404 page, simply create a web page that will work as a 404 page and then add the following code to your .htaccess file:

ErrorDocument 404 /error/pagenotfound.html

You should change /error/pagenotfound.html to the location of your 404 page.

Force the use of HTTPS instead of HTTP for your website

If you want to force your website to use HTTPS, you need to use the RewriteEngine module in the .htaccess file. First of all, you need to turn on the RewriteEngine module in the .htaccess file and then specify the conditions you want to check. If those conditions are satisfied, then you apply rules to those conditions.

The following code snippet rewrites all the requests to HTTPS:

# Turn on the rewrite engine
RewriteEngine On

# Force HTTPS and WWW (example.com is a placeholder for your own domain)
RewriteCond %{HTTP_HOST} !^www\.(.*)$ [OR,NC]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

Let's go through each line.

RewriteEngine on turns on the RewriteEngine module. This is required; otherwise, conditions and rules won't work.

The first condition checks whether www was entered. The [OR,NC] flags mean "or" and "no case": the condition still matches even if the entered URL has a mix of upper- and lowercase letters.

Next, it checks whether the HTTPS protocol was already used. %{HTTPS} off means that HTTPS was not used.

When the RewriteCond is satisfied, we use RewriteRule to redirect the URL to HTTPS. Note that in this case, all URLs will be redirected to HTTPS whenever any request is made.

[Aug 02, 2020] Lighttpd Web Server

Aug 02, 2020

Lighttpd is a free and opensource web server that is specifically designed for speed-critical applications. Unlike Apache and Nginx , it has a very small footprint (less than 1 MB ) and is very economical with the server's resources such as CPU utilization.

Distributed under the BSD license, Lighttpd runs natively on Linux/Unix systems but can also be installed in Microsoft Windows. It's popular for its simplicity, easy set-up, performance, and module support.

Lighttpd's architecture is optimized to handle a large volume of parallel connections which is crucial for high-performance web applications. The web server supports FastCGI , CGI , and SCGI for interfacing programs with the webserver. It also supports web applications written in a myriad of programming languages with special attention given to PHP , Python , Perl , and Ruby .

Other features include SSL/TLS support, HTTP compression using the mod_compress module, virtual hosting, and support for various modules.

[Aug 01, 2020] Nginx Web Server

Aug 01, 2020

Pronounced as Engine-X, Nginx is an opensource, high-performance, robust web server that also doubles up as a load balancer, reverse proxy, IMAP/POP3 proxy server, and API gateway. Initially developed by Igor Sysoev in 2004, Nginx has grown in popularity to edge out rivals and become one of the most stable and reliable web servers.

Nginx draws its prominence from its low resource utilization, scalability, and high concurrency. In fact, when properly tweaked, Nginx can handle up to 500,000 requests per second with low CPU utilization. For this reason, it is an ideal web server for hosting high-traffic websites and beats Apache hands down.

Popular sites running on Nginx include LinkedIn , Adobe , Xerox , Facebook , and Twitter to mention a few.

Nginx is lean on configuration, making it easy to tweak. Just like Apache, it supports multiple protocols, SSL/TLS, basic HTTP authentication, virtual hosting, load balancing, and URL rewriting, to mention a few. Currently, Nginx commands a market share of 31% of all websites hosted.

[Oct 21, 2017] Apache2 mod_rewrite and %{REQUEST_FILENAME} - Sysadmandine

February 23, 2010 admin
Oct 21, 2017

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule ^/(.*)$ /index.php?rt=$1 [L,QSA]

This means : if the requested file is not a real file, and isn't a directory, and isn't a symlink, then redirect to index.php.

I was really surprised to discover that it doesn't work, though everybody seems to use this syntax! I checked my Apache version: Apache/2.2.9 (Debian); nothing special with this one, I guess.
To understand what Apache was doing with my rewrites, I activated the rewrite log :

RewriteLog /var/log/apache2/rewrite.log

Here's what I got (the interesting part, cause I got a looot more !) :

[blah blah blah] (2) init rewrite engine with requested uri /toto.htm
[blah blah blah] (3) applying pattern '^/(.*)$' to uri '/toto.htm'
[blah blah blah] (4) RewriteCond: input='/toto.htm' pattern='!-f' => matched
[blah blah blah] (4) RewriteCond: input='/toto.htm' pattern='!-d' => matched
[blah blah blah] (4) RewriteCond: input='/toto.htm' pattern='!-l' => matched
[blah blah blah] (2) rewrite '/toto.htm' -> '/index.php?rt=toto.htm'

So Apache verifies only '/toto.htm' and not the whole path for "%{REQUEST_FILENAME}"? I thought it was the whole path. Let's verify in the doc.
From the Apache 2.0 documentation, by habit (because I used Apache 2.0 a lot more than Apache 2.2 until now):

REQUEST_FILENAME : The full local filesystem path to the file or script matching the request.

Hmm. But I use Apache version 2.2, so what do they say here:

REQUEST_FILENAME : The full local filesystem path to the file or script matching the request, if this has already been determined by the server at the time REQUEST_FILENAME is referenced. Otherwise, such as when used in virtual host context, the same value as REQUEST_URI.


REQUEST_URI : The resource requested in the HTTP request line. (In the example above, this would be "/index.html".)

Ok, I understand: I use virtual hosts (like everybody, uh?), so the real syntax for my needs is:

RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-l
RewriteRule ^/(.*)$ /index.php?rt=$1 [L,QSA]

This works even if it doubles the "/" between each variable (one / at the end of DOCUMENT_ROOT, and another at the beginning of REQUEST_FILENAME).

Here's the rewrite log showing that it works :

[blah blah blah] (2) init rewrite engine with requested uri /toto.htm
[blah blah blah] (3) applying pattern '^/(.*)$' to uri '/toto.htm'
[blah blah blah] (4) RewriteCond: input='/path/to/documentroot//toto.htm' pattern='!-f' => not-matched
[blah blah blah] (1) pass through /toto.htm

Now I can disable this log if I want to keep space on my disk.

I must admit I read the description of REQUEST_FILENAME in the Apache 2.2 docs several times before noticing that it was just the answer: too used to reading too fast! Thanks to this old post that made me re-read slower! 😉

[Feb 19, 2017] Caddy - A Lightweight HTTP/2 Web Server to Deploy and Test Websites Easily

Feb 19, 2017
A web server is a server-side application designed to process HTTP requests between client and server. HTTP is a basic and very widely used network protocol. We are all familiar with Apache HTTP Server.

Apache HTTP Server played an important role in shaping what the web is today. It alone has a market share of 38%. Microsoft IIS comes second in the list with a market share of 34%. Nginx and Google's GWS come in at numbers 3 and 4 with market shares of 15% and 2% respectively.

The other day I came across a web server named Caddy. When I tried out its features and deployed it for testing, I must say it is amazing: a web server that is portable and does not need any configuration file. I thought it was a very cool project and wanted to share it with you. Here we have given Caddy a try!

What is Caddy?

Caddy is an alternative web server that is easy to configure and use. Matt Holt, the project leader of Caddy, describes it as a general-purpose web server designed for humans, and probably the only one of its kind.

Features of Caddy
  1. Speedy HTTP requests using HTTP/2.
  2. A capable web server with minimal configuration and hassle-free deployment.
  3. TLS encryption ensures encrypted communication between applications and users over the Internet. You may use your own keys and certificates.
  4. Easy to deploy/use: just one single file and no dependency on any platform.
  5. No installation required.
  6. Portable executables.
  7. Runs on multiple CPUs/cores.
  8. Advanced WebSockets technology: interactive communication sessions between browser and server.
  9. Serves Markdown documents on the fly.
  10. Full support for IPv6.
  11. Creates logs in a custom format.
  12. Serves FastCGI, reverse proxy, rewrites and redirects, clean URLs, gzip compression, directory browsing, virtual hosts, and headers.
  13. Available for all known platforms: Windows, Linux, BSD, Mac, Android.
What makes Caddy different?
  1. Caddy aims at serving the web as it should be in the year 2017, not in the traditional style.
  2. It is designed to serve not only HTTP requests but also humans.
  3. Loaded with the latest features: HTTP/2, IPv6, Markdown, WebSockets, FastCGI, templates, and other out-of-the-box features.
  4. Runs as an executable without the need to install it.
  5. Detailed documentation with minimal technical jargon.
  6. Developed keeping in mind the needs of designers, developers, and bloggers.
  7. Supports virtual hosts: define as many sites as you want.
  8. Suited for you, no matter if your site is static or dynamic. If you are human, it is for you.
  9. You focus on what to achieve, not how to achieve it.
  10. Support for the largest number of platforms: Windows, Linux, Mac, Android, BSD.
  11. Usually, you have one Caddyfile per site.
  12. Set up in less than a minute, even if you are not that computer-friendly.
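For a taste of that one-Caddyfile-per-site simplicity, here is a hypothetical Caddyfile using the v1-era syntax current around the time of this article (domain and paths are placeholders):

```
example.com

root /var/www/example
gzip
log /var/log/caddy/access.log
```

Running the caddy executable in the directory containing this file is enough to serve the site; no further installation or configuration is needed.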

[Dec 01, 2015] How to install Nginx as Reverse Proxy for Apache on Ubuntu 15.10

  1. Step 1 - Install Apache and PHP
  2. Step 2 - Configure Apache and PHP
  3. Step 3 - Install Nginx
  4. Step 4 - Configure Nginx
  5. Step 5 - Configure Logging
  6. Conclusion

Nginx or "engine-x" is a high-performance web server with low memory usage, created by Igor Sysoev in 2002. Nginx is not just a web server, it can be used as a reverse proxy for many protocols like HTTP, HTTPS, POP3, SMTP, and IMAP and as a load balancer and HTTP cache as well.

Apache is the most popular web server software, maintained by the open source community under the Apache Software Foundation. There are many add-on modules available for Apache, like WebDAV support or web application firewalls such as mod_security, and it supports many web programming languages like Perl, Python, and PHP through native modules or via CGI, FCGI, and FPM interfaces.

In this tutorial, I will install and configure Nginx as a caching reverse proxy for an Apache web server on Ubuntu 15.10. Nginx is used as the front end and Apache as the back end. Nginx will run on port 80 to respond to requests from a user/browser; the request will then be forwarded to the Apache server running on port 8080.
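The front-end/back-end arrangement described above boils down to an Nginx server block along these lines (a sketch only; the server_name and forwarded headers are illustrative choices, not taken from the tutorial):

```nginx
server {
    listen 80;                       # Nginx answers the browser here
    server_name example.com;         # placeholder domain

    location / {
        # Forward everything to Apache listening on port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The X-Forwarded-For and X-Real-IP headers let Apache log the real client address rather than 127.0.0.1.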

[Nov 19, 2009] nweb a tiny, safe Web server (static pages only)

See also micro_httpd which implements all the basic features of an HTTP server in only 150 lines of code.
Have you ever wanted to run a tiny, safe Web server without worrying about using a fully blown Web server that could cause security issues? Do you wonder how to write a program that accepts incoming messages with a network socket? Have you ever just wanted your own Web server to experiment and learn with?

Well, look no further -- nweb is what you need. This is a simple Web server that has only 200 lines of C source code. It runs as a regular user and can't run any server-side scripts or programs, so it can't open up any special privileges or security holes.

This article covers:

nweb only transmits the following types of files to the browser:

If your favorite static file type is not in this list, you can simply add it in the source code and recompile to allow it.

[Nov 18, 2009] nginx by Igor Sysoev

nginx [engine x] is an HTTP server and mail proxy server written by me (Igor Sysoev).

nginx has been running for more than five years on many heavily loaded Russian sites, including Rambler.
In March 2007 about 20% of all Russian virtual hosts were served or proxied by nginx.
According to the Google Online Security Blog, in June 2007 nginx served or proxied about 4% of all Internet virtual hosts.
2 of the Alexa US Top 100 sites used nginx in March 2008.
According to Netcraft, in December 2008 nginx served or proxied 3.5 million virtual hosts. And now it is in 3rd place (not counting the in-house Google server) and ahead of lighttpd.
According to Netcraft, in March 2009 nginx served or proxied 3.06% of the busiest sites.
According to Netcraft, in May 2009 nginx served or proxied 3.25% of the busiest sites.
Here are some of the success stories: FastMail.FM,

Security patches:

Development versions are nginx-0.8.27, nginx/Windows-0.8.27, the change log.
The latest stable versions are nginx-0.7.64, nginx/Windows-0.7.64, the change log.
The latest legacy stable version is nginx-0.6.39, the change log.
The latest legacy version is nginx-0.5.38, the change log.
The sources are licensed under 2-clause BSD-like license.

English Resources:

The Russian documentation.

Basic HTTP features:

Mail proxy server features:

Tested OS and platforms:

Architecture and scalability:

Other HTTP features:

Experimental features:

[Jul 16, 2008] httping 1.2.9 by Folkert van Heusden

About: httping is a "ping"-like tool for HTTP requests. Give it a URL and it will show how long it takes to connect, send a request, and retrieve the reply (only the headers). It can be used for monitoring or statistical purposes (measuring latency).

Changes: Binding to an adapter did not work and "SIGPIPE" was not handled correctly. Both of these problems were fixed.

[Feb 15, 2008] nginx 0.5.35 (Stable) by nuut

About: nginx is an HTTP server and mail proxy server. It has been running for more than two years on many heavily loaded Russian sites, including Rambler. In March 2007, about 20% of all Russian virtual hosts were served or proxied by nginx.

Changes: STARTTLS in SMTP mode now works. A bug that made some requests in HTTPS mode fail with a "bad write retry" error was fixed. The "If-Range" request header line is now supported. uname(2) is now used on Linux systems instead of procfs.

[Nov 25, 2006] David's blog/FastCGI becoming the new leader in server technologies?

Until now FastCGI was behind mod_php, java and mod_perl in terms of popularity among web server administrators and web developers. But times have changed and changed for good.

In the early days of web development when the CGI interface was the leader and web servers were quite slow, developers felt that they needed a faster server technology, that can be used to run their web applications on high-traffic web sites. The solution to the problem seemed obvious – the developers had to take their CGI-based code and put it into the web server process.

With this solution, the operating system didn't have to start a new process every time a request had been received, which is very expensive, and you could write your application with a persistent functionality in mind and ability to cache data between several different http requests.

These were the days when some of the most popular web server APIs were born: Internet Information Server's ISAPI, Netscape Server's NSAPI, and Apache's module API. This trend created some of the best-known and most often used technologies in web development, like mod_php, mod_python, Java servlets (and later JSP), and ASP. But the concept behind these technologies is not flawless. There are many problems with applications that run inside your average web server.

For example, mod_perl's high memory usage per child process can exhaust the available RAM, PHP's problems with threads can kill the whole web server, and many security problems arise from the fact that the most popular web server (Apache) can't do simple things like changing the OS user it executes the request as. For quite some time there have been workarounds, like putting a lightweight proxy server in front of Apache, installing third-party software for IIS, or using PHP's safe mode and open_basedir (Oh GOD!) on Apache, but these are not elegant and pose other problems of their own. Also, the hardware progress of the last few years has made the server modules obsolete.

In the meantime, while the server modules were gaining glory and fame, a little-known technology with a different concept and implementation was born. It was called FastCGI, and the basic problem it was designed to solve was making CGI programs run faster. Later, it became clear that FastCGI solves many other problems and design flaws that the server modules had.

How does FastCGI work?
FastCGI runs in the web server process, but doesn't handle the request itself. Instead, it manages a pool of so-called FastCGI servers outside of the web server process, and when a request arrives, the FastCGI manager sends the HTTP data through a socket to one of the available FastCGI servers to handle the request. This strategy is quite simple and has the following advantages:

In the beginning FastCGI was not so popular, because its use of external processes and communication through sockets required more resources on the host system. Today this is no longer the case: over the last few years hardware development has made huge leaps, and system memory is not so expensive anymore. These days many web servers have full support for FastCGI, and the trend is to migrate current web applications to run under it. In November, Microsoft announced support for FastCGI on IIS 5, IIS 6, and IIS 7 (Beta).
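On the Apache side, one modern way to hand requests off to an external FastCGI pool is mod_proxy_fcgi (available in Apache 2.4 and later; the PHP-FPM address below is an assumption):

```apache
# Requires mod_proxy and mod_proxy_fcgi to be loaded.
# Route every .php request to a FastCGI pool listening on port 9000.
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
```

Because the application runs in its own processes, it can run as a different OS user and be restarted without touching the web server, which is exactly the separation the article argues for.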

Plexus HTTP -- Perl-based server


Recently I needed a test-bed for scripts that generate HTML, and file access didn't suffice. As usual, the first attempt at awk-ing the request from a socket(1)-executed script soon grew big and ugly, with ever more special cases being added. As the need to support ISINDEX and FORMs came up, I rewrote the whole thing in Perl. Now the simple test aid has become a program that can be used in similar situations when you need an HTTP server quickly, without worrying about installing a CERN or NCSA server.

This one does not have all the features needed; in particular, it knows only about text, HTML and GIF files and the support for CGI scripts is limited (just enough to check if they work and produce correct output). It supports only HTTP 1.0 and only GET and POST requests.

O'Reilly Network: Using Squid on Intermittent Connections

(Aug 5, 2001, 18:00 UTC) (Posted by mhall)
Dialup connections can be frustrating. Squid, a very popular piece of 'net caching software does a lot to cut bandwidth demands, but it isn't built for dialups. This article shows how to change that.

Apache Today - Web Servers of the Fortune 500: A Dissection and Analysis. IIS is the king of large-company webserver farms, with iPlanet a distant second.

I was minding my own business, checking my snail mail at the office, when all of a sudden I was assaulted: "IIS Most Used Web Server Among Fortune 500 Sites" slapped me upside the head like a two-liter shot of Mountain Dew. For those of you who haven't read the cover story of Volume 5 Number 10 of ENT or seen the article on their website--go do that first, and then come back.

After recovering from what I thought must have been wrong, biased marketing research, I set out to prove ENT wrong.


I set about this study with a mission: To objectively collect data on the "brochure sites" of the Fortune 500. My secondary objective, of course, was to disprove the ENT study. My results were almost identical to theirs, however. If you look at the entire Fortune 500, from General Motors all the way to ReliaStar Financial, IIS reigns king. If you, however, look at subsets of the Fortune 500 and the types of companies represented, the picture is much different. Netscape Enterprise Server dominates until the Fortune 300 is looked at as an aggregate, where both Netscape and Microsoft share 41 percent of the market. This information was embedded in the ENT article as well.

Some Apache-related news

The C10K problem -- very interesting info on various servers' performance issues

Web Servers Feature Chart contains an interesting table on features of virtually all the Web Server software packages.

Netcraft's Web Server Survey shows the market share of different Web Server software on Internet connected computers.

Recommended Links



W3C Open Source Releases


Serving Up Web Server Basics

Perl, Sockets and TCP-IP Networking.

Network Programming in Perl - a well-written introduction to network programming with practical examples.

The documentation for the Socket and IO::Socket modules that come with your Perl distribution should be a valuable reference.

filehandle multiplexing with select() describes a method to manage multiple sockets on the same thread.

Recommended Papers

Server Security

Perl servers

HTTPDaemon - a simple http server class


New Architect Make the Simple Things Simple

An HTTP server (a Web server) is an application that "listens" for connection requests from client processes (usually on other machines). Upon receiving a request, the server creates a new connection for the client, and then goes back to listening for other requests. This new connection is created in its own process on the server because the act of waiting for a connection is usually a "blocking" action during which no other processing can take place. As I'll show you later, there are some obstacles and detours when building such an application with Win32 Perl.

A lot of the pain in writing an HTTP server can be eased by using the libwww-perl library for Perl. For those of you who have used it before, Listing 1 may look a little odd. I'm actually using the new LWPng module, which supports many HTTP 1.1 features, including persistent connections (see the sidebar titled "Using LWPng").

A persistent connection maintains an open communication channel between the client and server until one side or the other forces the connection to be closed. Using such a connection, it's possible to create a Win32 server process that can fork and maintain communications with the client. Then we can establish several clients talking to the same server, each one in a conversational loop.

The scenario plays out like this: An HTTP daemon (we'll use the HTTP::Daemon module) is set up to listen for connection requests on a specified port. Clients (such as LWP::UA) contacting the server know they can expect to find an HTTP daemon listening at this socket, so they send a valid HTTP request (LWP::Request). The server binds the local and remote sockets to form a connection (HTTP::Daemon::ClientConn). Once the connection is established, bidirectional communication can take place. At this point, the server can close the connection, or start a new process on the server to handle communication with the client.
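The same scenario can be played out with Python's standard library standing in for the Perl modules (`http.server` in the role of HTTP::Daemon, `urllib` in the role of LWP); this is an illustration of the flow, not the article's code:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    """Plays the per-request handler role of HTTP::Daemon::ClientConn."""
    def do_GET(self):
        body = b"hello from the daemon\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # keep the demo quiet
        pass

# The daemon listens on a port (0 = OS-assigned); the client then sends
# an ordinary HTTP request, mirroring the HTTP::Daemon / LWP exchange.
daemon = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=daemon.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % daemon.server_port
body = urllib.request.urlopen(url).read()
daemon.shutdown()
print(body.decode(), end="")
```

Once the request/response handshake works, the remaining question is how to hand the established connection to another process, which is where the article goes next.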

This is the moment of truth: the server needs to do three things to successfully hand off the client to another process:

  1. Create the new process
  2. Transfer the open connection to the new process
  3. Let the new process access the parent process's "environment"

The fork command in UNIX is used to start a new process. It does this by creating a child process that is a clone of the calling, or parent, process. The new process not only shares the open connections, but the same data space, call stack, and memory heap as well. Both the parent and child processes continue executing the code that follows the fork command. If we want the child process to take on a new identity, we can call exec, which replaces the running process with a new program, usually pulled from a disk file.
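The fork handoff the paragraph describes looks like this in outline (Python on a UNIX system; `os.fork` is unavailable on native Win32, which is precisely the article's problem). The child inherits the open connection; the parent closes its copy and goes back to listening:

```python
import os
import socket

def serve_one_forked(srv):
    """Accept one connection and hand it to a forked child process."""
    conn, _addr = srv.accept()
    pid = os.fork()
    if pid == 0:                        # child: shares the open connection
        conn.sendall(b"handled by child\n")
        conn.close()
        os._exit(0)                     # child never returns to the accept loop
    conn.close()                        # parent: drop its copy of the socket
    os.waitpid(pid, 0)                  # reap (a real server would use SIGCHLD)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())   # queued until accept()
serve_one_forked(srv)
print(cli.recv(1024))                   # expect b'handled by child\n'
cli.close(); srv.close()
```

The file descriptor survives the fork because the child is a clone of the parent; this is exactly the shared-environment property that the Win32 process model, as the next paragraph explains, does not provide.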

Win32 programmers have a rougher go at it, because the Win32 process model differs greatly from UNIX. First, Win32 has no concept of "parent" or "child" processes. One process can create another, but they are essentially peers. The Win32 architecture doesn't allow the creation of a new process that shares the environment of another process. For example, each new process in Win32 maintains its own instances of referenced DLLs. The Win32 API call CreateProcess essentially combines the fork and exec functions of UNIX into a single action called a spawn. (For more information about Win32 processes in a Perl context, see Win32 Perl Programming: The Standard Extensions by Dave Roth.)

However, the usefulness of fork hasn't been lost on the Win32 crowd, and emulations are available for those who want to compile Perl under Cygwin32 or wait for ActiveState's next major release of its Win32 Perl implementation. It should be noted that these solutions are emulations of a core UNIX function that is simply not supported at the operating-system level. The implementation-specific solution I came up with is just that: specific to a Perl HTTP::Daemon on Win32. With the disclaimers out of the way, let's look at the code.



O'Reilly Network: Using Squid on Intermittent Connections

(Aug 5, 2001) Dialup connections can be frustrating. Squid, a very popular piece of Net caching software, does a lot to cut bandwidth demands, but it isn't built for dialups. This article shows how to change that.


Web Caching Documentation
In an effort to make users (web publishers) more aware of the issues and benefits of designing for Web caches, instead of trying to circumvent them, a document is now available that explains the why, what, and how of caching in (hopefully) easy-to-understand language. It's particularly important to get this document to an audience of high-volume Web sites and hosting services.
Mark Nottingham @ 11/19/98 - 11:41 EST




The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.

Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.


The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. The site is perfectly usable without JavaScript.

Last modified: August 02, 2020