The smallest web server available is probably nweb, which is only 200 lines of C code (static pages only).
HTTP::Server::Simple is a simple web server written in Perl. It has no non-core module dependencies, which makes it suitable for building a standalone HTTP-based UI to your existing tools.
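As a rough illustration of how little code such a standalone tool needs (the package name and port below are made up for the example), a minimal HTTP::Server::Simple::CGI subclass looks like this:

#!/usr/bin/perl
# Minimal HTTP::Server::Simple sketch; package name and port are illustrative.
package MyWebServer;
use strict;
use warnings;
use base qw(HTTP::Server::Simple::CGI);

# Called once per request with a CGI.pm object; we must print the
# status line and headers ourselves.
sub handle_request {
    my ($self, $cgi) = @_;
    print "HTTP/1.0 200 OK\r\n";
    print $cgi->header('text/html'),
          $cgi->h1('Hello from HTTP::Server::Simple');
}

# Listen on port 8080 and serve requests until interrupted.
MyWebServer->new(8080)->run();

Run the script and point a browser at http://localhost:8080/ to see the page.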
Windows XP Professional contains Internet Information Services (IIS) version 5.1. IIS 5.1 includes Web and FTP server support, as well as support for Microsoft FrontPage transactions, Active Server Pages, and database connections. Available as an optional component, IIS 5.1 is installed automatically if you upgrade from versions of Windows with Personal Web Server (PWS) installed.
W3Techs announced that after many years of steady growth in market share, Nginx is now the most popular web server in the world, edging out Apache HTTP Server. Back in 2009, Nginx had a market share of 3.7%, Apache had over 73%, and Microsoft-IIS had around 20%, but the web server field today has changed significantly. According to Netcraft's statistics, Nginx now leads with just over one third of the market, at 33.8%. Apache is basically head-to-head with it at the moment, but declining; the gap between Apache and Nginx was still 6.6% one year ago.
Have you ever felt the need to change the configuration of your website running on an Apache web server without having root access to the server configuration files (httpd.conf)? This is what the .htaccess file is for. The .htaccess file provides a way to make configuration changes to your website on a per-directory basis. The file is created in a specific directory and contains one or more configuration directives that are applied to that directory and its subdirectories. In shared hosting, you will need to use a .htaccess file to make configuration changes to your server.
The .htaccess file is commonly used when you don't have access to the main server configuration file (httpd.conf) or the virtual host configuration, which is typically the case with shared hosting. You can achieve all of the above-mentioned use cases by editing the main server configuration file(s) (e.g., httpd.conf) or virtual host configuration files, so you should not use .htaccess when you have access to those files. Any configuration that you need to put in a .htaccess file can just as effectively be added in a <Directory> section in your main server or virtual host configuration files.
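For example, a custom error page can be configured in either place; the directory path here is hypothetical:

# In a .htaccess file placed in the directory itself:
ErrorDocument 404 /error/pagenotfound.html

# The equivalent <Directory> section in httpd.conf or a virtual host file:
<Directory "/var/www/test.com/public_html">
    ErrorDocument 404 /error/pagenotfound.html
</Directory>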
Reasons to avoid using .htaccess
There are two reasons to avoid the use of .htaccess files. Let's take a closer look at them.
First: Performance. When AllowOverride is set to allow the use of .htaccess files, httpd will look for .htaccess files in every directory, starting from the parent directory. This causes a performance impact whether you are actually using .htaccess or not, because the .htaccess files are read every time a document is requested from a directory.
To have a full view of the directives that it must apply, httpd will always look for .htaccess files starting from the parent directory down to the target subdirectory. If a file is requested from the directory /public_html/test_web/content, httpd must look for the following files:
/.htaccess
/public_html/.htaccess
/public_html/test_web/.htaccess
/public_html/test_web/content/.htaccess
So, four file-system accesses are performed for every file served from the subdirectory content, even if none of the .htaccess files is present.
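For this reason, when you do have access to the main configuration, the usual advice is to turn the lookups off globally and put your directives in <Directory> sections instead; a minimal sketch:

# In httpd.conf: disable .htaccess processing everywhere
<Directory "/">
    AllowOverride None
</Directory>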
Second: Security. Granting users permission to make changes in .htaccess files gives them full control over the server configuration of that particular website or virtual host. Any directive in the .htaccess file has the same effect as one placed in the httpd configuration file itself, and changes made to this file take effect instantly, without a need to restart the server. This can become risky for the security of the web server and the website.
Enable the .htaccess file
To enable the .htaccess file, you need to have sudo/root privileges on the server. Open the httpd configuration file of your website:
/etc/httpd/conf/test.conf
You should add the following configuration directives to the server's virtual host file to allow the .htaccess file in the DocumentRoot directory. If the following lines are not added, the .htaccess file will not work:
<Directory /var/www/test.com/public_html>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
In the case of shared hosting, this is already allowed by the hosting service provider. All you need to do is create a .htaccess file in the public_html directory to which the service provider has given you access and where you will upload your website files.
Redirect URLs
If your goal is simply to redirect one URL to another, the Redirect directive is the best option you can use. Whenever a request for the old URL comes from a client, the server forwards it to the new URL at the new location.
If you want to do a complete redirect to
a different domain, you can set the following:
# Redirect to a different domain
Redirect 301 "/service" "https://newdomain.com/service"
If you just want to redirect an old URL
to a new URL on the same host:
# Redirect to a URL on the same domain or host
Redirect 301 "/old_url.html" "/new_url.html"
Load a custom 404 error page
For a better user experience, load a custom error page when any of the links on your website point to the wrong location or the document has been deleted. To create a custom 404 page, simply create a web page that will serve as the 404 page and then add the following line to your .htaccess file:
ErrorDocument 404 /error/pagenotfound.html
You should change /error/pagenotfound.html to the location of your 404 page.
Force the use of HTTPS instead of HTTP for your website
If you want to force your website to use HTTPS, you need to use Apache's rewrite engine (the mod_rewrite module) in the .htaccess file. First of all, you need to turn on the rewrite engine in the .htaccess file and then specify the conditions you want to check. If those conditions are satisfied, the rules are applied to the request.
The following code snippet rewrites all requests to HTTPS:
# Turn on the rewrite engine
RewriteEngine On
# Force HTTPS and WWW
RewriteCond %{HTTP_HOST} !^www\.(.*)$ [OR,NC]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.test-website.com/$1 [R=301,L]
Let's go through each line.
RewriteEngine On turns on the rewrite engine. This is required; otherwise, the conditions and rules won't work. The first condition checks whether www was entered in the host name. The NC flag stands for "no case," meaning the condition matches even if the entered URL has a mix of uppercase and lowercase letters; the OR flag ties this condition to the next one, so satisfying either is enough.
Next, it checks whether the HTTPS protocol was already used for the request: %{HTTPS} off means that HTTPS was not used.
When the RewriteCond conditions are satisfied, we use RewriteRule to redirect the URL to HTTPS. Note that in this case, all URLs will be redirected to HTTPS whenever any request is made.
Lighttpd is a free and open source web server that is specifically designed for speed-critical applications. Unlike Apache and Nginx, it has a very small footprint (less than 1 MB) and is very economical with the server's resources, such as CPU utilization. Distributed under the BSD license, Lighttpd runs natively on Linux/Unix systems but can also be installed on Microsoft Windows. It's popular for its simplicity, easy setup, performance, and module support. Lighttpd's architecture is optimized to handle a large volume of parallel connections, which is crucial for high-performance web applications. The web server supports FastCGI, CGI, and SCGI for interfacing programs with the web server. It also supports web applications written in a myriad of programming languages, with special attention given to PHP, Python, Perl, and Ruby. Other features include SSL/TLS support, HTTP compression using the mod_compress module, virtual hosting, and support for various modules.
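As a rough sketch of how compact a lighttpd setup can be (the document root, backend port, and cache directory are hypothetical), the following lighttpd.conf serves PHP over FastCGI and enables mod_compress:

server.modules += ( "mod_fastcgi", "mod_compress" )
server.document-root = "/var/www/html"   # hypothetical document root
server.port = 80

# Hand .php requests to a FastCGI backend such as php-cgi or php-fpm
fastcgi.server = ( ".php" =>
  (( "host" => "127.0.0.1", "port" => 9000 ))
)

# HTTP compression via mod_compress
compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype = ( "text/html", "text/plain", "text/css" )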
Pronounced as Engine-X, Nginx is an open source, high-performance, robust web server that also doubles as a load balancer, reverse proxy, IMAP/POP3 proxy server, and API gateway. Initially developed by Igor Sysoev in 2004, Nginx has grown in popularity to edge out rivals and become one of the most stable and reliable web servers. Nginx draws its prominence from its low resource utilization, scalability, and high concurrency. In fact, when properly tweaked, Nginx can handle up to 500,000 requests per second with low CPU utilization. For this reason, it's the ideal web server for hosting high-traffic websites and beats Apache hands down. Popular sites running on Nginx include LinkedIn, Adobe, Xerox, Facebook, and Twitter, to mention a few. Nginx is lean on configuration, making it easy to make tweaks, and just like Apache, it supports multiple protocols, SSL/TLS, basic HTTP authentication, virtual hosting, load balancing, and URL rewriting, to mention a few. Currently, Nginx commands a market share of 31% of all the websites hosted.
"... I must admit I read the description for REQUEST_FILENAME in apache2.2 several times before noticing that it was just the answer too used to read too fast! Thanks to this old post that made me re-read slower ! ..."
This means: if the requested file is not a real file, and isn't a directory, and isn't a symlink, then redirect to index.php.
I was really surprised to discover that it doesn't work, though everybody seems to use this syntax! I checked my Apache version: Apache/2.2.9 (Debian), nothing special with this one, I guess.
To understand what Apache was doing with my rewrites, I activated the rewrite log:
So Apache verifies only '/toto.htm' and not the whole path for %{REQUEST_FILENAME}? I thought it was the whole path, though; let's verify in the documentation. From http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html, out of habit (because until now I used Apache 2.0 a lot more than Apache 2.2):
REQUEST_FILENAME: The full local filesystem path to the file or script matching the request.
The Apache 2.2 documentation, however, says:
REQUEST_FILENAME: The full local filesystem path to the file or script matching the request, if this has already been determined by the server at the time REQUEST_FILENAME is referenced. Otherwise, such as when used in virtual host context, the same value as REQUEST_URI.
Ow.
REQUEST_URI: The resource requested in the HTTP request line. (In the example above, this would be "/index.html".)
OK, I understand: I use virtual hosts (like everybody, eh?), so the real syntax for my needs is:
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
This works even if it doubles the "/" between the two variables (one / at the end of DOCUMENT_ROOT and another at the beginning of REQUEST_FILENAME).
Here's the rewrite log showing that it works:
[blah blah blah] (2) init rewrite engine with requested uri /toto.htm
[blah blah blah] (3) applying pattern '^/(.*)$' to uri '/toto.htm'
[blah blah blah] (4) RewriteCond: input='/path/to/documentroot//toto.htm' pattern='!-f' => not-matched
[blah blah blah] (1) pass through /toto.htm
Now I can disable this log if I want to save space on my disk.
I must admit I read the description for REQUEST_FILENAME in Apache 2.2 several times before noticing that it was just the answer; I was too used to reading too fast! Thanks to this old post that made me re-read more slowly!
A web server is a server-side application designed to process HTTP requests between client and server. HTTP is the basic and most widely used network protocol. We are all familiar with Apache HTTP Server. Apache HTTP Server played an important role in designing what the web is today. It alone has a market share of 38%. Microsoft IIS comes second on the list, with a market share of 34%. Nginx and Google's GWS come in at numbers 3 and 4, with market shares of 15% and 2% respectively.
The other day I came across a web server named Caddy. When I inquired into its features and deployed it for testing, I must say it is amazing. It is a web server that is portable and does not need any configuration file. I thought it was a very cool project and wanted to share it with you. So here we have given Caddy a try!
What is Caddy?
Caddy is an alternative web server that is easy to configure and use. Matt Holt, the project leader of Caddy, claims that Caddy is a general-purpose web server designed for humans, and that it is probably the only one of its kind.
Features of Caddy
Speedy HTTP requests using HTTP/2.
A capable web server with minimal configuration and hassle-free deployment.
TLS encryption ensures encrypted communication between applications and users over the Internet. You may use your own keys and certificates.
Easy to deploy/use: just one single file, with no dependency on any platform.
No installation required.
Portable executables.
Runs on multiple CPUs/cores.
Advanced WebSockets technology for interactive communication sessions between browser and server.
Serves Markdown documents on the fly.
Full support for the latest IPv6.
Creates logs in custom formats.
Serves FastCGI, reverse proxy, rewrites and redirects, clean URLs, Gzip compression, directory browsing, virtual hosts, and headers.
Available for all known platforms: Windows, Linux, BSD, Mac, Android.
What makes Caddy different?
Caddy aims at serving the web as it should be in the year 2017, not in the traditional style.
It is designed to serve not only HTTP requests but also humans.
Loaded with the latest features: HTTP/2, IPv6, Markdown, WebSockets, FastCGI, templates, and other out-of-the-box features.
Runs as an executable, without the need to install it.
Detailed documentation with a minimum of technical jargon.
Developed keeping in mind the needs and ease of designers, developers, and bloggers.
Supports virtual hosts: define as many sites as you want.
Suited for you no matter whether your site is static or dynamic.
If you are human, it is for you.
You focus on what to achieve, not how to achieve it.
Support available for the largest number of platforms: Windows, Linux, Mac, Android, BSD.
Usually, you have one Caddyfile per site (see the example below).
Set up in less than a minute, even if you are not that computer-friendly.
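To illustrate the "one Caddyfile per site" idea, here is a rough sketch in the Caddyfile syntax of that era (the domain, paths, and choice of directives are hypothetical):

test-website.com {
    root /var/www/test-website
    gzip
    log /var/log/caddy/access.log
    markdown /docs
}

Each labeled block names a site and simply lists the features to enable for it.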
Nginx, or "engine-x," is a high-performance web server with low memory usage, created by Igor Sysoev in 2002. Nginx is not just a web server; it can also be used as a reverse proxy for many protocols like HTTP, HTTPS, POP3, SMTP, and IMAP, and as a load balancer and HTTP cache as well.
Apache is the most popular web server software, maintained by the open source community under the Apache Software Foundation. There are many add-on modules available for Apache, like WebDAV support or web application firewalls such as mod_security, and it supports many web programming languages like Perl, Python, and PHP through native modules or via the CGI, FCGI, and FPM interfaces.
In this tutorial, I will install and configure Nginx as a caching reverse proxy for an Apache web server on Ubuntu 15.10. Nginx is used as the front end and Apache as the back end: Nginx will run on port 80 to respond to requests from a user/browser, and the request will then be forwarded to the Apache server that is running on port 8080.
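A minimal sketch of that front-end configuration (server name, paths, and cache zone name are illustrative) might look like this:

# Defined at the http level, e.g. in a file included from nginx.conf
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=apache_cache:10m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;    # the Apache back end
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache apache_cache;            # serve cached copies when possible
        proxy_cache_valid 200 10m;
    }
}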
See also micro_httpd, which implements all the basic features of an HTTP server in only 150 lines of code.
Have you ever wanted to run a tiny, safe Web server without worrying about using a full-blown Web server that could cause security issues? Do you wonder how to write a program that accepts incoming messages with a network socket? Have you ever just wanted your own Web server to experiment and learn with?
Well, look no further -- nweb is what you
need. This is a simple Web server that
has only 200
lines of C source code. It
runs as a regular user and can't run any
server-side scripts or programs, so it can't
open up any special privileges or security
holes.
This article covers:
What the nweb server program offers
Summary of the C functions featured in the program
Pseudo code to aid understanding of
the flow of the code
Network socket system calls used and
other system calls
How the client side operates
C source code
nweb only transmits the following types of files to the browser:
Static Web pages with extensions .html or .htm
Graphical images such as .gif, .png, .jpg, or .jpeg
Compressed binary files and archives such as .zip, .gz, and .tar
If your favorite static file type is not in this list, you can simply add it in the source code and recompile to allow it.
Handling of static files, index files, and autoindexing;
open file descriptor cache;
Accelerated reverse proxying with caching; simple load
balancing and fault tolerance;
Accelerated support with caching of remote FastCGI servers;
simple load balancing and fault tolerance;
Modular architecture. Filters include gzipping, byte ranges, chunked responses, XSLT, SSI, and image resizing.
Multiple SSI inclusions within a single page can be processed in
parallel if they are handled by FastCGI or proxied servers.
SSL and TLS SNI support.
Mail proxy server features:
User redirection to IMAP/POP3 backend using an external HTTP
authentication server;
User authentication using an external HTTP authentication
server and connection redirection to internal SMTP backend;
Authentication methods:
POP3: USER/PASS, APOP, AUTH LOGIN/PLAIN/CRAM-MD5;
IMAP: LOGIN, AUTH LOGIN/PLAIN/CRAM-MD5;
SMTP: AUTH LOGIN/PLAIN/CRAM-MD5;
SSL support;
STARTTLS and STLS support.
Tested OS and platforms:
FreeBSD 3 - 7 / i386; FreeBSD 5 - 7 / amd64;
Linux 2.2 - 2.6 / i386; Linux 2.6 / amd64;
Solaris 9 / i386, sun4u; Solaris 10 / i386, amd64, sun4v;
MacOS X / ppc, i386;
Windows XP, Windows Server 2003.
Architecture and scalability:
one master process and several worker processes; the workers run as an unprivileged user;
various kqueue features support, including EV_CLEAR, EV_DISABLE (to disable an event temporarily), NOTE_LOWAT, EV_EOF, number of available data, and error codes;
sendfile (FreeBSD 3.1+, Linux 2.2+, Mac OS X 10.5),
sendfile64 (Linux 2.4.21+), and sendfilev (Solaris 8 7/01+)
support;
file AIO (FreeBSD 4.3+, Linux 2.6.22+);
accept-filter (FreeBSD 4.1+) and TCP_DEFER_ACCEPT (Linux
2.4+) support;
10,000 inactive HTTP keep-alive connections take about 2.5 MB of memory;
data copy operations are kept to a minimum.
Other HTTP features:
name- and IP-based virtual servers;
keep-alive and pipelined connections support;
flexible configuration;
reconfiguration and online upgrade without interruption of
the client processing;
access log formats, buffered log writing, and quick log rotation;
4xx-5xx error codes redirection;
rewrite module;
access control based on client IP address and HTTP Basic
authentication;
PUT, DELETE, MKCOL, COPY and MOVE methods;
FLV streaming;
speed limitation;
limitation of simultaneous connections or requests from one
address.
About: httping is a "ping"-like tool for HTTP requests. Give it a
URL and it will show how long it takes to connect, send a request, and retrieve
the reply (only the headers). It can be used for monitoring or statistical purposes
(measuring latency).
Changes: Binding to an adapter did not work and "SIGPIPE" was not
handled correctly. Both of these problems were fixed.
About: nginx is an HTTP server and mail proxy server. It has been
running for more than two years on many heavily loaded Russian sites, including
Rambler (RamblerMedia.com). In March 2007, about 20% of all Russian virtual
hosts were served or proxied by nginx.
Changes: The STARTTLS in SMTP mode is now working. In HTTPS mode,
some requests fail with a "bad write retry" error. The "If-Range" request header
line is now supported. uname(2) is now used on Linux systems instead of procfs.
Until now, FastCGI was behind mod_php, Java, and mod_perl in terms of popularity among web server administrators and web developers. But times have changed, and changed for good.
In the early days of web development, when the CGI interface was the leader and web servers were quite slow, developers felt that they needed a faster server technology that could be used to run their web applications on high-traffic web sites. The solution to the problem seemed obvious: the developers had to take their CGI-based code and put it into the web server process.
With this solution, the operating system didn't have to start a new process every time a request was received (which is very expensive), and you could write your application with persistent functionality in mind and with the ability to cache data between several different HTTP requests.
Those were the days when some of the most popular web server APIs were born: Internet Information Server's ISAPI, Netscape Server's NSAPI, and Apache's module API. This trend created some of the best-known and most often used technologies in web development, like mod_php, mod_python, Java servlets (and later JSP), and ASP. But the concept behind these technologies is not flawless.
There are many problems with applications that run inside your average web server. For example, mod_perl's high memory usage per child process can eat up the available RAM, PHP's problems with threads can kill the whole web server, and many security problems arise from the fact that the most popular web server (Apache) can't do simple things like changing the OS user it executes a request as. For quite some time there have been workarounds, like putting a lightweight proxy server in front of Apache, installing third-party software for IIS, or using PHP's safe mode and open_basedir (oh God!) on Apache, but these are not elegant and pose other problems of their own. Also, the hardware progress of the last few years has made the server modules obsolete.
In the meantime, while the server modules were gaining glory and fame, a little-known technology with a different concept and implementation was born. It was called FastCGI, and the basic problem it was designed to solve was making CGI programs run faster. Later, it became clear that FastCGI also solves many of the other problems and design flaws that the server modules had.
How FastCGI works
FastCGI runs in the web server process, but it doesn't handle the request itself. Instead, it manages a pool of so-called FastCGI servers outside of the web server process, and when a request arrives, the FastCGI manager sends the HTTP data through a socket to one of the available FastCGI servers to handle the request. This strategy is quite simple and has the following advantages (a minimal responder is sketched after the list):
The FastCGI servers can be written in any language that has an API for communicating through sockets.
The FastCGI servers run outside of the web server, thus improving stability and allowing the web server to handle only requests for static data with very little overhead. You won't need a front-end proxy for this. Thread-unsafe applications can be run with threaded web servers.
The FastCGI manager can change the owner of the FastCGI servers, which allows the web administrator to have different virtual hosts served by different OS users. (Anyone remember Apache 2's perchild MPM?)
The FastCGI servers are persistent processes, which serve requests many
times faster than standard CGIs.
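To make the persistent-process idea concrete, here is a minimal FastCGI responder in Perl using the FCGI module (the counter is purely illustrative):

#!/usr/bin/perl
# A FastCGI responder is an ordinary CGI wrapped in an accept loop:
# the process stays alive between requests instead of being re-spawned.
use strict;
use warnings;
use FCGI;

my $request = FCGI::Request();
my $count = 0;   # persists across requests -- the whole point of FastCGI

# Accept() blocks until the web server hands us a request over the socket.
while ($request->Accept() >= 0) {
    $count++;
    print "Content-Type: text/plain\r\n\r\n";
    print "This process has served $count requests so far.\n";
}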
In the beginning, FastCGI was not so popular because its use of external processes and communication through sockets required more resources to be allocated on the host system. Today this is not the case: over the last few years hardware development has made huge leaps, and system memory is not so expensive anymore. Nowadays, many of the most popular web servers have full support for FastCGI, and the trend is to migrate current web applications to run under it.
I was minding my own business, checking my snail mail at the office, when all of a sudden I was assaulted: "IIS Most Used Web Server Among Fortune 500 Sites" slapped me upside the head like a two-liter shot of Mountain Dew. For those of you who haven't read the cover story of Volume 5, Number 10 of ENT or seen the article on their website: go do that first, and then come back.
After recovering from what I thought must have been wrong, biased marketing research, I set out to prove ENT wrong.
Results
I set about this study with a mission: to objectively collect data on the "brochure sites" of the Fortune 500. My secondary objective, of course, was to disprove the ENT study. My results were almost identical to theirs, however. If you look at the entire Fortune 500, from General Motors all the way to ReliaStar Financial, IIS reigns king. If, however, you look at subsets of the Fortune 500 and the types of companies represented, the picture is much different. Netscape Enterprise Server dominates until the Fortune 300 is looked at as an aggregate, where both Netscape and Microsoft share 41 percent of the market. This information was embedded in the ENT article as well.
Recently I needed a test-bed for scripts that generate HTML, and file access didn't suffice. As usual, the first attempt at awk-ing the request from a socket(1)-executed script soon grew big and ugly, with ever more special cases being added. As the need to support ISINDEX and FORMs came up, I rewrote the whole thing in Perl. Now the simple test aid has become a program that can be used in similar situations, when you need an HTTP server quickly without worrying about installing a CERN or NCSA server. This one does not have all the features needed; in particular, it knows only about text, HTML, and GIF files, and the support for CGI scripts is limited (just enough to check whether they work and produce correct output). It supports only HTTP 1.0 and only GET and POST requests.
An HTTP server (a Web server) is an application that "listens"
for connection requests from client processes (usually on other machines). Upon
receiving a request, the server creates a new connection for the client, and
then goes back to listening for other requests. This new connection is created
in its own process on the server because the act of waiting for a connection
is usually a "blocking" action during which no other processing can take place.
As I'll show you later, there are some obstacles and detours when building such
an application with Win32 Perl.
A lot of the pain in writing an HTTP server can be eased by
using the libwww-perl library for Perl. For those of you who have used it before,
Listing 1 may look a little odd. I'm actually using the new LWPng module,
which supports many HTTP 1.1 features, including persistent connections (see
the sidebar titled "
Using LWPng").
A persistent connection maintains an open communication channel
between the client and server until one side or the other forces the connection
to be closed. Using such a connection, it's possible to create a Win32 server
process that can fork and maintain communications with the client.
Then we can establish several clients talking to the same server, each one in
a conversational loop.
The scenario plays out like this: An HTTP daemon (we'll use
the HTTP::Daemon module) is set up to listen for connection requests on a specified
port. Clients (such as LWP::UA) contacting the server know they can expect to
find an HTTP daemon listening at this socket, so they send a valid HTTP request
(LWP::Request). The server binds the local and remote sockets to form a connection
(HTTP::Daemon::ClientConn). Once the connection is established, bidirectional
communication can take place. At this point, the server can close the connection,
or start a new process on the server to handle communication with the client.
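Stripped of the Win32 complications, that scenario can be sketched with the core modules like this (port and document root are illustrative):

use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Status;

# Listen for connection requests on a specified port.
my $d = HTTP::Daemon->new(LocalPort => 8080) or die "cannot listen: $!";
print "Contact the server at: ", $d->url, "\n";

# Each accept() returns an HTTP::Daemon::ClientConn for one client.
while (my $c = $d->accept) {
    while (my $r = $c->get_request) {    # an HTTP::Request object
        if ($r->method eq 'GET') {
            # Map the request path onto a (hypothetical) document root.
            $c->send_file_response('/var/www/html' . $r->uri->path);
        } else {
            $c->send_error(RC_FORBIDDEN);
        }
    }
    $c->close;
    undef $c;
}

As written, this loop handles one client at a time; the hand-off to a separate process is exactly the part that needs fork (or a Win32 substitute).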
This is the moment of truth: the server needs to do three things to successfully hand off the client to another process:
Create the new process
Transfer the open connection to the new process
Let the new process access the parent process "environment"
The fork command in UNIX is used to start a new
process. It does this by creating a child process that is a clone of the calling,
or parent, process. The new process not only shares the open connections, but
the same data space, call stack, and memory heap as well. Both the parent and
child processes continue executing the code that follows the fork
command. If we want the child process to take on a new identity, we can call
exec, which replaces the running process with a new program, usually
pulled from a disk file.
Win32 programmers have a rougher go at it, because the Win32
process model differs greatly from UNIX. First, Win32 has no concept of "parent"
or "child" processes. One process can create another, but they are essentially
peers. The Win32 architecture doesn't allow the creation of a new process that
shares the environment of another process. For example, each new process in
Win32 maintains its own instances of referenced DLLs. The Win32 API call
CreateProcess essentially combines the fork and
exec functions of UNIX into a single action called a spawn.
(For more information about Win32 processes in a Perl context, see Win32 Perl
Programming: The Standard Extensions by Dave Roth.)
However, the usefulness of fork hasn't been lost on the Win32 crowd, and there are emulations available for those who want to compile Perl using Cygwin32 or wait for ActiveState's next major release of its Win32 Perl implementation. It should be noted that these solutions are emulations of a core UNIX function that is simply not supported at the operating-system level. The implementation-specific solution I came up with is just that: specific to a Perl HTTP::Daemon on Win32. With the disclaimers out of the way, let's look at the code.
(Aug 5, 2001, 18:00 UTC; posted by mhall) Dialup connections can be frustrating. Squid, a very popular piece of 'net caching software, does a lot to cut bandwidth demands, but it isn't built for dialups. This article shows how to change that.
In an effort to make users (web publishers) more aware of the issues and benefits of designing for Web caches, instead of trying to circumvent them, there is now a document that explains the why, what, and how of caching in (hopefully) easy-to-understand language. It's particularly important to get this document to the audience of high-volume Web sites and hosting services.