
Introduction to Perl 5.10 for Unix System Administrators

(Perl 5.10 without excessive complexity)

by Dr Nikolai Bezroukov



5.7. HTML Matching Examples


A good example of the usefulness of lazy quantifiers is solving the problem of removing or modifying a specific tag in HTML.

So .*? matches zero or more times, but rather than skipping as much text as possible before matching what follows it in the regex, as greedy matching does, it skips (consumes) as little as possible and stops at the first match. The following example illustrates the difference well:

# greedy pattern
s/<span.*>/ /g;  # incorrect way to remove a tag with all its attributes:
		 # if there are multiple tags on the same line, all of them will be "slurped" inside .*
# non-greedy pattern
s/<span\s+.*?>/ /g; # more correct regex, although it still has problems.
                    # You need \s after span to ensure that the string continues.

If there is more than one HTML tag on the line, the greedy regex will stop only at the last ">", "eating" the extra text in between.
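To make the difference concrete, here is a small runnable sketch (the sample line and the class attributes are invented for illustration):

```perl
my $line = 'a <span class="x">b</span> c <span class="y">d</span> e';

# Greedy: .* runs to the LAST '>' on the line,
# slurping both tags and everything between them
(my $greedy = $line) =~ s/<span.*>/ /g;

# Lazy: .*? stops at the FIRST '>', so each opening tag is matched separately
(my $lazy = $line) =~ s/<span.*?>/ /g;

print "$greedy\n";   # a   e
print "$lazy\n";     # a  b</span> c  d</span> e
```

The greedy version destroys the text between the two tags; the lazy version removes only the opening tags.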

To match a tag we need to use non-greedy matching, for example the regex <.+?>, which matches most HTML and XML tags. To match both opening and closing tags we can use </?\w.+?>.

Now let's try to remove some HTML tag, for example <tt>. The following regex looks promising for extracting the content between <tt> and </tt>:

m{<tt>(.*)</tt>}

but, as you already guessed, it is wrong, as greedy matching will "slurp" too much if there are multiple <tt> tags inside the string.

If we rely on this regex to extract information and there are several <tt> tags on the line, we will encounter a nasty mistake.

For example, applied to the string

"Advanced editors like <tt>Slickedit</tt> can edit <tt>Perl</tt> code effectively." 

this regex will extract into $1, instead of the strings Slickedit and Perl, the string:

"Slickedit</tt> can edit <tt>Perl"

As we can see, the opening <tt> was matched with the </tt> from a different phrase, producing broken HTML. This is a classic case where non-greedy quantifiers have an edge. If we change our regex to:

m{<tt>(.*?)</tt>}

we have better chances, although it is easy to create an example where this regex also fails.

One way to avoid such behavior is to preprocess the HTML text, ensuring that there is only one tag per element of an array, and then process the elements of the array one by one. This approach, while very promising, also has its own set of problems. In any case, splitting HTML text into elements and then processing them one by one makes a lot of sense.
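Side by side on the sample sentence, a minimal sketch of the greedy and lazy extractions:

```perl
my $s = 'Advanced editors like <tt>Slickedit</tt> can edit <tt>Perl</tt> code effectively.';

# Greedy: a single match stretching from the first <tt> to the last </tt>
my ($greedy) = $s =~ m{<tt>(.*)</tt>};

# Lazy with /g: each <tt>...</tt> pair is matched separately
my @lazy = $s =~ m{<tt>(.*?)</tt>}g;

print "$greedy\n";   # Slickedit</tt> can edit <tt>Perl
print "@lazy\n";     # Slickedit Perl
```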

This approach doesn't remove tags from all possible HTML correctly, because a single regular expression is not an acceptable replacement for a real lexical parser.

Imagine if we were trying to pull out everything between bold-italic pairs:

<b><i>this</i> and <i>that</i> are important</b> Oh, <b><i>me too!</i></b>

A pattern to find only text between bold-italic HTML pairs, that is, text that doesn't include any nested or closing tags, might appear to be this one:

m{ <b><i>(.*?)</i></b> }sx

You might be surprised to learn that the pattern doesn't do that. Many people incorrectly understand this as matching a "<b><i>" sequence, then something that's not "<b>", and then "</b>", leaving the intervening text in $1. While often it works out that way due to the input data, that's not really what it says.

This regex provides neither for nesting (which, while it does not make much sense and is arguably an error, does occur in typical HTML, where these tags can be nested at least two levels deep) nor for separate closing of the individual tags. It just matches the shortest leftmost substring that satisfies the entire pattern, and in this case that is essentially the entire string. If you expect it to extract only the stuff between "<b>" and its corresponding "</b>" at the same nesting level, ignoring any nested bold tags, that expectation is incorrect.

To get the correct result we need to use the negative lookahead ("negative matching") capability of Perl. Applying it to the regex above we get something like:

m{ <b><i>( (?: (?!</i>) . )* )</i></b> }sx

or better, forbidding any intervening bold or italic tag, opening or closing:

m{ <b><i>( (?: (?!</?[ib]>) . )* )</i></b> }sx

Jeffrey Friedl points out that this quick-and-dirty method isn't particularly efficient. He suggests crafting a more elaborate pattern when speed really matters, such as:

    m{ <b><i>
       (
         [^<]*              # stuff not possibly bad, and not possibly the end
         (?:
           # at this point, we can have '<' if not part of something bad
           (?!  </?[ib]>  ) # what we can't have
           <                # okay, so match the '<'
           [^<]*            # and continue with more safe stuff
         ) *
       )
       </i></b>
     }sx
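Wrapping Friedl's fragment in a match loop (the delimiters, the capture group, and the /gsx flags are reconstructed here) and running it over the sample line shows that only the pair containing no other bold or italic tags qualifies:

```perl
my $html = '<b><i>this</i> and <i>that</i> are important</b> Oh, <b><i>me too!</i></b>';

my @pairs;
while ( $html =~ m{ <b><i>
                    ( [^<]*              # stuff not possibly bad, and not possibly the end
                      (?:
                        (?! </?[ib]> )   # what we can't have
                        <                # okay, so match the '<'
                        [^<]*            # and continue with more safe stuff
                      )*
                    )
                    </i></b>
                  }gsx ) {
    push @pairs, $1;
}

print "@pairs\n";   # me too!
```

The first <b><i> pair is skipped because "this" is followed by </i> without an immediate </b>, so only the clean pair at the end matches.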

Problem with nested tags

In general, we cannot match all of these with one regular expression, since tags can be nested, like:

<ul> Case 1  
          <li> Subcase 1 
          <li> Subcase 2 

That means that we need a while loop to process such a tag, with a condition that breaks out of the loop when our transformation is accomplished.

This can also be done indirectly, by using recursion.
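A minimal sketch of the while-loop idea, applied here to stripping nested <b> tags (the sample string is invented): rewrite an innermost pair, which contains no other tags, and repeat until the substitution stops changing anything:

```perl
my $html = '<b>outer <b>inner</b> text</b>';

# [^<>]* guarantees we only rewrite a pair with nothing nested inside it;
# the loop terminates when the substitution no longer matches
1 while $html =~ s{<b>([^<>]*)</b>}{$1}g;

print "$html\n";   # outer inner text
```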

For many tags this problem does not exist, as in well-formed HTML they are never nested. For example, such HTML tags as <b> and <i> are never nested in well-formed HTML. The problem is that you never know whether the HTML is well-formed unless you pipe it through some sort of validator.

Nesting does not matter if you want to remove or transform all instances of particular tags or tag combinations.

Nesting also does not matter if you want to remove all tags; in this case simple matching will work. For example, to remove all <tt> tags we can try:

s{<tt.*?>(.*?)</tt>}{$1}sg;  # replace each <tt>...</tt> pair with its content
Here are several examples from one of my Perl scripts:
$news_item=~s/\&nbsp;/ /sog;
$news_item=~s/<!--\[if.*?>/ /osg; # this is important as leftovers hamper visibility -- Dec 8, 2014 
$news_item=~s/<link.*?>/ /osg;
$news_item=~s/<!\[endif]-->/ /osg;
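Applied to an invented fragment (the input string here is an assumption for illustration), the cleanup chain above works like this:

```perl
my $news_item = 'A&nbsp;headline <!--[if lt IE 9]> junk <![endif]--> <link rel="stylesheet" href="x.css"> tail';

$news_item =~ s/\&nbsp;/ /sg;          # decode non-breaking spaces
$news_item =~ s/<!--\[if.*?>/ /sg;     # drop conditional-comment openers
$news_item =~ s/<link.*?>/ /sg;        # drop <link> tags
$news_item =~ s/<!\[endif]-->/ /sg;    # drop conditional-comment closers

print "$news_item\n";
```

Note how the lazy .*? keeps each substitution confined to a single tag even with several tags on the line.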

Decoding a tag with http:// and ftp:// URLs

In the <a> tag, the href attribute contains the URL of the object you are linking to. For example (the URLs here are placeholders):

<a href="http://www.example.com/">
<a href="ftp://ftp.example.com/pub/">

For simplicity, let's suppose that tags aren't split between lines. In order to extract the URLs you need to detect the string <a href=" and then consume everything from it to the closing double quote (").

This translates into something that looks like this:

m{<a href="((?:ftp|http)://[^"]+)"}

This did take a little trial and error. We use (?: ... ) parentheses so we don't get any backreferences, and the (?:ftp|http) alternation is self-explanatory.

Note that this pattern is not perfect. If there are backslashes or spaces inside the href attribute, it won't work. Or, if you have an HTTP daemon running on a different port (http://site:8080, for example), it won't work. However, it is, as we say, "close enough": if you want to improve it to handle such cases, go right ahead.
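A runnable sketch under those simplifications (the URLs and the one-attribute-per-tag layout are invented for illustration):

```perl
my $html = '<p><a href="http://www.example.com/index.html">home</a> '
         . '<a href="ftp://ftp.example.com/pub/">mirror</a></p>';

my @urls;
while ( $html =~ m{<a href="((?:ftp|http)://[^"]+)"}g ) {
    push @urls, $1;   # capture everything between the quotes
}

print "@urls\n";   # http://www.example.com/index.html ftp://ftp.example.com/pub/
```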

Let's now make a loop which extracts all http, and ftp tags from a given file, for example a bookmarks file:

use FileHandle;

my @tags;
undef $/;                      # read the whole file as a single string
my $fd = new FileHandle("$ARGV[0]");
my $line = <$fd>;              # now the whole file is in $line
while ( $line =~ m{((?:ftp|http)://\w+(?:\.\w+)*)}g ) {
   my $tag = $1;
   push(@tags, $tag);
}

In a bookmarks file these URLs sit inside double-quoted strings; if your pattern captures the trailing quote, chop off the last character. We shall approach this problem more directly next, when we consider matching a double-quoted string.






Last modified: March 12, 2019