
Slightly Skeptical View on Perl: Perl Warts and Quirks

Dr Nikolai Bezroukov


It's not a secret that Perl is structurally flexible, and the conventional wisdom is that Perl gives you "enough rope to hang yourself". Funny. But that's not the gripe — go ahead, hang yourself if you want. That's freedom. The problem is that Perl also gives you enough rope to hang others.

That, in a nutshell, is the effect of Perl's flexibility on collaborative development.

Abstract

Perl + C, or even shell + Perl + C, can be a very efficient programming paradigm, especially in a virtual machine environment where the application typically "owns" the machine. You can use the shell for file manipulation and pipelines, Perl for high-level data structure manipulation, and C when Perl is insufficient or too slow (the latter question is non-trivial for complex programs, and correct detection of bottlenecks needs careful measurements).

The shell can actually provide the "programming in the large" framework of a complex system, serving as the glue for its components.

From the point of view of typical application-level programming, Perl is a very underappreciated and very little understood language. Almost nobody is interested in the details of the interpreter, where the debugger is integrated with the language really brilliantly. Also, namespaces in Perl and its OO constructs are a very unorthodox and very interesting design.

References are a Perl innovation: the classic CS view is that a scripting language should not contain references. The role of the list construct as the implicit subroutine argument list is also implemented non-trivially (elements are passed "by reference", not "by value", as the sketch below shows) and against CS orthodoxy, which favors "by value" passing of arguments by default. There are many other unique things about the design of Perl. All in all, for a real professional like me, Perl is one of the few relatively "new" languages that is not boring :-).
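To illustrate the last point, a minimal sketch (the subroutine name is invented for illustration): the elements of the implicit argument list @_ are aliases to the caller's variables, so assigning to them changes the caller's data.

#!/usr/bin/perl
use strict;
use warnings;

# Elements of @_ are aliases to the caller's arguments ("by reference"),
# so assigning to $_[0] modifies the variable the caller passed in.
sub increment { $_[0]++ }     # no explicit parameter list: @_ is implicit

my $counter = 5;
increment($counter);
print "$counter\n";           # prints 6, not 5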

The syntax of Perl is pretty regular and compares favorably with the disaster that is the syntax of the Bourne shell and its derivatives, as well as with the syntax of C and C-derived languages. Larry Wall managed to avoid almost all the classic pitfalls in creating the syntax of the language, pitfalls into which the creators of PHP readily fell ("dangling else" is one example). The Perl license is a real brilliance; an incredible feat, from my point of view, taking into account when it was done.

It's very sad that there is no really good intro to Perl written from the point of view of a CS professional, despite the 100 or more books published.

All in all, Perl is a great language. But even the sun has dark spots...

 

Introduction

A small, crocky feature that sticks out of an otherwise clean design. Something conspicuous for localized ugliness, especially a special-case exception to a general rule. ...

Jargon File's definition of the term "wart"

 

This is not an anti-Perl page; this is a pro-Perl page, as I use Perl as my main scripting language. This is just a slightly skeptical approach to the language ;-). All languages have quirks, and all inflict a lot of pain before one can adapt to them. Larry Wall released Perl 1.0 in 1987. In 15 years a lot of things have changed. Also, a determined programmer can always pull it off: to mutate an old saying, real programmers can write Perl in any language.

Warts are inevitable in any process involving complex human decision-making; does somebody really want to argue that Perl, of all things, doesn't have any? Once learned, those warts and quirks become incorporated into your understanding of the language. But there is no royal road to mastering the language. The more different one's background is, the more one needs to suffer. And here this page might help a little bit. At least I hope so...

Generally any user of a new programming language needs to suffer a lot ;-). In this respect Perl is not that different from other scripting languages, but it usually inflicts a lot of pain on beginners. It's important to understand that Perl can overwhelm a newcomer, and most people, including me, are better off selecting some subset of the language, sticking to it in most cases, and going into the "wildness" only when necessary. Actually, old Perl 4 may represent such a subset...

Much depends on your background, and if you are a Unix administrator with some knowledge of ksh or bash, you are in a better position than if you move to Perl from, say, VBScript. BTW some suffering can be avoided by using pb -- a Perl beautifier -- and a good specialized editor (like DzSoft Perl Editor) or IDE (Komodo). For US developers there is no excuse for not using both ;-)

There is no free lunch. Being a very powerful and expressive language, at the very beginning Perl might look to people without any background in Unix and shell like a horrible mess, similar to Lisp. But that's only temporary. When mastering a new language you first face a level of "cognitive overload" until the quirks of the language become easily handled by your unconscious mind. At that point, all of a sudden, the quirky interaction becomes a "standard" way of performing the task.

Unlike Lisp, Perl is well rooted in the tradition of Algol-style languages. It's different, but not that different. For example, regular expression syntax seems to be a weird base for serious programs, fraught with pitfalls, a big semantic mess as a result of outgrowing its primary purpose. On the other hand, in skilled hands (and using extensions to increase readability, as in the sketch below), with a good debugger and testing tools, it's a powerful and reasonably safe instrument...
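As a small illustration of the "extensions to increase readability" mentioned above, the /x modifier lets you lay a regex out with whitespace and comments (the date pattern below is just an invented example):

#!/usr/bin/perl
use strict;
use warnings;

# /x allows whitespace and comments inside a regular expression.
my $date_re = qr/
    (\d{4})    # year
    -
    (\d{2})    # month
    -
    (\d{2})    # day
/x;
print "year=$1 month=$2 day=$3\n" if "2004-01-20" =~ $date_re;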

One early sign of adaptation to Perl idiosyncrasies is when you start to put $ on all scalar variables automatically. The next step is to overcome the notational difficulty of using two different operators ("==" and eq) for comparison -- the source of many subtle errors for novices, much like the accidental use of an assignment instead of a comparison in a C if statement (as in if (a=1) ...). Before that happens, please be wary of using complex constructs -- diagnostics in Perl are really bad.

Be skeptical and do not take the words of Perl advocates like Randal L. Schwartz or Tom Christiansen for granted :-) Fancy idioms are very bad for novices. Please remember the KISS principle and try to write simple Perl scripts without complex regular expressions and/or fancy idioms. Some Perl gurus' pathological preoccupation with idioms is definitely not healthy and is part of the problem, not part of the solution...

Generally the problems mentioned above are more fundamental than the trivial "abstraction is the enemy of convenience". It is more that a badly chosen notational abstraction at one level can inhibit innovative notational abstraction at other levels.

Casting of operands in comparisons: induced errors

I suspect that the forthcoming Perl 6 (if it is ever implemented) might help to solve some of the deficiencies of previous versions. But in Perl 5 there is a nice possibility of making slick, subtle errors if you use Perl and another language (for example C) simultaneously.

Perl uses two different sets (==, >, < and eq, gt, lt) of comparison operators. The first set casts both operands into numeric representation and the second set casts them into strings. That is a departure from the C tradition (and from the general idea of casting as a separate operation, even in typeless languages). In Perl 5 casting and operations are mixed together, and that creates problems.

For example, if today I programmed in C and then switched to Perl, I would automatically write:

$a = 'abba';
$b = 'baba';
if ($a == $b) {                     # == casts both strings to numbers (both become 0)
    print "Strings are equal\n";    # so this always prints
}

with quite interesting results and the real possibility of spending time again and again catching this type of error (an induced error).

That's probably the most impressive progress in this area of language design blunders since the famous if (i=1) ... vs. if (i==1) ... design solution in C :-).

But the C designers got into this hole by trying to minimize syntax as much as possible in order to fight the low reliability of teletypes. There is no such justification for Perl. So this "feature" can legitimately be classified as a wart. It might be better to have an alternative set of Fortran-style comparison operations with a prefix that signifies the casting, like i.eq and s.eq.

BTW the shell resolved the same problem in an even clumsier way, requiring different sets of conditional expression delimiters, as in

if [[ conditional_expression ]]    # string and file tests

and

if (( conditional_expression ))    # arithmetic tests

So worse solutions than Perl's definitely exist :-).

Redefinition of some C keywords: walltrap

Generally, conditional statements represent a strength, not a weakness, of Perl. Most decisions are sound, especially the explicit prohibition of a single un-braced statement in the then and else clauses of the if statement. That permits easy reverse-factoring: you can insert statements into then and else clauses without introducing errors (or, more commonly, simply extend a then or else clause that contained just one statement with additional statements):

Before:
    X;
    if (...) {
        Y;
    } else {
        Z;
    }
After:
    if (...) {
        X; Y;
    } else {
        X; Z;
    }

All in all, the set of conditional operators in Perl is superior to most languages that I know. Some solutions are pretty innovative, for example treating a regular { } block as a loop that executes exactly one time and thus making loop control statements applicable to a block. The continue block in a loop is an interesting idea, but seldom used in practice. It might be that a discontinue block would be more useful -- a block that executes only if the loop exits through the failure of the header test, not via other loop termination statements (break, goto, etc.). The only problem that is completely unresolved is the classic n+1/2 loop (see Knuth's paper). But as the keyword then is not used in the language, Perl could be extended to allow then-else clauses after loops, as Knuth suggested many years ago.
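A minimal sketch of the "block as a one-iteration loop" idea mentioned above (the variable names are made up for illustration):

#!/usr/bin/perl
use strict;
use warnings;

# A bare { } block counts as a loop that executes exactly once,
# so loop-control statements like last work inside it.
my $input  = '';      # sample value for illustration
my $status = 'ok';
CHECK: {
    if (!defined $input) { $status = 'missing'; last CHECK; }
    if ($input eq '')    { $status = 'empty';   last CHECK; }
}
print "$status\n";    # prints "empty"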

But one blunder was still made: several C keywords were redefined without really good reasons for breaking compatibility. For those unfortunates who use Perl along with, say, Python or C++, there are good chances of falling into this trap, which I would call a walltrap.
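One illustration of such a trap that I can offer: the keyword continue exists in Perl, but it does not mean what it means in C. In Perl it introduces a block that runs after every loop iteration; the C-style "skip to the next iteration" is spelled next, and break is spelled last:

#!/usr/bin/perl
use strict;
use warnings;

my $i = 0;
while ($i < 3) {
    print "body: $i\n";
    next if $i == 1;     # a C programmer would reach for "continue" here
    print "rest of body: $i\n";
} continue {
    $i++;                # the continue block runs after every iteration,
}                        # even when "next" skips the rest of the body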

Overcomplexity 1: suffix conditional operators

Perl 5 is a very complex scripting language. And one needs to understand clearly that Perl belongs to the family of complex non-orthogonal languages, the family that was started by PL/1 (and successfully continued by C++ ;-).

The classic example of overcomplicating the language in Perl is the postfix if statement. It doesn't add any capability to the language, and it confuses new users. At the same time, with some discipline, seasoned professionals can benefit from it, using it as a notation for an important special case -- exits from loops. In this role this obscure feature, which has no rational explanation other than Larry's drive to overcomplicate the language, might even be useful.

But please do not use it as a means of avoiding curly braces in an if statement with just one statement in the then part, as I often see in Perl scripts published in O'Reilly books. This is plain vanilla perversion (or, more precisely, an addition to overcomplexity ;-). Just calculate the number of symbols in two equivalent Perl statements:

if ($max < $i) { $max = $i } # standard notation

$max=$i if ( $max < $i ); # popular perversion

Believe me as a professional educator: the second example is both counterintuitive and redundant. It's far from being a more concise notation either: you win just one symbol, since a semicolon is not needed after the last statement of a block (before "}"), probably starting with Perl 5.6. IMHO this is a perfect example of a perverse syntax feature, a wart ;-).

Generally there is a contradiction between ease of learning and ease of use by professionals. In this particular case (and in the best Unix tradition) the compromise tends to favor high-level professionals, and only such professionals: usage in loops increases the understandability of the construct, as you can see from the somewhat artificial example below:

my $prev = '';
for (my $i = 0; ; $i++) {
    my $input = <>;
    last unless defined $input;                  # stop at end of input
    chomp $input;
    my $line = substr($input, 0, $i);
    last if $line eq 'EOD' && $prev eq 'EOD';    # this is a reasonable usage
    print "$i, prev=$prev current=$line\n";
    $prev = $line;
}

Overcomplexity 2: Separate sets of built-in functions for strings and lists

Another wart in Perl 5 is the two separate sets of built-in functions for strings and lists, because function names also presuppose a fixed casting of operands. This is connected with the idea of implicit casting in general. For example, substr is not applicable to lists and arrays; a separate, slightly different function, splice, should be used. In reality people are never able to master both sets and use a small subset, while if those things were uniform they could probably use the full set, or at least a larger, more useful subset. It might be better to define the substr/splice functions in namespaces, like s.sub and l.sub.
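A small sketch of the asymmetry: substr works on strings, splice on arrays, and although the operations are analogous, the names and calling conventions are completely different (and splice, unlike three-argument substr, removes the elements it extracts):

#!/usr/bin/perl
use strict;
use warnings;

my $string = "abcdef";
my @array  = ('a' .. 'f');

print substr($string, 1, 3), "\n";           # "bcd" -- the string is left intact
print join('', splice(@array, 1, 3)), "\n";  # "bcd" -- but the array loses those elements
print "@array\n";                            # "a e f"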

And that applies to other functions too. For example, pop and push might be useful for strings too, and chop might be useful for whole arrays, but sorry, each of those functions works only on its own kind of data.

Also, some string built-in functions like chop/chomp are copycats of badly thought out Unix utilities and deserve some generalization. For example, why can't I write chop(2) (remove the last two characters) or even chop(2..3)?
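There is no chop(2) today; here is a sketch of what you have to write instead to remove the trailing newline, the last character, or the last two characters:

#!/usr/bin/perl
use strict;
use warnings;

my $str = "example\r\n";
chomp $str;              # removes the trailing newline (record separator) if present
chop  $str;              # removes exactly one character, whatever it is (here "\r")
substr($str, -2) = '';   # the workaround for "chop the last two characters"
print "$str\n";          # prints "examp"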

The Web tide raised Perl to prominence, much like it raised HTML, JavaScript, PHP and Java. But after the language became popular, this very fact became an extremely important "feature" of the language and should be considered as such. Nothing succeeds like success. It's the same situation that exists with VB -- popularity is probably the most important feature of the language. Now, with the unclear status of the Perl 6 project, Perl's popularity is going down and many users consider Python as an alternative. I do not consider Python the answer: despite its warts, Perl is a somewhat higher-level language and will probably remain so with version 6 (if it is ever completed ;-). That is a big advantage...

Difficulties in creating moderately complex data structures

Perl provides reasonably convenient and powerful operations on arrays/lists and hashes. But it does not provide a clean way to work with lists of lists and hashes of hashes. Actually Perl uses very arcane syntax for lists of lists, and that's why a lot of Perl programmers try to avoid them, creating home-grown solutions like storing the second-level list as a string with some separator (for example ":") and splitting this string each time you need to convert it to a list. The adoption of pointers in a scripting language looks more like a sign of weakness than a sign of strength. Here is how lists of lists are covered in the Perl documentation:

A list of lists, or an array of an array if you would, is just a regular old array @LoL that you can get at with two subscripts, like $LoL[3][2]. Here's a declaration of the array:
 # assign to our array a list of list references
 @LoL = (
     [ "fred", "barney" ],
     [ "george", "jane", "elroy" ],
     [ "homer", "marge", "bart" ],
 );
 print $LoL[2][2];
Now you should be very careful that the outer bracket type is a round one, that is, a parenthesis. That's because you're assigning to an @list, so you need parentheses. If you wanted there not to be an @LoL, but rather just a reference to it, you could do something more like this:
 # assign a reference to list of list references
 $ref_to_LoL = [
     [ "fred", "barney", "pebbles", "bambam", "dino", ],
     [ "homer", "bart", "marge", "maggie", ],
     [ "george", "jane", "alroy", "judy", ],
 ];
 print $ref_to_LoL->[2][2];

Notice that the outer bracket type has changed, and so our access syntax has also changed. That's because unlike C, in perl you can't freely interchange arrays and references thereto. $ref_to_LoL is a reference to an array, whereas @LoL is an array proper. Likewise, $LoL[2] is not an array, but an array ref. So how come you can write these: 
 
 $LoL[2][2]
 $ref_to_LoL->[2][2]

instead of having to write these:

 $LoL[2]->[2]
 $ref_to_LoL->[2]->[2]

Well, that's because the rule is that on adjacent brackets only (whether square or curly), you are free to omit the pointer dereferencing arrow. But you cannot do so for the very first one if it's a scalar containing a reference, which means that $ref_to_LoL always needs it.

Problems with lists of lists and hashes of hashes in Perl create some pretty obscure code that is difficult to maintain, and in a couple of cases I was surprised that the equivalent C++ solution was cleaner in some ways.
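The same noise appears with hashes of hashes. A minimal sketch (the keys are invented for illustration), including the autovivification that silently creates the inner hashes:

#!/usr/bin/perl
use strict;
use warnings;

my %config;
$config{web}{port} = 80;      # the inner hash springs into existence (autovivification)
$config{web}{host} = 'www';
$config{db}{port}  = 5432;

for my $service (sort keys %config) {
    print "$service: port $config{$service}{port}\n";
}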

Weak pipe support

Piping support in Perl is weak (actually even worse than in the shell), but let's hope that it will be improved in Perl 6. Actually that's not a completely fair statement -- there is a special module, IPC-Run, by Barrie Slaymaker. For a user who is spun up on bash/ksh, it provides useful piping constructs, subprocesses, and either expect-like or event-loop-oriented I/O capabilities. But still, it's better to have those facilities in the language.
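A rough sketch of what IPC-Run gives you, following its documented run interface as I understand it (the commands are arbitrary examples):

#!/usr/bin/perl
use strict;
use warnings;
use IPC::Run qw(run);     # CPAN module, not part of core Perl

# Roughly the equivalent of:  ls | grep pm
my ($in, $out) = ('', '');
run [ 'ls' ], '|', [ 'grep', 'pm' ], \$in, \$out
    or die "pipeline failed: $?";
print $out;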

Perl as a language that broke with Unix tradition:
not a big problem, but still a problem

I reject the assumption that because some people have difficulties with a language, it is a bad language. But I do not like a Unix tool that breaks with the Unix "hyper tools" idea, the idea of a lot of small utilities cooperating to achieve tasks, with the shell serving as the glue. Perl was designed as a Swiss army knife, a universal language, and it is written in the best IBM (or Microsoft Office, if you wish ;-) tradition represented by PL/1 and MS Word.

That might not be that bad if we think about Perl not as yet another Unix utility, but as a replacement for the shell. The problem is that Perl failed to replace the shell (although a Perl shell does exist; a fake shell can easily be built in Perl). For that, piping support should have been much better and the interface with the existing Unix tools should have been better designed.

This is not the case, and the result is Windows-style programming in a Unix environment: Perl programmers tend to reinvent the wheel, writing replacements when the proper way is to construct a pipe from existing tools. At the same time, the Perl toolbox is now competitive with the old Unix toolbox, and most utilities are either not needed (sed, awk) or are available as modules and built-in functions (grep, find). It is pipe support that sucks...
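For what it is worth, a sketch of the two replacements just mentioned: the grep built-in filtering a list the way grep(1) filters lines, and the core File::Find module standing in for find(1):

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;     # core module

my @lines = ("alpha\n", "beta\n", "alphabet\n");
my @hits  = grep { /alpha/ } @lines;     # like grep(1) over an in-memory list
print @hits;

# Like find(1): walk the current directory and report *.pm files.
find(sub { print "$File::Find::name\n" if /\.pm$/ }, '.');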

So the question we need to ask is: "Are one hundred simple Unix tools, all with numerous and slightly different command line options, an easier language than Perl?" I do not think so. Therefore breaking with tradition sometimes might make sense. But the problem is that Perl adds to Unix complexity instead of helping to decrease it. It adds another parallel universe of doing things in Unix. Larry Wall might like it, but IMHO this is a bad development. In this case the absence of a strong "Microsoft-style" push for the replacement of the old tools proved to be detrimental to both Perl and Unix.

Weak embedding capabilities

The interface with C is not clean, and from this point of view Perl is a bad embedded language. I like the idea of using the same language both for scripting and as a macro language (like REXX and Tcl). But Perl historically failed in this area. It was overtaken by PHP for embedding into Web pages; it might be that the size and complexity of the language was detrimental for this task. Very few Unix tools adopted an interface with Perl. I can mention only vim, PostgreSQL (PL/Perl) and Exim.

Side effects

The variable used in a foreach loop in Perl is actually not a variable at all. It is an alias (a pointer, if you wish) to the current element of the list. Changing it changes the contents of the list.
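A minimal sketch of that side effect:

#!/usr/bin/perl
use strict;
use warnings;

my @values = (1, 2, 3);
foreach my $v (@values) {
    $v *= 10;            # $v aliases the current element, so the array itself changes
}
print "@values\n";       # prints "10 20 30"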

Implied assignments (usually to $_, but $1 is also a candidate ;-) sometimes lead to rather subtle errors, as the content of $_ may change if you add some code between the point where you set it and the point where you use it. It is not that common, and the danger of $_ may well be overblown, so I mention this just because this feature is often discussed in this context. Compare:

while (<>) {
    if (/EOF/) { .... }
}

and

while (<>) {
    &sub_that_changes_it;
    if (/EOF/) { .... }    # we are now checking a possibly different $_
}

The belief that $_ improves legibility is characteristic of several Perl gurus, including Tom Christiansen. This is a very questionable belief, and the benefits are open for discussion. Compare an example adapted from Perl Style: Use $_ in Short Code:

  Traditional style

while ($line = <STDIN>) {
    next if ($line =~ /^#/);
    $line =~ s/left/right/g;
    print "$ARGV:\n$line";
}

 "Obsessed with idioms" style

while ( <> ) {
    next if (/^#/);
    s/left/right/g;
    print "$ARGV:\n$_";
}

I doubt that the second example is more clear, and it might be more prone to maintenance problems.

Obscure Idioms

Tom Christiansen in his Perl Style: Program Perl wrote:

Fall not into the folly of avoiding certain Perl idioms for fear that someone who maintains your Perl code won't understand it because they don't know Perl. This is ridiculous! That person shouldn't be maintaining a language they don't understand. You don't write your English so that someone who only speaks French or German can understand you, or use Latin letters when writing Greek.

And here is a discussion of one idiom, after which you might draw your own conclusions about the level of hair splitting permissible in scripting languages ;-)

FMTYEWTK about split -- on Jan 20, 2004 at 19:49

split //, $string is the usual idiom for splitting a string into a list of its characters. Why it works as it does is actually quite complex.

First of all, match (m//) and substitution (s///) have a special case for an empty regex: they will apply the last successful regex instead, so:

if (condition) {
    $str =~ /ab/;
} else {
    $str =~ /ef/;
}
$str2 =~ //;
may apply the regex /ab/ or /ef/ (or some previous regex, if that match didn't succeed) to $str2. (Note that "previous" here means execution order, not linear order in the source). If no previous regex succeeded, an empty regex is actually used.

This is a fairly volatile feature, since any intervening code that uses a regex will change the results (e.g. a regex in a tie method implicitly invoked, or in the main code of a require'd file). It's also a barrier to integration of the defined-or patch (see defined or: // and //= and Re: Perl 5.8.3) to 5.8.x, since a // where perl may be expecting either an operator or a term could mean defined-or or could mean ($_ =~ //). Without the feature, the latter would be overwhelmingly less likely to occur in real code.

People more often use this "feature" by accident than on purpose, with code like m/$myregex/ where $myregex is empty (since the "is it an empty regex" test occurs after interpolation). One solution is to use m/(?#)$myregex/ if you anticipate that $myregex may be empty.

But all that is beside the point, because special treatment of // (documented with respect to s/// and m// in perlop) is not a feature of perl's regexes but a feature of the match and substitution operators, and doesn't apply to split at all.

So what does happen when you say split //, $str?

Well, in general terms, split returns X pieces of a string that result from applying a regex X-1 times and removing the parts that matched, so split /b/, "abc" produces the list ("a","c"). (Throughout, I will ignore the effects of placing capturing parentheses in the regex.)

Similarly, split //, "ac" matches the empty string between the letters and returns ("a","c").

The analytic of mind will note that there are also empty strings before and after the "ac". Spreading it out, the regex will match at each //: "// a // c //", making 3 divisions in the string, so you might expect split to return a list of the four pieces produced ("","a","c",""), but instead a little dwimmery comes into play here.

Dealing first with the empty string at the end, split has a third parameter for limiting the number of substrings to produce (which normally defaults to 0, and where <= 0 means unlimited), so split /b/, "abcba", 2 returns ("a","cba"). As a special case, if the limit is 0, trailing empty fields are not returned. However, if the limit is less than zero or large enough to include empty trailing fields, they will be returned: split /b/, "ab", 2 for example does return "a" and an empty trailing field, while split /b/, "ab" returns only an "a".

The same provision applies to the empty string following the zero-width match at the end of the string. split //, "a" returns only the "a", while split //, "a", 2 returns ("a","").

(I said "normally defaults to 0" because in one case, this doesn't apply: if the split is the only thing on the right of an assignment to a list of scalars, the limit will default to one more than the number of scalars. This is intended as an optimization, but can have odd consequences. For instance, my ($a,$b,$c) = split //, "a" will result in the split having a default limit of 4, obverting the usual suppression of the empty trailing field: split will return ("a",""), leaving $b blank and $c undefined.)

But there is also an empty string before the zero-width match at the beginning of the string. The above methodology doesn't apply to that. If you say split /a/, "ab" it will break "ab" into two strings: ("","b"), whether or not limit is specified (unless you limit it to one return, which basically will always ignore the pattern and return the whole original string).

Similarly, split //, "b" doesn't base returning or not returning the leading "" on limit. Instead, a different rule applies. That rule is that zero-width matches at the beginning of the string won't pare off the preceding empty string; instead, it is discarded. So while split /a/, "ab" does produce ("","b"), split //, "b" only produces ("b").

This rule applies not only to the empty regex //, but to any regex that produces a zero-width match, e.g. /^/m. (While on the topic of /^/, that is special-cased for split to mean the equivalent of /^/m, as it would otherwise be pretty useless.) So split /(?=b)/, "b" returns ("b"), not ("","b").

One last consideration, that also plays a part with s/// and m//: if you match a zero-width string, why doesn't the next attempt at a match also do so in the same place? For instance, $_ = "a"; print "at pos:",pos," matched <$&>" while /(?=a)/g should loop forever, since after the first match, the position is still at 0 and there is an "a" following. Applying this logic to split //, you can see that the // should match over and over without advancing in the string. To prevent this, any match that advances through the string is only allowed to zero-width match once at any given position. If a subsequent match would have come up with a zero width at the same position, the match is not allowed. This rule applies whether perl is in a match loop within a single operation (s///, split, or list-context m//g) or in a loop in perl code (e.g. the above 1 while m//g), or even two independent m//g matches.

For example: $_ = "3"; /(?=\w)/g && /\d??/g && print $&;  does print "3", even though the ?? requests a 0 digit match be preferred over 1 digit, because a 0-length match isn't allowed at that position.

(Update: s/FMTEYEWTK/FMTYEWTK/; googlefight shows the latter winning by 10:1)

Update: this isn't really a tutorial, or at least it's an inside out one. (That is, it's taking a single line of code and explaining how lots of different things affect (or don't affect) it, rather than setting out to explain those different things generically). If time allows, I may rewrite it as one. There's lots of good stuff to talk about with split.

Update: added links to defined-or stuff; thanks graff, Trimbach, hardburn
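To make the behaviors the node above describes a bit more concrete, here is a small summary sketch; the outputs in the comments follow that explanation rather than anything I have verified in every corner case:

#!/usr/bin/perl
use strict;
use warnings;

print join(',', split //,  "abc"),      "\n";   # a,b,c  -- the leading empty field is discarded
print join(',', split //,  "abc", -1),  "\n";   # a,b,c, -- a negative limit keeps the trailing empty field
print join(',', split /b/, "abcb"),     "\n";   # a,c    -- trailing empty field removed with the default limit
print join(',', split /b/, "abcb", -1), "\n";   # a,c,   -- kept when the limit is negative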

 

Perl as an example of the New Jersey
"Worse is Better" approach

All in all, Perl is a good example of the New Jersey "Worse is Better" approach. It works, it is here now, but it's far from being a small, simple, and bug-free implementation. You should generally consider Python or TCL+C for sizable projects. But for small to medium size projects (let's say below 50K lines) Perl provides excellent integration with the capabilities of the existing Unix text-oriented filters, given its text-oriented approach and its vast number of operators and built-in functions.

If you know Perl you can simply ignore the existence of a lot of traditional Unix utilities that probably outlived their usefulness anyway. You can connect small Perl scripts using pipes and accomplish the same tasks that old Unix utilities do, but more easily. So the traditional "worse is better" holds, because "better is the enemy of done".

The performance problems attributed to the text orientation are often overblown. I think that expressing things using text primitives and pipes with a set of filters (written in C if efficiency is really important) leads to simpler and more manageable programs than other notations, including the object-oriented approach. In comparison with OO-style solutions you trade fashionable notation and buzzwords for a lot more power. I do not recommend using or studying Perl's OO features: if you need them, then Python is a better deal.

Perl's conversions-on-demand between arithmetic and string types lead to problems that are well known to seasoned PL/1 programmers -- all is well until Perl makes a wrong decision, and then you end up searching for the error for a week or more.
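A few examples of what the conversions-on-demand quietly do (the last line is the kind of silent wrong decision the paragraph above warns about):

#!/usr/bin/perl
use strict;

# warnings deliberately left off so the last comparison runs quietly
my ($n, $m) = (2, 3);
print "2" + "3", "\n";               # 5  -- strings converted to numbers for +
print $n . $m,   "\n";               # 23 -- numbers converted to strings for .
print "equal\n" if "10" == "10.0";   # numeric comparison after conversion: true
print "equal\n" if "abc" == 0;       # "abc" silently numifies to 0, so this is true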

Again, as with people, flaws in a language are often a continuation of its strong points -- some are inherent to the language and beyond redemption; others could in theory be fixed, but it's probably too late to do so. Anyway, one needs to live (and survive) in this far from perfect world. Do not believe extremists who claim that a single language is perfect for all tasks; that's simply not true. Perl has its place, and it's different from the place of Python, although the two overlap a lot.

In any case, the rise of Perl to prominence (like that of all other popular languages) is to a certain extent accidental, a result of being at the right place at the right time, and is not directly connected with the quality of the language. Larry Wall is a very controversial language designer, and for one good decision he probably makes another bad one. Here being a professional linguist does not help; you need to be a computer scientist. Programming languages are a class of their own, and a blunder is a blunder, no matter how many times somebody repeats the magic words "natural language". Although not exactly about the false analogy with natural languages, Alan Perlis once aptly said:

When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.

Conclusions

The situation with a language that is excessively complex becomes less tolerable when you need to program in several languages simultaneously. This "interoperability" aspect is probably the most troublesome problem when one is using Perl along with C or another major compiled language. I think that people need to learn some TCL just to restore sanity ;-). At the same time, it would be a mistake to abandon Perl due to those problems. Those problems are pretty universal, resistance is futile, and most programmers who need to work in a Unix environment in fact end up having to learn several languages, including C/C++, TCL, bash/ksh and Perl. If you need to work on both Windows and Unix, the situation is even worse: even if you need to work in two environments, you still have only one head ;-).

Some folk have argued that you should always use the "right language for the job" -- but if there are so many of them at some point one cannot afford to divert too many resources into "treading water". So minimizing your "language collection" by maximally utilizing Perl might be a viable strategy and there are always some ways to minimize the damage from incompatible or badly designed features outlined above.

Even awareness of the existence of warts might help to lessen frustration with the language and to find new, innovative ways to compensate (for example, you definitely need a Perl-aware editor if you try to write or maintain a program larger than, say, 10K lines; vi/vim are not enough). Beautifiers and syntax highlighters are two other tools that should be used. Humans are immensely flexible and can survive in situations where other animals give up ;-)

Paradoxically, complex, non-orthogonal Perl is sometimes better than the combination of shell and small Unix utilities for Unix scripting. Just think about the collection of Unix utilities, with their immense number of parameters and special cases, as yet another (pretty weird) language and you will understand why ;-). I definitely prefer Perl with all its warts.


P.S. In Perl 6 Larry Wall seems to have managed to arrange another nice "walltrap" for unsuspecting users: he redefined the for loop and included a new loop construct that starts with the keyword "loop" but has semantics close to the old C for loop. When I listened to him at Usenix in 1992 I thought that "enough is enough" and that this might be the straw that breaks the camel's back ;-)

P.P.S. This is a very limited effort to help learners of Perl (my students first of all, but Internet learners as well, although some things might not be understandable without the lectures). The official site for the Perl language is www.perl.com. Like all official sites it's semi-dead ;-). It is also slightly O'Reilly-biased and in some places contains O'Reilly self-promotional materials. Beware, it will never tell you the true value of some O'Reilly Perl books :-). Other recommended sites include Perl Mongers, use Perl and Perl user groups. Additional links are at Recommended Links. Some book reviews can be found at Recommended Books.



