
Perl Wiki as a System Administrator Tool


Introduction

"The lyfe so short, the craft so long to lerne,''

Chaucer c. 1340–1400
(borrowed from the bash FAQ

The main feature of modern enterprise IT is mind-boggling complexity. Often it reaches completely ridiculous levels that remind me of the USSR's over-centralized economy, where most productivity was lost to the friction created by excessive centralization and the related overcomplexity. That means that system administrators make a perfect case study of "drowning in the information ocean". For example, memory about a complex software system, such as an operating system, exists mainly in two forms: people ("oldtimers") and documentation. People remember how things work; the best of them have knowledge about a particular software system (often undocumented) that can only be acquired over many years of working with multiple versions of that system. Sometimes they record this information somewhere, but mostly they don't. When they leave, the information is lost. If this happens with important software used in the corporation, it is called corporate amnesia. So sometimes corporations hire experts to explain to them how their own software works (and the pay is actually very good ;-). This is something like reverse industrial espionage, or, maybe, "software archeology". See for example my notes on Frontpage, a program with which I have 20 years of daily usage experience. I still learn new things about it almost weekly, and even started to use it as a poor man's knowledge management system (Frontpage as a poor man personal knowledge management system).

The same is true about Perl, Python, Linux, and many other complex software programs and operating systems. 

For a modern software specialist or system administrator the problem is similar. The complexity of modern IT makes just orienting oneself in this maze of badly structured and often unintelligible information an almost impossible undertaking for humans. For example, one typical problem for system administrators is that you encounter some problem, you study it, and at last you solve it. Then, when you face the same problem again a year and a half later, you have already forgotten everything about it and need to rediscover everything again.

In a way, Unix system administration became a classic case of an occupation in which even the most intelligent people can't do well without supporting tools. Its complexity is definitely above human abilities, even those of the brightest humans. That's probably one reason why job hunters now want only candidates 100% compatible with the job description. Even an intelligent person will still need several months to get used to a new environment and tools, even if he/she is an excellent specialist in Unix. Each corporate environment is now idiosyncratic, with its own unique set of tools and approaches to common problems. That's a very stupid approach, like many other things in modern IT, but we have what we have.

Another problem is that clueless IT brass brings system after system into the enterprise environment without any concern for unintended consequences (the boomerang effect, or blowback in CIA terms). They are mostly driven by fashion and the desire to get a higher annual bonus. I touched on this theme a little bit in my old article (written before Oracle bought Sun) Solaris vs Linux, in the context of the multiple flavors of Unix existing in the modern datacenter. Each new flavor means a new headache, independently of its advantages in other areas (such as the price/performance ratio, which is tremendously higher for Intel-based servers than for RISC). So while Linux servers are definitely a "good thing" and have the best price/performance ratio, adding them ten years ago to a datacenter where Solaris, HP-UX and AIX were already present was not such a good idea for the existing system administrators. They needed to learn a new, different flavor of Unix and support it on a daily basis. This enormity of additional information creates a severe case of information overload. Now all those wars are in the past and Linux on Intel is the dominant enterprise OS, but Linux itself split into multiple flavors, so the same war is now replayed on a different, lower level: for example, on the level of RHEL vs SLES (and to a lesser extent Debian vs Ubuntu). Add to this the civil war, typical for the Linux camp, between laptop-oriented and server-oriented developers (with the recent defeat of the server-oriented developers -- the introduction of systemd in RHEL 7 and SLES 12) and the situation looks painfully similar to the "Unix wars" between Solaris, Linux, AIX and HP-UX.

Add to those technical challenges the dysfunctional nature of the social environment in the modern datacenter (see Surviving a Bad Performance Review), with a heavy dose of sociopaths in management ranks (especially micromanagers), and you now understand a little bit more about the real nature and challenges of the work of a Unix sysadmin ;-)

Now it is virtually impossible for any individual system administrator to know "all" of Linux or "all" of Solaris. To say nothing of the dozen or so complex subsystems that are added on top of it (Apache, Oracle or another database, OpenView/Tivoli/Nagios, BMC, Data Protector, you name it).

That's why crutches in the form of a personal knowledge database have now become a must for any Unix sysadmin who respects himself and his own mental health. Having a personal knowledge database also helps to integrate knowledge of other people found on the Internet and to relate it to your own environment and goals. It's no longer sufficient for a sysadmin to use a regular paper lab journal. You need more advanced information technologies to weave all the necessary knowledge into an understandable tapestry. More recently, the law of unintended consequences has come to be used as an idiomatic warning that any excessively complex system tends to create unanticipated and often undesirable outcomes.

Now we need web-based content management and research tools that help to maintain accumulated knowledge, retrieve it on request, and integrate raw, primary data such as emails and Teraterm or PuTTY logs.

Actually there are four  important and overlapping aspects of system administration that are often overlooked:

Systematizing your own knowledge

  "Folly is a more dangerous enemy to the good than evil. One can protest against evil; it can be unmasked and, if need be, prevented by force. Evil always carries the seeds of its own destruction, as it makes people, at the least, uncomfortable. Against folly we have no defense." 

Dietrich Bonhoeffer

A folly is usually defined as

  1. Lack of good sense, understanding, or foresight: an act of folly.
  2. An act or instance of foolishness: regretted the follies of his youth.
  3. A costly undertaking having an absurd or ruinous outcome.

The list of blunders that sysadmins often commit is long and probably deserves its own study. They are also aptly called SNAFUs. And the only self-defense is acquiring the required knowledge. Here is where your own personal wiki can help.

Traditionally a wiki is considered to be oriented toward some audience. But this is not really necessary. A wiki can serve as an engine for organizing your own knowledge: more useful and flexible than a blog, a traditional lab journal, or a helpdesk system. With the power of modern desktops, a wiki can be installed in a virtual machine and used as a daily tool. Or it can be maintained on some cloud-based server.

As Wikipedia has shown, the wiki format allows systematizing a vast amount of knowledge in relatively accessible form. Generally, hyperlinked text is a very good tool for systematizing knowledge, and the first implementations of hypertext were designed specifically for that purpose. Tags provide good structuring of knowledge into categories. It remains a very good tool even now that there are multiple alternatives. And what is interesting is that raw HTML is more useful (and probably more powerful) than all those "all dancing, all singing" complex frameworks.

Modern operating systems are an extremely complex maze of complex subsystems and obscure features, remembering which is simply impossible. And if you learn some feature and do not use it for several months, your knowledge disappears. In a year you usually remember very little, and the most important details are gone forever. If in a year you face the same problem again, you need to rediscover all the intricate details of your former solution again. At this point some of your best sources might be gone or buried under a ton of spam links in Google and other search engines (it's funny, but Bing is sometimes more useful than Google for Linux information searches).

That means that you are always in the situation of the blind men and the elephant. Some tools are not used often, and all information about them is forgotten from one use to another. So, when necessary, you need to relearn them; then, because you do not use them, you forget all the information again. Another typical problem is that sometimes you get a nice source of information, the location of which you no longer remember, or a tip that is no longer available on the Web. And search engines do not help (Google is actually not very good at finding relevant technical information, and its top links are often junk; paradoxically, Bing or Yandex are sometimes better). So an important part of your knowledge is simply gone.


Yet another typical problem is that you saved an important part of your protocol of work, say, with Teraterm, but now you can't find it using find or grep. Linking such files to your wiki helps immensely here. I suffered from this problem for a long time until I learned the discipline of linking all such files into the relevant web pages. A minimal sketch of such an index generator follows.
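
The following Perl script is a minimal sketch of that discipline: it generates an HTML index of session logs, so each log can be hyperlinked from the relevant notes page. The log directory, the .log extension and the file:// links are my assumptions; adjust them to your own Teraterm or PuTTY logging setup.

#!/usr/bin/perl
# Sketch: build an HTML index of terminal session logs so that each log
# can be linked from the relevant notes page. Paths are hypothetical.
use strict;
use warnings;
use POSIX qw(strftime);

my $logdir = $ARGV[0] // "$ENV{HOME}/session-logs";   # assumed location

opendir my $dh, $logdir or die "cannot open $logdir: $!";
my @logs = sort grep { /\.log$/ } readdir $dh;        # assumed extension
closedir $dh;

print "<html><body><h1>Session logs</h1><ul>\n";
for my $f (@logs) {
    my $mtime = (stat "$logdir/$f")[9];               # modification time
    my $date  = strftime('%Y-%m-%d %H:%M', localtime $mtime);
    print qq{<li><a href="file://$logdir/$f">$f</a> ($date)</li>\n};
}
print "</ul></body></html>\n";

Run from cron with the output redirected into a page inside the knowledge base (say, logs.html), and link that page wherever the logs are discussed.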

System administrators often spend hours trying to figure out some poorly documented task. The quality of documentation from most vendors is dismal, if not worse. For example, it is not trivial to set the HP iLO IP address on boot if the server, such as a DL360 G7, came from another datacenter that uses a different network. If you do not document this procedure and make it "findable" in your electronic collection of notes, the next time you will step on the same rake and again need to spend several hours figuring out the details, especially if this happens two years after your company switched to another vendor, for example Dell or Cisco UCS.

Yet another problem is that the corporate mail engine is usually brain-dead (I am thinking about Lotus Notes here), but you can convert mail into Web format and use some free Web mail engine, or even Thunderbird, as an important part of your knowledge base. A lot of useful information travels via corporate email, and you need to find some way to reference and integrate the most important mails into your personal knowledge database without excessive extra effort. I would like to warn you that forwarding corporate mail to an external server is against company policy, and you can face consequences if you try to make browsing your mail easier on your smartphone by forwarding all or selected mail to a Webmail provider such as Gmail, Yahoo Mail or Hotmail. Only corporate-approved applications should be used.
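
One policy-safe way to integrate mail is to export selected messages locally to an mbox file and convert them to plain HTML pages that your notes can link to. Below is a minimal sketch under stated assumptions: the classic "From " delimiter handling is deliberately naive (no ">From " unescaping), and the file names are hypothetical.

#!/usr/bin/perl
# Sketch: split a locally exported mbox file into one HTML page per
# message, so selected mails can be hyperlinked from notes pages.
use strict;
use warnings;

my ($mbox, $outdir) = @ARGV;
die "usage: $0 file.mbox outdir\n" unless defined $mbox && defined $outdir;
mkdir $outdir unless -d $outdir;

# Collect messages; a line starting with "From " begins a new message.
open my $in, '<', $mbox or die "cannot open $mbox: $!";
my @messages;
while (my $line = <$in>) {
    push @messages, [] if $line =~ /^From /;
    push @{ $messages[-1] }, $line if @messages;
}
close $in;

my $n = 0;
for my $msg (@messages) {
    open my $out, '>', sprintf('%s/msg%04d.html', $outdir, ++$n) or die $!;
    print $out "<html><body><pre>\n";
    for my $line (@$msg) {
        # escape HTML metacharacters so the mail displays verbatim
        (my $l = $line) =~ s/&/&amp;/g;
        $l =~ s/</&lt;/g;
        $l =~ s/>/&gt;/g;
        print $out $l;
    }
    print $out "</pre></body></html>\n";
    close $out;
}
print "wrote $n messages to $outdir\n";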

Drowning in the ocean of information

The fact that the amount of information a typical sysadmin needs to keep in his/her head greatly exceeds the capacity of that head is indisputable. For a typical sysadmin this creates situations in which a problem that was already resolved several months in the past necessarily gets treated as new, because all knowledge about the details of the resolution has been lost. Often, even if you know that notes exist, they are impossible to find, and the chances become slimmer as the time interval grows. For example, when a similar situation happened a year or two ago, in most cases you have forgotten important, often crucial details, even if finding them out cost you a lot of time and nerves. In such cases you typically need to relearn and rediscover the solution to the problem, and often step on the same rake if the problem is tricky and the documentation is incomplete or incorrect (which is often the case in all but the most trivial cases; it looks like vendors want to keep you on a short leash, dependent on their tech support). That's a very frustrating situation.

Vendor web sites and technical knowledge bases sometimes help, but good materials, informative for the particular situation, are often very difficult to find. Sometimes they are buried somewhere where even the vendor's own site search engine can't find them. Sometimes the title or location has been changed in such a way that you can't find the document by searching for its title or keywords, and instead need a quote from it. Sometimes the link that you saved does not work anymore. This is a very typical situation for IBM, HP and Oracle. All three really like to move material around, making your old links obsolete. Sometimes it looks like a strange, sadistic game they intentionally play with the users of their products, inflicting suffering to test their loyalty ;-)

In any case, the situation when you can't find the most valuable document on a vendor site, even though you know that it exists, is pretty common. And searching for it often requires a lot of time, sometimes hours, that are essentially lost. If you are under pressure, that increases the drama. If you have both the recorded link (which is now invalid) and a summary and a couple of quotes, your task is much simpler and you can find the document in minutes if not seconds. And if you saved some personal notes from the previous incident, maybe you do not need the document at all.

Enterprise helpdesk is not a help but a trap ;-)  

The typical way of systematizing such knowledge, via the helpdesk ticket system, suffers from several shortcomings. First of all, all corporate helpdesk systems known to me are junk. It is difficult to work with them, and that kills any desire to use them for systematizing knowledge. Secondly, the bureaucratic perversions that are typical of large enterprise helpdesk implementations make them a passionate object of hate. It looks like there is always a person in a large corporation who has completely perverted ideas on how to use the helpdesk, and a position from which he can enforce those ideas on the users under the disguise of "improvements", which in this case amounts to creating additional red tape. This situation is so typical that it does not make any sense to write about it. I suspect that it reflects some law under which large corporations operate, similar to Parkinson's Law or the Peter Principle. Hatred is a natural feeling when you are forced to do something unnecessary or even stupid day after day. Another typical example here is the so-called "automatically generated tickets".

Importance of creating and maintaining a personal database

That means that the creation of your own personal knowledge base is of paramount importance. You can start with write-ups of solved tickets and gradually expand and hyperlink them on a simple web site.

Simplicity at the very beginning is of paramount importance: if you overshoot, you will soon abandon your project. I strongly recommend the free version of Frontpage, with no wiki engine, as your starting point. In this case you do not even need Apache or any other server on your desktop.

You need to understand that ease of editing, and scheduling a fair amount of time for recording and reviewing your activities and experience, are a much more important factor than the selection of a wiki engine. Often you can do it early in the morning or late at night, when there are fewer distractions and you can concentrate on the task at hand. But you should do it each and every day.

That means that this simple format will be adequate for at least the first couple of years, if not a decade. But as the number of pages in your knowledge base grows, you eventually might need versioning, the ability to tag content and search the database using both tags and search phrases, a list of the most recent entries, a list of the most often accessed entries, and other helpers. A sketch of one such helper follows.
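
For example, a "most recent entries" page can be regenerated with a few lines of Perl. This is a minimal sketch under stated assumptions: the knowledge base lives in one directory tree of HTML files (the path below is hypothetical) and modification time is a good enough proxy for "recent".

#!/usr/bin/perl
# Sketch: regenerate a "most recent entries" page for a flat-file
# knowledge base. The root path is hypothetical.
use strict;
use warnings;
use File::Find;
use POSIX qw(strftime);

my $root = $ARGV[0] // '/srv/notes';
my @pages;
find(sub {
    push @pages, [ $File::Find::name, (stat)[9] ] if -f && /\.html?$/;
}, $root);

# newest first, keep at most 20
my @recent = grep { defined } (sort { $b->[1] <=> $a->[1] } @pages)[0 .. 19];

print "<html><body><h1>Recently changed</h1><ul>\n";
for my $p (@recent) {
    my $date = strftime('%Y-%m-%d', localtime $p->[1]);
    print qq{<li>$date <a href="$p->[0]">$p->[0]</a></li>\n};
}
print "</ul></body></html>\n";

Tagging and full-text search would be separate helpers built on the same flat-file layout.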

That is especially important if you need to support two or more flavors of Unix, for example Solaris and RHEL. In this case you typically forget almost everything about the less used OS from one encounter with a problem to the next; your records are the only way to stay sane in this environment. I had a pretty painful experience of supporting Linux and Solaris simultaneously at one point in my career. At the beginning I hated Linux, and as I learned more about Linux and forgot most of what I knew about Solaris, my feelings reversed ;-). Actually, sysadmins' attachment to their best-known OS has a very strong emotional component (love-hate). And I now understand perfectly well those Linux sysadmins who passionately hate Solaris, as well as those Solaris sysadmins who passionately hate Linux. Two systems of this level of complexity are just too much for a single human being, and hate is just a manifestation of frustration with the experience of using a lesser-known OS in a complex enterprise environment. See Solaris vs. Linux for details.

And when you understand that Linux in the enterprise environment has a split personality (RHEL and SLES), very soon you will hate your job, as there is no way a sysadmin can be comfortable supporting three complex OSes. Even two close OSes such as RHEL and SLES are a huge challenge, unless you find a constructive way to deal with this superhuman level of complexity. And I think creating your own knowledge base is the only constructive way to lessen the tremendous frustration with the excessively complex environment in which Unix sysadmins need to survive -- and not only solve challenging problems, but solve them quickly.

Here is what Sandra Henry-Stocker wrote about this problem in her ITworld article (7 habits of highly successful Unix admins, April 05, 2014):

...If you do figure out why something broke, not just what happened, it's a good idea to keep some kind of record that you or someone else can find if the same thing happens months or years from now. As much as I'd like to learn from the problems I have run into over the years, I have too many times found myself facing a problem and saying "I've seen this before ..." and yet not remembered the cause or what I had done to resolve the problem. Keeping good notes and putting them in a reliable place can save you hours of time somewhere down the line.

... ... ...

In general, Unix admins don't like to document the things that they do, but some things really warrant the time and effort. I have built some complicated tools and enough of them that, without some good notes, I would have to retrace my steps just to remember how one of these processes works. ...In fact, I sometimes have to stop and ask myself "wait a minute; how does this one work?" Some of the best documentation that I have prepared for myself outlines the processes and where each piece is run, displays data samples at each stage in the process and includes details of how and when each process runs.

Digging out the information you once wrote down, but now do not know where you put it

The key here is to keep all your information as plain vanilla HTML or another "easy" text format.

In this case you can use generic tools for search. Such search can be done on multiple levels.

There are also more specialised tools.
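
Before reaching for specialised tools, the generic level is often enough. Below is a minimal Perl sketch of a recursive grep over a tree of plain HTML/text notes; the file extensions and case-insensitive matching are assumptions, not requirements.

#!/usr/bin/perl
# Sketch: the "generic tools" level of search -- a recursive grep over a
# tree of plain HTML/text notes, printing file, line number and match.
use strict;
use warnings;
use File::Find;

my ($pattern, $root) = @ARGV;
die "usage: $0 pattern [directory]\n" unless defined $pattern;
$root //= '.';
my $re = qr/$pattern/i;                       # case-insensitive by assumption

find(sub {
    return unless -f && /\.(html?|txt)$/;     # assumed extensions
    my $path = $File::Find::name;
    open my $fh, '<', $_ or return;
    while (my $line = <$fh>) {
        print "$path:$.: $line" if $line =~ $re;
    }
    close $fh;
}, $root);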

Three types of search

What is called an unstructured file actually contains a lot of structure. Based on this structure we can distinguish three typical types of information about a given file. For HTML files there is a special tag that allows encoding keywords: the <meta> element within the head tag can be used to specify page description, keywords, author, and other metadata (HTML head elements).
Define keywords for search engines:

<meta name="keywords" content="HTML, CSS, XML, XHTML, JavaScript">

Define a description of your web page:

<meta name="description" content="Free Web tutorials on HTML and CSS">

 Define the character set used:

<meta charset="UTF-8">

Define the author of a page:

<meta name="author" content="Hege Refsnes">

These meta tags are pretty useful if you put your information in relatively small HTML files. Otherwise they are less important. In a way they resemble the header of the good old email format.
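
Once keywords are recorded this way, harvesting them is easy. Here is a minimal sketch that collects <meta name="keywords"> values from a tree of small HTML files and prints a keyword-to-files index; the naive one-line regex parse matches the examples above, which is an assumption about how your files are formatted (a real indexer would use an HTML parser).

#!/usr/bin/perl
# Sketch: harvest <meta name="keywords"> tags from a tree of small HTML
# notes and print a keyword -> files index. Needs Perl 5.14+ for s///r.
use strict;
use warnings;
use File::Find;

my $root = $ARGV[0] // '.';
my %index;                                     # keyword => list of files

find(sub {
    return unless -f && /\.html?$/;
    open my $fh, '<', $_ or return;
    my $html = do { local $/; <$fh> };         # slurp the whole file
    close $fh;
    # naive one-line parse, matching the meta tag style shown above
    if ($html =~ /<meta\s+name="keywords"\s+content="([^"]*)"/i) {
        push @{ $index{ lc $_ } }, $File::Find::name
            for map { s/^\s+|\s+$//gr } split /,/, $1;
    }
}, $root);

for my $kw (sort keys %index) {
    print "$kw: @{ $index{$kw} }\n";
}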

Switching to a more complex wiki engine

One well-supported wiki engine that is definitely suitable for a personal database is the MediaWiki engine. But the key problem with it is that it uses a special wikiwiki markup, which I passionately hate. I do not consider it an improvement over plain vanilla HTML. On the contrary, I consider it less flexible and more obtuse. Still, loading your information into this format is not a wasted effort, but a very important learning activity. At the minimum you will learn enough details to contribute something to Wikipedia. But again, I am ambivalent about this path, as I hate the wikiwiki markup language.

More on HTML vs. WikiWiki markup language

Typically wikis use a simplified, very primitive markup language, with the implicit (and wrong) assumption that HTML is too difficult. Although this is a peripheral issue, a lot of coding effort is sunk into reinventing the bicycle. At some point the wikiwiki markup language becomes a problem rather than a solution, as outside very simple cases it is neither simple nor transparent. That's actually a real problem for Wikipedia, which has outgrown its humble beginnings and has wikiwiki as a legacy that creates problems rather than solving them. Wikipedia might be considered a case where wikiwiki gradually became worse than HTML, as the substantial financial resources of the Wikimedia Foundation would definitely allow it to improve and extend any high-quality open source HTML editor (for example the Netscape HTML editor). There is a plugin that translates the MediaWiki wikiwiki format into XHTML (Perl-XHTML-MediaWiki).

Some wikis have an HTML metatag, so they can accept raw HTML. Wikis which have this capability are preferable to those which do not (it is also possible to convert from HTML to a wiki formatting language, with some loss of formatting). Some can also export a page as pure HTML; this is another useful extension that has real value.

It's better if wiki stores content as regular UNIX files, rather than in a database

While usage of a database can help in the implementation of such things as tags, generally it is more important to be able to process files using regular Unix tools, without the necessity of extracting content from the database and then putting it back. File-based wikis scale up to, let's say, 100K pages, as such wikis are not used as intensively as Wikipedia.

The database (and not necessarily a relational one), if any, should contain only metadata (we should learn something from the NSA, shouldn't we ;-)

Concept of categories and flat files representation

Categories (or tags) are used in blogs (and recently in some Web mail engines, like Gmail) to create a non-relational database of posts. Each post can belong to several categories. In terms of flat files, we can think about categories as softlinks from the "base" directory where the post is stored to several other directories. This way a post can be present in multiple "generated" pages.
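
Here is a minimal sketch of that scheme in Perl. The directory layout (a categories/<name> tree next to a base posts directory) and the sample file name are hypothetical.

#!/usr/bin/perl
# Sketch: file one post under several categories by soft-linking it from
# its base directory into one subdirectory per category.
use strict;
use warnings;
use File::Basename qw(basename);

sub categorize {
    my ($post, @categories) = @_;              # $post is an absolute path
    for my $cat (@categories) {
        my $dir = "categories/$cat";
        mkdir $dir unless -d $dir;
        my $link = "$dir/" . basename($post);
        next if -l $link;                      # already categorized
        symlink $post, $link or warn "symlink $link: $!";
    }
}

# Example: the same post appears under two categories.
categorize('/srv/notes/posts/ilo-setup.html', 'hardware', 'hp');

A "generated" category page then only needs to list the corresponding directory.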

Keeping users and colleagues informed

One of the most common (and the most devastating in its consequences) problems of large companies (and any large bureaucracy) is that the left hand does not know what the right hand is doing. This is true for the actions of sysadmins and users, as well as for the actions of different sysadmins on the same server (who can be on different continents, serving different shifts, for example a German team and a US team).

If you use Frontpage the solution is trivial: you publish the "public" section of your website and keep the "private" section on your hard drive.

A similar approach is possible with wiki.

The problem of "non-informed" users is the most acute, and often leads to user actions which would be easily preventable if they had just a little bit more information. Most users like to know when there is new functionality available, or when resources are down or unavailable. Such content can be maintained collaboratively, and added to and corrected by users, not only by you and other sysadmins. This helps, if not in eliminating, then at least in lessening the "Kremlin tower syndrome" of over-centralized, outdated and insufficient Intranet content for users, typical for corporate Web sites.

E-mail communication has the deficiency that after an email is read, it is forgotten and/or deleted. Also, sometimes it is not read at all. While a web-based email client is useful, and some standalone clients like Thunderbird are quite powerful, systematization of information is difficult using e-mail. It can and should be used, and I find email to be a better tool for informing myself and users about changes than helpdesk engines, but eventually converting emails to wiki pages is a better way to systematize knowledge.

That does not mean that you should disclose all information to users. Some pages should be private and oriented only toward sysadmins.

What is important is that such a wiki should be developed collaboratively with the users. Just giving users the ability to participate lessens their frustration with the environment.

Not only does such an "information portal" make other administrators and users happier, because they are better informed and not just "lurking in the dark"; it also prevents unnecessary helpdesk tickets, which makes your life as a sysadmin easier as well.

That means that efforts in creating such a local information portal can pay off rather quickly.

Integration of blog capabilities into wiki

There is a growing trend to combine wiki technology with blogging capability to provide better collaboration and communication solutions. SocialText and Wetpaint are two examples of this direction. In addition, several of the existing content management systems are beginning to add wiki and blog functions to their product roadmaps.

Pure wiki functionality is not enough and you need at least two additional functions: 

Another approach was described at WikiBlogIntegration:

Blogs and Wikis can be integrated easily if:

I appear to have gotten this to work with B2 and MoinMoin. But I had to hack MoinMoin quite a bit to get the mail in a reasonable format whereby I'd actually want it blogged.

I now also have a hack to B2 that automatically inserts links to wiki pages (with wiki names) that exist, but only if not in between angle brackets - I don't want to have links inside of links as nested anchors haven't been allowed for a long time in HTML land.

Vandalism and gradual deterioration of quality of publicly edited Wiki

The balance of author control vs. democracy in wikis is slanted toward democracy, and thus has some negative side effects like vandalism and the "lowest common denominator" problem.

The "lowest common denominator" problem is related to observable deterioration of initial high quality that some pages used to have, but which later were destroyed by enthusiastic but clueless followers with strong opinions. This problem of deterioration of pages quality with time is clearly visible is Wikipedia, but might be negligent in small wiki with limited audience. It is much less of a problem with internal informational portals. 

In Wikipedia those two effects in certain cases led to such a serious deterioration of the quality of certain technical pages that it is sometimes called "GreyPedia" or "DummyPedia". It is unclear how to solve this, as less competent members of the community often have strong opinions and can type really fast ;-)

This problem is even more acute on "political" pages. Here Wikipedia's "democratic" approach does not work at all. In certain cases Wikipedia is now forced to limit which registered users can edit the most controversial pages. Also, it is unclear how "three letter agencies" influence the content of such pages. That's why Wikipedia is sometimes called Ciapedia.

Despite this significant (and very difficult to solve) problem, wikis have so far proved to be a workable solution, where abuse does not derail most projects.

Review of wiki engines written in Perl

Perl is the most Unix-sysadmin-friendly language, so wiki engines written in it are far superior to any alternative, unless you program in Python or PHP. It is always better to have an engine in which you can make small code changes than to be just a user.

In this sense the Wikipedia engine, written in PHP, sucks. It sucks despite providing the best combination of capabilities among free wiki engines.

If you know Python, then there is the option of using one of several high-quality Python-based wiki engines. I like Trac.

TWiki

There are several open-source wikis with revision control built in, such as MediaWiki and MoinMoin, but TWiki has its own set of enthusiasts and a strong following. It is a GPL product. The current version is 6.0.0.

One tremendous advantage is that it stores pages as plain files (under version control) and does not use a database.

It is available in the form of a virtual machine too. The Web site claims that:

It is a Structured Wiki, typically used to run a project development space, a document management system, a knowledge base, or any other groupware tool, on an intranet, extranet or the Internet.

... ... ...

Here is a non-comprehensive list of how TWiki is being used:

Some of TWiki's benefits are:

Installing TWiki is relatively easy, but still needs work. I hope, as the beta progresses, we will see improvements in ease of installation and upgrading along with clearer documentation.

First, you must create the directory where you want to install TWiki, say /var/www/wiki. Next, untar the TWiki distribution in that directory. Then you must make sure that the user with rights to run CGI scripts (usually apache or www-data) owns all of the files and is able to write to all files:

# install -d -o apache /var/www/wiki
# cd /var/www/wiki
# tar zxf /path/to/TWikiRelease2005x12x17x7873beta.tgz
# cp bin/LocalLib.cfg.txt bin/LocalLib.cfg
# vi bin/LocalLib.cfg lib/LocalSite.cfg
# chown -R apache *
# chmod -R u+w *

Now copy bin/LocalLib.cfg.txt to bin/LocalLib.cfg, and edit it. You need to edit the $twikiLibPath variable to point to the absolute path of your TWiki lib directory, /var/www/wiki/lib in our case. You also must create lib/LocalSite.cfg to reflect your specific site information. Here is a sample of what might go into LocalSite.cfg:

# This is LocalSite.cfg. It contains all the setups for your local
# TWiki site.
$cfg{DefaultUrlHost} = "http://www.example.com";
$cfg{ScriptUrlPath} = "/wiki/bin";
$cfg{PubUrlPath} = "/wiki/pub";
$cfg{DataDir} = "/var/www/wiki/data";
$cfg{PubDir} = "/var/www/wiki/pub";
$cfg{TemplateDir} = "/var/www/wiki/templates";
$TWiki::cfg{LocalesDir} = '/var/www/wiki/locale';

Here is a sample section for your Apache configuration file that allows this wiki to run:

ScriptAlias /wiki/bin/ "/var/www/wiki/bin/"
Alias /wiki "/var/www/localhost/wiki"
<Directory "/var/www/wiki/bin">
    Options +ExecCGI -Indexes
    SetHandler cgi-script
    AllowOverride All
    Allow from all
</Directory>
<Directory "/var/www/wiki/pub">
    Options FollowSymLinks +Includes
    AllowOverride None
    Allow from all
</Directory>
<Directory "/var/www/wiki/data">
    deny from all
</Directory>
<Directory "/var/www/wiki/lib">
    deny from all
</Directory>
<Directory "/var/www/wiki/templates">
    deny from all
</Directory>

TWiki comes with a configure script that you run to set up TWiki. This script is used not only on initial install but also when you want to enable plugins later. At this point, you are ready to configure TWiki, so point your browser to your TWiki configure script, http://www.example.com/wiki/bin/configure.

You might be particularly interested in the Security section, but we will visit this shortly. Until you have registered your first user, you should leave all settings as they are. If the configure script gives any warnings or errors, you should fix those first and re-run the script. Once you click Next, you are prompted to enter a password. This password is used whenever the configure script is run in the future to help ensure no improper access.

Once you have completed the configuration successfully, it is time to enter the wiki. Point your browser to http://www.example.com/wiki/bin/view, and you are presented with the Main web. In the middle of the page is a link for registration. Register yourself as a user. Be sure to provide a valid e-mail address as the software uses it to validate your account. Once you have verified your user account, you need to add yourself to the TWikiAdminGroup. Return to the Main web and click on the Groups link at the left, and then choose the TWikiAdminGroup. Edit this page, and change the GROUP variable to include your new user name:

   Set GROUP = %MAINWEB%.TiLeggett
   Set ALLOWTOPICCHANGE = %MAINWEB%.TWikiAdminGroup

The three blank spaces at the beginning of each of those lines are critical.

These two lines add your user to the TWikiAdminGroup and allow only members of the TWikiAdminGroup to modify the group. We are now ready to enable authentication for our wiki, so go back to http://www.example.com/wiki/bin/configure. Several options provided under the Security section are useful. You should make sure the options {UseClientSessions} and {Sessions}{UseIPMatching} are enabled. Also set the {LoginManager} option to TWiki::Client::TemplateLogin and {PasswordManager} to TWiki::Users::HtPasswdUser. If your server supports it, you should set {HtPasswd}{Encoding} to sha1. Save your changes and return to the wiki. If you are not logged in automatically, there is a link at the top left of the page that allows you to do so.

Now that you have authentication working, you may want to tighten down your wiki so that unauthorized people do not turn your documentation repository into an illicit data repository. TWiki has a pretty sophisticated authorization system that is tiered from the site-wide preferences all the way down to a specific topic. Before locking down the Main web, a few more tasks need to be done. Once only certain users can change the Main web, registering new users will fail. That is because part of the user registration process involves creating a topic for that user under the Main web. Dakar has a user, TWikiRegistrationAgent, that is used to do this. From the Main web, use the Jump box at the top left to jump to the WebPreferences topic. Edit the topic to include the following four lines and save your changes:

   Set ALLOWTOPICRENAME = %MAINWEB%.TWikiAdminGroup
   Set ALLOWTOPICCHANGE = %MAINWEB%.TWikiAdminGroup
   Set ALLOWWEBRENAME = %MAINWEB%.TWikiAdminGroup
   Set ALLOWWEBCHANGE = %MAINWEB%.TWikiAdminGroup, %MAINWEB%.TWikiRegistrationAgent

This allows only members of the TWikiAdminGroup to make changes or rename the Main web or update the Main web's preferences. It also allows the TWikiRegistrationAgent user to create new users' home topics when new users register. I have included a patch that you must apply to lib/TWiki/UI/Register.pm as well. The patch follows, but you can also download the patch from the LJ FTP site (see the on-line Resources):

--- lib/TWiki/UI/Register.pm.orig	2006-01-04 01:34:48.968947681 -0600
+++ lib/TWiki/UI/Register.pm	2006-01-04 01:35:48.999652157 -0600
@@ -828,11 +828,12 @@
     my $userName = $data->{remoteUser} || $data->{WikiName};
     my $user = $session->{users}->findUser( $userName );
+    my $agent = $session->{users}->findUser( $twikiRegistrationAgent );
     $text = $session->expandVariablesOnTopicCreation( $text, $user );
     $meta->put( 'TOPICPARENT', { 'name' => $TWiki::cfg{UsersTopicName}} );
-    $session->{store}->saveTopic($user, $data->{webName},
+    $session->{store}->saveTopic($agent, $data->{webName},
         $data->{WikiName}, $text, $meta );
     return $log;
 }

Otherwise, new users' home directories will fail to be created and new user registration will fail. Once you have verified that the Main web is locked down, you should do the same for the TWiki and Sandbox webs.

When you are done configuring TWiki, you should secure the files' permissions:

# find /var/www/wiki/ -type d -exec chmod 0755 {} ';'
# find /var/www/wiki/ -type f -exec chmod 0400 {} ';'
# find /var/www/wiki/pub/ -type f -exec chmod 0600 {} ';'
# find /var/www/wiki/data/ -type f -exec chmod 0600 {} ';'
# find /var/www/wiki/lib/LocalSite.cfg -exec chmod 0600 {} ';'
# find /var/www/wiki/bin/ -type f -exec chmod 0700 {} ';'
# chown -R apache /var/www/wiki/*
As I mentioned before, TWiki has a plugin system that you can use. Many plugins are available from the TWiki Web site. Be sure the plugins you choose have been updated for Dakar before you use them.

See also

Foswiki

Foswiki is a fork of TWiki 4.2.3. See Foswiki - Wikipedia, the free encyclopedia.

Like TWiki, it is written in Perl 5 (5.8.8 or higher required). It contains a powerful WYSIWYG editor which can be used instead of the wikiwiki language. Works with RCS version 5.7 or higher.

Has some interesting features like macros.

Creating a Table of Contents: %TOC% automatically creates a table of contents based on the headings present in the topic. To exclude ...

DoxWiki

DoxWiki makes it easy for you to get a wiki up and running quickly. When installed on your computer, DoxWiki weighs in at just over 200KB.

The heart of DoxWiki is a simple Web server that's written in Perl. To get going, all you have to do is start the Web server at the command line; it doesn't seem to like being launched from a desktop shortcut. Then open the wiki's main page in your browser by typing http://localhost:8080 in the address bar.

Instead of saving content to a database, DoxWiki saves the individual files that make up the wiki on your hard drive. The files are small, so it would take quite a lot of them to put a dent in your drive's capacity.

Creating wiki pages is simple. On the main page (called the Wiki Root), you type a name for the new page in one of the fields, and then click the Go button. From there, you add content.

Unfortunately, DoxWiki uses a wiki markup language.

DoxWiki also has a couple of useful features: an HTML export filter and a search engine.

One aspect of DoxWiki that I don't like is the default look of the pages. They're not ugly, but they're bland. You can, however, add a custom logo to your wiki pages.

ikiwiki

ikiwiki is a free, open source wiki application, designed by Joey Hess. It is licensed under the terms of the GNU General Public License, version 2 or later.

ikiwiki is written in Perl, although external plugins can be implemented in any language.

One advantage of ikiwiki is that it stores its pages in a standard version control system such as Git, Subversion or others.

ikiwiki acts as a wiki compiler and supports several lightweight markup languages, including Markdown, Creole, reStructuredText and Textile.

It also has a Perl POD translator, so it can use the Perl POD format (perl-Pod-IkiWiki).

In the simplest case, it can function as an off-line static web site generator, but it can use CGI to function as a normal web-interfaced wiki as well.[3] Login via OpenID is supported.

ikiwiki can be used for maintaining a blog, and includes common blogging functionality such as comments and RSS feeds. The installer includes an option to set up a simple blog at install time.[4]

ikiwiki is included in various Linux distributions, including Debian and Ubuntu.[3]

Use as a (possibly-distributed) bug tracker

Although wikis and bug tracking systems are conventionally viewed as distinct types of software, Ikiwiki can also be used as a (possibly-distributed) bug tracking system; however, "Ikiwiki has little structured data except for page filenames and tags," so its query functionality is not as advanced or as user-friendly as some other, centralised bug trackers such as Bugzilla.[5]

See also

Website Meta Language
Gitit: another wiki which uses a version control system to store pages

Abandonware

UseModWiki

UseModWiki, written in Perl, is the original wiki engine used by Wikipedia. Wikipedia ran this engine from January 15, 2001 until early 2002, before re-implementing it in PHP; their current engine is PHP-based.

UseModWiki is very simple to set up and upgrade. It has a rich syntax, and allows for arbitrary characters in page names. It also supports using some HTML tags instead of the WikiWiki markup. It has other nice features, including search, a list of recent changes, and page history.

For simple Perl-written wikis, UseModWiki is a good choice. The codebase is not that complex, and you can implement your own changes.

Kwiki

The Kwiki motto is "A Quickie Wiki that's not Tricky." This is now abandonware without a working website: http://www.kwiki.org/ is semi-dead, although the download page still works. Installing it is pretty straightforward for a site you admin: just install the Perl package (from CPAN or elsewhere), and then type kwiki-install in a CGI-served directory to create an instance. Installing Kwiki on a server you are not an admin of is more complicated, but doable.

I found the Kwiki markup not very powerful. Some things are impossible with it, such as hyperlinking an arbitrary piece of text to an email address. I also could not find how to link a wiki page with a text link different from the wiki page name. There is also no support for attachments, no support for HTML markup as an alternative to the wiki markup, etc. It is disappointing.

Kwiki can use either RCS or Subversion for version control. Those who wish to use Subversion should check out the corrected Kwiki version, as the CPAN Kwiki does not work with up-to-date Subversion. Kwiki is easily customizable and has several enhancements available.

Generally, however, Kwiki is less powerful than TWiki.

WebKNotes

WebKNotes was an attempt to create a personal knowledge base. It's an old project that lasted probably until 2002. Now it is open-source abandonware. Here is the history:

This whole thing started because I wanted to store information in directories
as plain text and have something that made it accessible via WWW.

At first I made a simple Makefile that looked at the extension and printed
out an HTML fragment based on the extension: "file2htmlf.mk".
Then I wrote a shell script, "dir2html.sh", that ran the makefile on every
file in the dir and wrapped an HTML title and header on it; the resulting
html went in '.index.html'.

Then dirs2html.sh recursively called dir2html and put .index.html in every
directory.

This was slow because it used make, so I abandoned it and just wrote
dir2html.csh, that did a 'foreach' and switch.
Faster, but csh is slow too.
So next, it was dir2html.sh.

Next I turned the thing into a CGI script and generated HTML on the fly,
for a directory passed to it.

At some point, I renamed it to 'notes2html' since it really was a way
of accessing my notes about things, and not dirs in general.

When I put the thing on my web page, I realized that I needed a better
way to control writes and to have the writes suid, since CGI scripts run as
WWW, or nobody: 'doncreate', then later 'faccess'. 'faccess' was a C program
that restricted types of writes to directories, based on a faccess.conf file.
I used it to write logs, counters, as well as notes. 'faccess' was made to
be an suid Exec. This was now a very secure system, and I owned all the files
that came in.

Once the notes database had a few things in it, it was impossible to find
things in it (at least for an outsider). A search mechanism was needed badly.
At first, I wrote some ugly script that used 'find' and 'grep'.
This was slow. I needed something that did them both. I didn't want to write
C code either, because that is something to recompile.

I found a search utility written in perl, for searching HTML trees.
Much hacking later, it worked for notes2html.

Now I had this cool searchable knowledge database system and needed a name.
Knowledge Notes - KNOTES.

Later I converted all of notes2html to perl: notes2html.pl.
Also, I added more perl scripts to subscribe/unsubscribe via email.

Got rid of the use of faccess, as I was running through cgiwrap.

Then I cleaned it all up, modularized it a little, and put all system-
dependent defines in one knotes-define.pl file.

Then I finally put documentation with the thing.

1998: rewrote the search script, made all the scripts work as setuid scripts.

----
Update 1999, January 13:

The license is the Artistic License.
Renamed to WebKNotes, so as not to be confused with KDE's KNotes.

Summary

In essence, a wiki is a poor man's content management system, and most wikis can be used as a content management system. It is especially effective for creating user-oriented documentation and informing users about the status of the systems. See Comparison of content management systems - Wikipedia, the free encyclopedia.

Wiki engines that are written in Perl, that use the Unix file system for storing content, and that allow usage of HTML are preferable.

Despite the fact that it is written in PHP, the current Wikipedia engine (MediaWiki) is good enough for the purpose discussed and can be recommended, despite the fact that it does not meet two of the three criteria mentioned above. As an added bonus, you (and your users) learn the ropes, which gives you the possibility to contribute to Wikipedia, which, despite all the junk it contains and the problems of vandalism and the lowest common denominator, is a useful, open tool that we all should support.

TWiki and Foswiki are two maintained Perl implementations. Both are big, complex software projects.

Most small Perl wiki projects modifiable by "regular folks" are abandonware, and you can re-use them only if you are ready to spend time learning the codebase and adapting it to your needs. Throwing out the wikiwiki markup and switching to HTML is the first recommended step.



Old News ;-)

[Jul 03, 2020] Runbooks – Runbooks by Ian Miell

Notable quotes:
"... We are tired of haphazardly hunting through messy threads of GitHub issues and StackOverflow when we hit a problem ..."
"... We don't want a one-off fix, we want to deepen our understanding of the problem space ..."
"... We want to give people a resource where they can benefit from our experience without taking up our time ..."
Jul 03, 2020 | containersolutions.github.io

Runbooks: The Manifesto

What Is A Runbook?

For this site, a Runbook:

The Runbooks Project – zwischenzugs

Previously, in 2017, I wrote about Things I Learned Managing Site Reliability for Some of the World's Busiest Gambling Sites. A lot of it focused on runbooks, or checklists, or whatever you want to call them (we called them Incident Models, after ITIL).

It got a lot of hits (mostly from HackerNews), and privately quite a few people reached out to me to ask for advice on embedding similar. It even got name-checked in a Google SRE book.

Since then, I've learned a few more things about trying to get operational teams to follow best practice by writing and maintaining runbooks, so this is partly an update of that.

All these experiences have led me to help initiate a public Runbooks project to try and collect and publish similar efforts and reduce wasted effort across the industry.

tl;dr

We've set up a public Runbooks project to expose our private runbooks to the world.

We're looking for contributions. Do you have any runbooks lying around that could benefit from being honed by many eyes? The GitHub repo is here if you want to get involved, or contact me on Twitter.

Back to the lessons learned.

Things I Learned Since Things I Learned

The Logic is Inarguable, the Practice is Hard

I already talked about this in the previous post, but every subsequent attempt I made to get a practice of writing runbooks going was hard going. No-one ever argues with the logic of efficiency and saved time, but when it comes to putting the barn up, pretty much everyone is too busy with something else to help.

In summary, you can't tell people anything. You have to show them, get them to experience it, or incentivise them to work on it.

Some combination of these four things is required:

With a prevailing wind, you can get away with less in one area, but these are the critical factors that seem to need to be in place to actually get results.

A Powerful External Force Is Often Needed

Looking at the history of these kinds of efforts, it seems that people need to be forced – against their own natures – into following these best practices that invest current effort for future operational benefit.

Examples from The Checklist Manifesto included:

In the case of my previous post, it was frustration for me at being on-call that led me to spend months writing up runbooks. The main motivation that kept me going was that it would be (as a minimal positive outcome) for my own benefit. This intrinsic motivation got the ball rolling, and the effort was then sustained and developed by the other three more process-oriented factors.

There's a commonly-seen pattern here:

If you crack how to do that reliably, then you're going to be pretty good at building businesses.

It Doesn't Always Help

That wasn't the only experience I had trying to spread what I thought was good practice. In other contexts, I learned, the application of these methods was unhelpful.

In my next job, I worked on a new and centralised fast-changing system in a large org, and tried to write helpful docs to avoid repeating solving the same issues over and over. Aside from the authority and 'critical mass' problems outlined above, I hit a further one: the system was changing too fast for the learnings to be that useful. Bugs were being fixed quickly (putting my docs out of date similarly quickly) and new functionality was being added, leading to substantial wasted effort and reduced benefit.

Discussing this with a friend, I was pointed at a framework that already existed called Cynefin that had already thought about classifying these differences of context, and what was an appropriate response to them. Through that lens, my mistake had been to try and impose what might be best practice in a 'Complicated'/'Clear' context to a context that was 'Chaotic'/'Complex'. 'Chaotic' situations are too novel or under-explored to be susceptible to standard processes. Fast action and equally fast evaluation of system response is required to build up practical experience and prepare the way for later stabilisation.

'Why Don't You Just Automate It?'

I get this a lot. It's an argument that gets my goat, for several reasons.

Runbooks are a useful first step to an automated solution

If a runbook is mature and covers its ground well, it serves as an almost perfect design document for any subsequent automation solution. So it's in itself a useful precursor to automation for any non-trivial problem.

Automation is difficult and expensive

It is never free. It requires maintenance. There are always corner cases that you may not have considered. It's much easier to write 'go upstairs' than to build a robot that climbs stairs.

Automation tends to be context-specific

If you have a wide-ranging set of contexts for your problem space, then a runbook, paired with a human mind, provides the flexibility to be applied in any of these contexts. For example: your shell script solution will need to reliably cater for all these contexts to be useful; not every org can use your Ansible recipe; not every network can access the internet.

Automation is not always practicable

In many situations, changing or releasing software to automate a solution is outside your control or influence.

A Public Runbooks Project

All my thoughts on this subject so far have been predicated on writing proprietary runbooks that are consumed and maintained within an organisation.

What I never considered was gaining the critical mass needed by open sourcing runbooks, and asking others to donate theirs so we can all benefit from each others' experiences.

So we at Container Solutions have decided to open source the runbooks we have built up that are generally applicable to the community. They are growing all the time, and we will continue to add to them.

Call for Runbooks

We can't do this alone, so are asking for your help!

However you want to help, you can either raise a PR or an issue, or contact me directly.

[Dec 24, 2018] Sysadmin and Documentation Entries in Life

Dec 24, 2018 | hexmode.com

Documentation is very important. I started a new SysAdmin gig a couple of months ago and the people here did a good job of documentation. A lot is documented about the systems themselves and what sort of maintenance contracts we have and that sort of thing. All this is good stuff.

But: What is not documented is the relationships and dependencies between the various sites at this company (at least on the Unix side of the house). They are spread out all over the place: Canada, India, Texas, Louisiana, D.C.

The problem comes in because the administration for DNS and Sendmail was done without documentation.

Then the time came to upgrade DNS. Management got wind of this problem and decided that this was a problem of some urgency. Never mind that their main DNS and mail server was running an un-patched copy of Solaris with the RPC portmapper open to the world -- this problem needed to be fixed now.

The first time through, I discovered that they were depending on internal MX records in DNS to do mail routing. Uh, wrong! So, I prepared to take out the internal MX records. However, this meant that I had to change the sendmail configuration. Since they were running an old, unpatched copy of that, I decided to upgrade sendmail as well. I set up a mailertable and tried to get all the internal MX records into it. In the process, I discovered some relatively unknown machines running SMTP. You'd think they'd want to get rid of them if no one knew about them, eh? But no, the political climate (and some special people) guaranteed that they would stay.

I was able to clean up DNS a bit as a result of this upgrade. I had to; the new bind was far more sensitive about configuration problems than the older bind.

After extensive testing, I put the changes in place. It took longer than expected -- things always do -- but it got done.

Oops! There was no checklist of things to make sure that everything was done right (and this was a rush project, so there was no time to create one), so 6000 users lost their mail for about 12 hours.

Of course, a bigger deal was made of it than was necessary. It was a big deal, but really, no one believed the specter of lost sales of a nuclear power plant because email was down.

Finally, though, all the problems were fixed. What were the lessons I learned?

[Dec 24, 2018] MediaWiki as a community resource

Dec 24, 2018 | hexmode.com

Those of us in the MediaWiki Stakeholders have asked for a meeting with people at the Wikimedia Foundation during the upcoming developer's summit. As is only to be expected, Brion asked:

In T119403#1826003, @brion wrote: What sort of outcomes are you looking for in such a meeting? Are you looking to meet with engineers about technical issues, or managers to ask about formally committing WMF resources?

I copy-pasted Chris Koerner's response:

But I couldn't let it stop there, so I went into rant mode.

Since it seems that some people involved in the shared hosting/non-technical office hour weren't aware of us -- "they don't report bugs" was said over and over, and it just isn't true; @cicalese, for example, has been struggling with submitting code -- we do contribute, and we have a huge investment in the future of MW.

There are a number of large users -- NASA, NATO, Pfizer, oil companies, medical providers and researchers, various government agencies, as well as the numerous "less serious" game-based wikis. The list goes on.

All of these uses are not controlled by the Foundation, but they do feed the mission statement of the WMF by providing a tool that people use to "empower and engage people around the world to collect and develop educational content and to disseminate it effectively and globally."

Even if the content isn't released in the public domain (e.g. it is kept "in house"), it trains people to use the MediaWiki software and allows them to share their knowledge where it is appreciated, even when that knowledge isn't notable enough for a project with Wikipedia's aspirations.

The problem, as I see it, is one of direction and vision. Should WMF developers continue to be concerned only with those who have knowledge to share that the Wikipedia communities allow, or should their efforts enable people to share less noteworthy knowledge that -- while it doesn't meet the bar set for Wikipedia -- is still part of the sum of all human knowledge that it is WMF's vision to ensure everyone has access to?

It's true, some organisations will set up wikis that are not publicly accessible. Even the WMF has some non-public wikis. The wiki, though, is an amazing tool for publishing knowledge, and people have seen the potential (through Wikipedia) of providing a knowledge sharing tool where "anyone can edit."

Without engaging those people who use MediaWiki outside of the WMF, the WMF is missing out on a huge amount of feedback on the software and interesting uses for it that the Foundation hasn't thought of.

There's a virtuous cycle that the Foundation is missing out on.

Simple flat file CMS (or blog-wiki) engine written in Perl - Software Recommendations

See wikimatrix.org/search.php.
Stack Exchange
jm666

Looking for a CMS (or blog or wiki) engine that has the following attributes:

  1. written in Perl (modern style)
  2. uses Plack (with a server such as Starman)
  3. flat-file engine, i.e. requires no external database (read: allows grepping and vi-editing of the content)
  4. correctly allows/handles UTF-8 content
  5. has batteries included, i.e. uses:
    • most commonly used JavaScript libs (Lightbox and the like)
    • any of the currently popular CSS frameworks, like Twitter Bootstrap or Foundation (and/or allows easy template adaptation)
  6. has its own plugin API, so one can easily write plugins for it
  7. not abandoned

And it would be nice if it had a built-in search engine.

So: a simple CMS/blog/wiki that can be used as a sort of publishing platform.

Is there something in the Perl world that fulfills the above 7 criteria?

What I already know

Foswiki would be great, because:

but:

Raystafarian Sep 23 '14 at 9:05

aneuch, ikiwiki, kehei, podwiki, prowiki, twiki, walawiki might be worth checking out. I don't know about all 7 requirements, so I can't leave an answer.

jm666 Sep 23 '14 at 9:25

@Raystafarian Thanks for reminding me to re-check wikimatrix.org/search.php. Maybe there is something new. ;)

Wiki Engine Popularity

The following GoogleSearch hit count result list may give insight into the popularity and spread of the various WikiWikiClones.

Although the raw search result numbers may not be an absolute measure of popularity, they can be used to rank relative popularity. The list is cut off at around 50 entries and some frivolous entries (InternetExplorer, WikiWord, ...) were removed (maybe I've cut out some real WikiEngines as well).


Also see TopTenWikiEngines for a short list of the best WikiEngines based on more subjective opinions.
Another survey done for the WikiCreole workshop at WikiSym August 14, 2006: http://www.wikicreole.org/wiki/WikiPopularity
For comparison, Google yields 12,700,000 for "wiki" and 359,000 for "wikiwiki". Unknown how many of those are purely use of the Hawaiian word.
MediaWiki: 76,900,000 (2008.7.25)
TWiki (TwikiClone): 4,410,000 (2008.7.25)
TikiWiki: 4,190,000 (2008.7.25)
PukiWiki: 4,180,000 (2009.7.25)
MojoMojo: 3,640,000 (2009-01-22)

[Jan 30, 2007] Project details for Salonify

freshmeat.net

Salonify is a Perl script which displays images that you have organized in a directory hierarchy. The Web user can choose ... ... ...

Wikis as personal information managers

Wikis are great tools for sharing information and collaborating on projects. They also make excellent personal information managers. With a personal wiki, all of your to-do lists, notes, and appointments are at your fingertips in a form that's easy to use and maintain.

The problem with most wikis, such as MediaWiki (the engine that powers Wikipedia), is that they take a lot of effort to set up and maintain. You have to deal with not only the wiki software itself but also the Web server and database that underlie the wiki. All of that is overkill for anyone who wants a wiki for strictly personal use.

But there are several applications available to someone who wants to get a wiki working quickly as a desktop tool. They don't require much, if any, configuration.

DoxWiki

DoxWiki makes it easy for you to get a wiki up and running quickly. When installed on your computer, DoxWiki weighs in at just over 200KB.

The heart of DoxWiki is a simple Web server written in Perl. To get going, all you have to do is start the Web server at the command line; it doesn't seem to like being launched from a desktop shortcut. Then, open the wiki's main page in your browser by typing http://localhost:8080 in the address bar.
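A sketch of the launch step (the script's actual name in the DoxWiki distribution may differ; doxwiki.pl here is a stand-in):

$ perl doxwiki.pl     # hypothetical launcher name; starts the bundled Web server on port 8080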

Instead of saving content to a database, DoxWiki saves the individual files that make up the wiki on your hard drive. The files are small, so it would take quite a lot of them to put a dent in your drive's capacity.

Creating wiki pages is simple. On the main page (called the Wiki Root), you type a name for the new page in one of the fields, and then click the Go button. From there, you add content.

Wikis use a markup language based on common keyboard symbols that format text, links (both to other wiki pages and Web sites), and elements on a page. If you don't know wiki markup, DoxWiki doesn't leave you hanging. It comes with a lengthy guide to the markup and how to use it.

DoxWiki also has a couple of other useful features: a nifty export filter and a search engine.

One aspect of DoxWiki that I don't like is the default look of the pages. They're not ugly, but they're bland. While you can add a custom logo to your wiki pages (mine's a frog), I couldn't figure out how to modify the look and feel of the wiki pages.

Going small and simple

If you need a portable and easy-to-use wiki, then you can't get any simpler than Wiki on a Stick and TiddlyWiki. Both Wiki on a Stick and TiddlyWiki are designed to be used on your desktop or to be carried on a USB thumb drive. They're simply HTML files that use CSS and JavaScript to provide formatting and the ability to add pages to your wiki. Well, you're not exactly adding pages as you would in a traditional wiki. Instead, the new content is just appended to the HTML file, and hidden until you click the link to jump to the content.

According to its developer, Wiki on a Stick "can be used as a personal notepad, calendar, repository for software documentation, and many other things." The beauty of Wiki on a Stick is that it is simple. The interface is uncluttered, almost bland. It consists of a heading, a navigation menu, an area for text, and a set of icons. You can easily create and edit pages by clicking one of the icons. When a new version of Wiki on a Stick comes out, you can quickly import the contents of your current wiki to the new version.

Adding and editing content is a breeze. Wiki on a Stick supports a variant of the standard wiki markup -- for example, you enter a + instead of a * to create a bullet. Whenever you edit content, a list of the supported markup appears at the bottom of the page. If you've never used a wiki before, then it might take a bit of time to adapt. If not, then you shouldn't have any trouble learning the formatting codes.

You can edit a Wiki on a Stick with Firefox, Mozilla, and Internet Explorer. While you can browse a Wiki on a Stick with Opera, you won't be able to edit it. Using Konqueror is out of the question, unfortunately. You can also edit the CSS from within the wiki to change its look and feel. If you plan to put the wiki on the Web as a static page, you can configure it so that the edit icon is hidden.

TiddlyWiki

TiddlyWiki is flashier than Wiki on a Stick. It follows the same principles as that application, but does so with a little more pizazz. For example, when you click a link to jump to some wiki content, an in-your-face JavaScript transition brings that content to the top of the page. You can turn that animation off if it bugs you. TiddlyWiki also has a simple built-in search engine that does the job.

TiddlyWiki divides content into two types: Tiddlers and Journals. Tiddlers are general wiki entries -- ideas, notes, to-do lists, or whatever else you want them to be. Journals, on the other hand, are notes that are specific to a day. While I was experimenting with TiddlyWiki, I used Journals to track specific tasks that I needed to do on a particular day, and used one as a personal diary.

You can configure several options in TiddlyWiki. You can set it up to do automatic saves and to create backups. You can also enable regular expression and case-sensitive searches, as well as generate an RSS feed. The latter is useful if you plan to post your TiddlyWiki on the Web. Unlike Wiki on a Stick, though, you can't change the look and feel of TiddlyWiki from the interface. You either have to edit the TiddlyWiki code, or create some sort of custom theme. The TiddlyWiki Web site leads you through that process.

TiddlyWiki has spawned a number of variants. These include GTD TiddlyWiki (aimed at those who follow the Getting Things Done method of personal productivity) and QwikWeb (which is meant to be deployed on a Web site). So, if TiddlyWiki doesn't quite suit your needs, you might be able to find a variant that does.

Unlike Wiki on a Stick, you can view a TiddlyWiki with just about any desktop Web browser, and on the Nokia 770 Internet Tablet. You can edit the content of a TiddlyWiki in a wider range of browsers than Wiki on a Stick supports: Firefox, Internet Explorer, Safari, and Camino among them. On top of that, you can extend TiddlyWiki with several plugins. See the TiddlyWiki Web site for more information.

Conclusion

Wikis are great tools for capturing and sharing personal information. For personal use, you don't need to worry about maintaining a Web server or database. You can start using these personal wikis almost immediately, without getting your hands dirty configuring and maintaining the supporting software.

In Defense of Wiki Markup

They re-invented the bicycle and did it badly. Compare with a report on Auctionbytes: eBay Blogs will enable sellers to more efficiently market their products, while eBay Wikis collect fact-based articles written and maintained by eBay Community members. Both tools will be launched at the eBay Live conference in Las Vegas June 13 - 15.

Working off the Auctionbytes story, I dug a little deeper and found more details, including a tag/search platform, the eBay Blog help pages, and the wiki information pages. In addition, judging from the links on the help pages, Skype integration is coming soon too.

WikiBlogIntegration (AndrewSW Pages): one simple way of integrating wikis and blogs

Blogs and Wikis can be integrated easily if:

I appear to have gotten this to work with B2 and MoinMoin. But I had to hack MoinMoin quite a bit to get the mail in a reasonable format whereby I'd actually want it blogged.

I now also have a hack to B2 that automatically inserts links to wiki pages (with wiki names) that exist, but only if not in between angle brackets - I don't want to have links inside of links as nested anchors haven't been allowed for a long time in HTML land.

WikyBlog - A Wiki - Blog Hybrid

wikis-the-insiders-guide

Wiki Wednesdays
SocialText picked up and implemented the idea of Wiki Wednesdays. On the first Wednesday of each month there is usually a meeting held in a few locations in the USA and Canada, and one or two locations in Europe. People interested in the wiki technology and approach get together to share their experiences and ideas, or make contacts to get help from the experts. You can find out about the latest and recent events here.


Putting MediaWiki to use in an organization

NewsForge

Imagine how useful it would be to have an online knowledge base, created by key people within your organization, that can easily be updated. That's the promise of a wiki -- a Web application that "allows users to easily add, remove, or otherwise edit all content, very quickly and easily," as Wikipedia, perhaps the best-known wiki, puts it. Why not bring the benefits of a wiki to your organization?

If you're sold on the concept, the first thing you need to do is pick the software that you're going to use for your wiki. If you want to hunt around to find out what's out there, a good place to start is Wikipedia's wiki software wiki. If you say, "I'll use whatever Wikipedia is using," that'll be MediaWiki.

MediaWiki installation is easy -- either follow the instructions on MediaWiki's site or read "The open source wiki behind Wikipedia." Install MediaWiki on a server that can be seen by everyone in your organization. You'll then be able to access it from a Web browser by typing in something like http://servername/wiki.

With a brand new wiki there's absolutely no security or control built in. Anyone who can access the Web page will be able to add pages, comments, and discussions. We're going to stop that. First, add a new user account -- you'll need to be able to log on once you've disabled anonymous access. Next, find the LocalSettings.php file in your wiki directory. Add the following lines:

$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['*']['edit'] = false;
$wgShowIPinHeader = false;

With that done, anyone on the network will be able to view the wiki, but only someone with an account will be able to create or edit pages.
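If you also want to keep anonymous visitors from reading pages entirely (a scenario the original article does not cover, so treat this as an assumption about your needs), MediaWiki supports read restrictions in the same file; the login page must stay readable so that people can still sign in:

$wgGroupPermissions['*']['read'] = false;        // anonymous users may not read pages
$wgWhitelistRead = array('Special:Userlogin');   // ...except the login page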

You may also want to enhance the wiki pages by adding PHP functionality. To do this, add a function into the includes/Setup.php file:

function ParsePHPTag($Content)
{
 global $wgOut;
 // Pages with dynamically generated content should not be cached by the client
 $wgOut->enableClientCache(false);
 // Capture everything the evaluated code prints into a buffer
 ob_start();
 eval($Content);
 $Result = ob_get_contents();
 ob_end_clean();
 return($Result);
}
// Route the contents of <php>...</php> tags to ParsePHPTag()
$wgParser->setHook('php','ParsePHPTag');

Then, if you want to use PHP in any of your wiki pages, don't use the normal <?PHP ... ?> tags; instead use <PHP> ... </PHP>. Keep in mind that this hook eval()s whatever page authors type, so anyone who can edit the wiki can run arbitrary PHP on your server; enable it only where every editor is trusted.

Now you can even access data in a MySQL database by adding code like this to a wiki page:

<PHP>

// Connect to the database server and select the database
$db = mysql_connect("localhost", "userid", "userpassword");
mysql_select_db("cstellar",$db);

// COUNT(*) is aliased as "stars" so it can be fetched by name below
$result = mysql_query("SELECT COUNT(*) stars FROM chyg85",$db);

printf("Records: %s\n", mysql_result($result,0,"stars"));
</PHP>

In this example, all I'm doing is connecting to a database and counting the number of records in a table. Obviously you'd have to use your own database and user details.

MediaWiki is based on PHP, and so as well as being able to use any PHP functionality within a page, you can actually build your own extensions to MediaWiki. If you're interested in doing that, have a look at MediaWiki's documentation on extending wiki markup.

While you're setting parameters, look at your php.ini file. In php.ini, the line session.gc_maxlifetime sets the length of time (in seconds) that a PHP session is allowed to exist, like this:

session.gc_maxlifetime = 1440

This means that if you're editing the wiki then you must click on the "Save page" button at least once every 24 minutes or risk losing your work. You can increase the time to a value that will suit you better -- say to one hour, or 3600 seconds.
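For example, the raised value would look like this in php.ini:

session.gc_maxlifetime = 3600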

At this point you may be saying, "There's nothing here that I can't do with a text editor, an ordinary Web server, and giving group access to the Web files." True -- so let's see where the wiki comes into its own. Try editing the Main page, save the changes, and then click on the History tab. You'll see that MediaWiki tracks who made all changes and when. You can compare the differences between different versions. In one fell swoop you've got yourself a document management system as well as a potential in-house knowledge base.

"Aha!" I hear you say, "if this is just operating in a browser then how can I do spell check or word counts? What about formating?" If you use Firefox as your browser, you can add Firefox extensions to implement those functions. If you're using Firefox 1.5.x, install Spellbound dev and restart Firefox. When you then try editing one of your wiki pages, you'll find that misspelled words are underlined in red. Right-clicking in any editing areas (text boxes, for example) will allow you to display the spell check front end. Once there then it's just like spell checking in any other application.

It's just as easy to get a word count going; this time use roachfiend.com's Word Count. Again, don't forget to restart Firefox after installing the extension. However, the word count doesn't work within the text editing areas. To get around that problem, click MediaWiki's "Show Preview" button to see your work displayed as a normal Web page. You can then select any area of the text, right-click on it, and you'll see that a "Word Count" function is available. Click on it to see the number of words in a message box.

Finally, you can install a WYSIWYG HTML editor called Xinha Here! Both the spell check and word count extensions also work in the Xinha window.

With MediaWiki set up, you're ready to create your knowledge base; I can't help you there, it's all up to you. MediaWiki and the Firefox extensions have enhanced the way that I do my day-to-day work, and I'm sure that they can revolutionize the flow of information and knowledge around your organization.

[Jun 21, 2006] Assorted findings:

[Feb 27, 2006] Using Wikis and Blogs to Ease Administration By Ti Leggett

2006-02-27 | Linux Journal

This tutorial on TWiki and WordPress shows how wikis and blogs can be useful for system administration and documentation. System administration can be like sailing a ship. You must keep your engines running smoothly, keep your crew and the harbors notified and up to date and also maintain your Captain's log. You must keep your eye on the horizon for what is coming next. Two technologies have emerged over the past few years that could help keep you on course, wikis and blogs.

Maintaining Good Documentation

I find that one of the most difficult aspects of system administration is keeping documentation accurate and up to date. Documenting how you fixed a pesky problem today will help you remember how to fix it months later when it occurs again. If you ever have worked with others, you realize how critical good documentation is. Even if you are the only system administrator, you still will reap the benefits of good documentation, even more so if another sysadmin is ever brought on board.

Some goals of a good documentation system should be:

Unfortunately, keeping your documentation up to date can be a full-time job in itself. Documenting, though not a very glamorous task, certainly will pay off in the long run.

Why a Wiki?

This is where a wiki comes in. From Wikipedia: "a wiki is a type of Web site that allows users to add and edit content and is especially suited for constructive collaborative authoring."

What this means is a wiki allows you to keep and edit your documentation in a central location. You can access and edit that documentation regardless of the platform you are using. All you need is a Web browser. Some wikis have the ability to keep track of each revision of a changed document, so you can revert to a previous version if some errant changes are made to a document. The only obstacle a new user must overcome is learning the particular markup language of your wiki, and sometimes even this is not completely necessary.

One of a wiki's features is also one of its drawbacks. Wikis are pretty free flowing, and although this allows you to concentrate on getting the documentation written quickly, it can make organization of your wiki rapidly spiral out of control. Thought needs to be put into how the wiki is organized, so that topics do not get stranded or lost. I have found that making the front page a table of contents of all the topics is very handy. However you decide to organize your wiki, make sure it is well understood by everyone else. In fact, a good first document might be the policy describing the organization of the wiki!

TWiki

There are several open-source wikis available, such as MediaWiki [see Reuven M. Lerner's article on page 62 for more information on MediaWiki] and MoinMoin, each with its own philosophy on markup and layout, but here we concentrate on TWiki. Some of TWiki's benefits are:

The most current stable release at this time is Cairo, or TWiki20040904. It was released, as the name suggests, on September 4, 2004, and it has been proven to be very stable. However, it does lack some of the features of the current beta release, Dakar, that I find to be very useful. The Dakar release we use here is TWikiRelease2005x12x17x7873beta.

Installing TWiki is relatively easy, but still needs work. I hope, as the beta progresses, we will see improvements in ease of installation and upgrading along with clearer documentation.

First, you must create the directory where you want to install TWiki, say /var/www/wiki. Next, untar the TWiki distribution in that directory. Then you must make sure that the user with rights to run CGI scripts (usually apache or www-data) owns all of the files and is able to write to all of them:

# install -d -o apache /var/www/wiki
# cd /var/www/wiki
# tar zxf /path/to/TWikiRelease2005x12x17x7873beta.tgz
# cp bin/LocalLib.cfg.txt bin/LocalLib.cfg
# vi bin/LocalLib.cfg lib/LocalSite.cfg
# chown -R apache *
# chmod -R u+w *

Now copy bin/LocalLib.cfg.txt to bin/LocalLib.cfg, and edit it. You need to edit the $twikiLibPath variable to point to the absolute path of your TWiki lib directory, /var/www/wiki/lib in our case. You also must create lib/LocalSite.cfg to reflect your specific site information.
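A sketch of the edited line in bin/LocalLib.cfg (the copied template already contains the variable with a placeholder path):

$twikiLibPath = "/var/www/wiki/lib";   # absolute path to the TWiki lib directory

And here is a sample of what might go into LocalSite.cfg: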

# This is LocalSite.cfg. It contains all the setups for your local
# TWiki site.
$cfg{DefaultUrlHost} = "http://www.example.com";
$cfg{ScriptUrlPath} = "/wiki/bin";
$cfg{PubUrlPath} = "/wiki/pub";
$cfg{DataDir} = "/var/www/wiki/data";
$cfg{PubDir} = "/var/www/wiki/pub";
$cfg{TemplateDir} = "/var/www/wiki/templates";
$TWiki::cfg{LocalesDir} = '/var/www/wiki/locale';

Here is a sample section for your Apache configuration file that allows this wiki to run:

ScriptAlias /wiki/bin/ "/var/www/wiki/bin/"
Alias /wiki "/var/www/wiki"
<Directory "/var/www/wiki/bin">
    Options +ExecCGI -Indexes
    SetHandler cgi-script
    AllowOverride All
    Allow from all
</Directory>
<Directory "/var/www/wiki/pub">
    Options FollowSymLinks +Includes
    AllowOverride None
    Allow from all
</Directory>
<Directory "/var/www/wiki/data">
    deny from all
</Directory>
<Directory "/var/www/wiki/lib">
    deny from all
</Directory>
<Directory "/var/www/wiki/templates">
    deny from all
</Directory>

TWiki comes with a configure script that you run to set up TWiki. This script is used not only on initial install but also when you want to enable plugins later. At this point, you are ready to configure TWiki, so point your browser to your TWiki configure script, http://www.example.com/wiki/bin/configure. You might be particularly interested in the Security section, but we will visit this shortly. Until you have registered your first user, you should leave all settings as they are. If the configure script gives any warnings or errors, you should fix those first and re-run the script. Once you click Next, you are prompted to enter a password. This password is used whenever the configure script is run in the future to help ensure no improper access.

Once you have completed the configuration successfully, it is time to enter the wiki. Point your browser to http://www.example.com/wiki/bin/view, and you are presented with the Main web. In the middle of the page is a link for registration. Register yourself as a user. Be sure to provide a valid e-mail address as the software uses it to validate your account. Once you have verified your user account, you need to add yourself to the TWikiAdminGroup. Return to the Main web and click on the Groups link at the left, and then choose the TWikiAdminGroup. Edit this page, and change the GROUP variable to include your new user name:

   Set GROUP = %MAINWEB%.TiLeggett
   Set ALLOWTOPICCHANGE = %MAINWEB%.TWikiAdminGroup

The three blank spaces at the beginning of each of those lines are critical.

These two lines add your user to the TWikiAdminGroup and allow only members of the TWikiAdminGroup to modify the group. We are now ready to enable authentication for our wiki, so go back to http://www.example.com/wiki/bin/configure. Several options provided under the Security section are useful. You should make sure the options {UseClientSessions} and {Sessions}{UseIPMatching} are enabled. Also set the {LoginManager} option to TWiki::Client::TemplateLogin and {PasswordManager} to TWiki::Users::HtPasswdUser. If your server supports it, you should set {HtPasswd}{Encoding} to sha1. Save your changes and return to the wiki. If you are not logged in automatically, there is a link at the top left of the page that allows you to do so.
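For reference, here are the same choices expressed in lib/LocalSite.cfg syntax -- a sketch only, since on Dakar the configure script normally writes these entries for you:

$TWiki::cfg{UseClientSessions} = 1;
$TWiki::cfg{Sessions}{UseIPMatching} = 1;
$TWiki::cfg{LoginManager} = 'TWiki::Client::TemplateLogin';
$TWiki::cfg{PasswordManager} = 'TWiki::Users::HtPasswdUser';
$TWiki::cfg{HtPasswd}{Encoding} = 'sha1';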

Now that you have authentication working, you may want to tighten down your wiki so that unauthorized people do not turn your documentation repository into an illicit data repository. TWiki has a pretty sophisticated authorization system that is tiered from the site-wide preferences all the way down to a specific topic. Before locking down the Main web, a few more tasks need to be done. Once only certain users can change the Main web, registering new users will fail. That is because part of the user registration process involves creating a topic for that user under the Main web. Dakar has a user, TWikiRegistrationAgent, that is used to do this. From the Main web, use the Jump box at the top left to jump to the WebPreferences topic. Edit the topic to include the following four lines and save your changes:

   Set ALLOWTOPICRENAME = %MAINWEB%.TWikiAdminGroup
   Set ALLOWTOPICCHANGE = %MAINWEB%.TWikiAdminGroup
   Set ALLOWWEBRENAME = %MAINWEB%.TWikiAdminGroup
   Set ALLOWWEBCHANGE = %MAINWEB%.TWikiAdminGroup, %MAINWEB%.TWikiRegistrationAgent

This allows only members of the TWikiAdminGroup to make changes or rename the Main web or update the Main web's preferences. It also allows the TWikiRegistrationAgent user to create new users' home topics when new users register. I have included a patch that you must apply to lib/TWiki/UI/Register.pm as well. The patch follows, but you can also download the patch from the LJ FTP site (see the on-line Resources):

--- lib/TWiki/UI/Register.pm.orig	2006-01-04 01:34:48.968947681 -0600
+++ lib/TWiki/UI/Register.pm	2006-01-04 01:35:48.999652157 -0600
@@ -828,11 +828,12 @@
     my $userName = $data->{remoteUser} || $data->{WikiName};
     my $user = $session->{users}->findUser( $userName );
+    my $agent = $session->{users}->findUser( $twikiRegistrationAgent );
     $text = $session->expandVariablesOnTopicCreation( $text, $user );
     $meta->put( 'TOPICPARENT', { 'name' => $TWiki::cfg{UsersTopicName}} );
-    $session->{store}->saveTopic($user, $data->{webName},
+    $session->{store}->saveTopic($agent, $data->{webName},
                                  $data->{WikiName}, $text, $meta );
     return $log;
 }

Otherwise, new users' home directories will fail to be created and new user registration will fail. Once you have verified that the Main web is locked down, you should do the same for the TWiki and Sandbox webs.

When you are done configuring TWiki, you should secure the files' permissions:

# find /var/www/wiki/ -type d -exec chmod 0755 {} ';'
# find /var/www/wiki/ -type f -exec chmod 0400 {} ';'
# find /var/www/wiki/pub/ -type f -exec chmod 0600 {} ';'
# find /var/www/wiki/data/ -type f -exec chmod 0600 {} ';'
# find /var/www/wiki/lib/LocalSite.cfg -exec chmod 0600 {} ';'
# find /var/www/wiki/bin/ -type f -exec chmod 0700 {} ';'
# chown -R apache /var/www/wiki/*

As I mentioned before, TWiki has a plugin system that you can use. Many plugins are available from the TWiki Web site. Be sure the plugins you choose have been updated for Dakar before you use them.

Keeping Your Users in the Know

One important aspect of system administration that is sometimes overlooked is keeping users informed. Most users like to know when there is new functionality available or when resources are down or not available. Not only does it make users happier to be kept informed, but it also can make your life easier as well. The last thing you want to do when the central file server is down is reply to users' questions about why they cannot get to their files. If you have trained your users to look at a central location for status of the infrastructure first, all you have to do after notification of a problem is post to this central place that there is a problem. Mailing lists also are good for this, but what if the mail server is down? Some people, for instance your boss or VP of the company, might like to know what the status is of things as they happen. These updates might not be suitable to send out to everyone daily via e-mail. You could create yet another mailing list for these notifications, but you also might consider a blog.

If you are not familiar with a blog, let us refer back to Wikipedia: "a blog is a Web site in which journal entries are posted on a regular basis and displayed in reverse chronological order."

The notion of a blog has been around for centuries in the form of diaries, but blogs recently have had an explosion on the Internet. Many times a blog is started as someone's personal journal or as a way to report news, but blogs can be extremely useful for the sysadmin.

Blogs can help a sysadmin give users an up-to-the-minute status of what they are doing and what the state of the infrastructure is. If you faithfully update your blog, you easily can look back on what you have accomplished so you can make your case for that raise you have been hoping for. It also will help you keep track of what your coworkers are doing. And, with many blog software packages providing RSS feeds, users can subscribe to the blog and be notified when there are new posts.

... ... ...

Wrapping Up

I hope that after this whirlwind tour of wikis and blogs you have come to see how they can be beneficial to help your shop run a smoother ship and provide your users with all the information they might want. Just as there are many different sails to keep your ship sailing, there are many different wiki and blog software packages out there. The right package for you is the one that keeps your users happy and you productive.

Resources for this article: www.linuxjournal.com/article/8832.

Ti Leggett ([email protected]) is a full-time system administrator. When he's not working, he might be found playing his Gibson B-25 or doing some home improvements or wood working.

Project details for ReciPants

freshmeat.net

ReciPants is a Web-based recipe manager that supports Postgres, MySQL, and Oracle databases. It features searching, categories, exporting, scaling, emailing recipes, password reminders, secure user cookies, internationalization, and fully customizable templated output.

Which Open Source Wiki Works For You

Nov 4, 2004 | ONLamp.com

Kwiki

The Kwiki motto is a "A Quickie Wiki that's not Tricky." Installing it is pretty straightforward for a site you admin: just install the Perl package (from CPAN or elsewhere), and then type kwiki-install in a CGI-served directory to create an instance. Installing Kwiki on a server you are not an admin of is more complicated but doable.
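A minimal install sketch, assuming a CGI-served directory at /var/www/cgi-bin/kwiki (the path is illustrative; only the cpan and kwiki-install steps come from the review):

# cpan Kwiki
# mkdir /var/www/cgi-bin/kwiki
# cd /var/www/cgi-bin/kwiki
# kwiki-install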

I found the Kwiki markup underpowered. Some things are impossible with it, such as hyperlinking an arbitrary piece of text to an email address (mail fooish). I also could not find a way to link to a Wiki page with link text different from the Wiki page name (like this link to LinkedWikiWikiPage). There is also no support for attachments, no HTML markup as an alternative to the Wiki markup, etc. It is disappointing.

Kwiki can use either RCS or Subversion for version control. (Those who wish to use Subversion should check out the corrected Kwiki version, as the CPAN Kwiki does not work with current Subversion releases.) Kwiki is easily customizable and has several enhancements available. Generally, however, they are less powerful than TWiki's.

All in all, Kwiki is easy to install and customize, but its formatting rules are lacking.

... ... ...

UseModWiki

UseModWiki is a Wiki engine written in Perl. Wikipedia famously ran on UseModWiki before re-implementing its current engine. Other sites also use UseModWiki.

UseModWiki is very simple to set up and upgrade. It has a rich syntax, and allows for arbitrary characters in page names. It also supports using some HTML tags instead of the WikiWiki markup. It has other nice features, including search, a list of recent changes, and page history.

For simple Wikis, UseModWiki is a very good choice. I recommend choosing between it and PmWiki based on the feature list of both Wikis.

Recommended Links


New findings

Format translators (open suse)

List of wiki software - Wikipedia, the free encyclopedia

Random Findings

deplate 0.6a by tlink

Oct 29th 2004

About: deplate is a tool for converting documents written in an unobtrusive, wiki-like markup to LaTeX, DocBook, HTML, or "HTML slides". It supports embedded LaTeX code, footnotes, citations, bibliographies, automatic generation of an index, etc. In contrast to many wiki engines, it is made for "offline" use as a document preparation tool. In this respect it is similar to tools like aft or txt2tags. It aims to be modular and easily extensible. It is the accompanying converter for the Vim viki plugin.

Changes: Two bugs that prevented it from running on an OS with case-sensitive file names or with the Win32 exe have been fixed. There are minor amendments.

Faq-O-Matic

The Faq-O-Matic is a CGI-based system that automates the process of maintaining a FAQ (or Frequently Asked Questions list). It allows visitors to your FAQ to take part in keeping it up-to-date. A permission system also makes it useful as a help-desk application, bug-tracking database, or documentation system. Jon wrote an article about the FAQ-O-Matic that appeared in the USENIX ;login: newsletter: http://www.usenix.org/publications/login/1998-6/faq.html.

This documentation itself is, naturally, maintained with Faq-O-Matic. Hence the weird title. If you see anything that can use updating, please do fix it! If you just want to play around, check out the Playground.

The Users' Guide tells new users what a FAQ-O-Matic is, how to read it, and how to contribute to it.

The Administrators' Guide tells FAQ administrators how to download, install and maintain a FAQ-O-Matic. If you want to start your own FAQ-O-Matic, look here.

The Playground is a place where anyone can experiment with the FAQ-O-Matic by creating their own answers.

The List Of Faq-O-Matics is a list of websites that use FAQ-O-Matic.

Here are Postcards I have received from people who use FAQ-O-Matic.

This is the Faq-O-Matic Sourceforge page: http://sourceforge.net/projects/faqomatic

Jon Howell is a graduate student working with David Kotz at Dartmouth College. His research is on single-system-image operating environments that span administrative domains. He also enjoys robotics, web glue, and kernel tweaking.

What is the best way to maintain a Frequently Asked Questions list? Traditional FAQs are maintained by hand and often become outdated as the maintainer loses interest in the task. Mailing list archives can be helpful, but are often too disorganized. When I found myself answering the same FAQs about Linux for PowerPC and MkLinux last year, I faced this very problem. I am far too lazy to commit to maintaining a FAQ, but the mailing list archives were not significantly reducing the load of redundant questions.

Solution Design

The FAQ-O-Matic is a Web-editable FAQ designed to offer a middle ground. Because the general public can edit it, it is less likely to become neglected and stale like a manually maintained FAQ. Because changes are submitted where the FAQ is read, one can be rather lazy and still contribute to the FAQ. No one person has the responsibility of maintaining the entire document.

Because a FAQ-O-Matic is topically and hierarchically organized, browsing solutions is easier than it is in mailing list archives. Queries on mailing list archives can return matches for old, outdated information and miss newer answers. The topical organization of a FAQ-O-Matic helps avoid this problem as well.

A search function makes FAQ-O-Matic as accessible as a mailing list archive. Another function lists recently changed items, so users can check back for changes if they did not find a satisfying answer the first time they looked. There is a command to show entire categories in one HTML file, to facilitate printing or export of the FAQ.

How It Works in Practice

I launched the first FAQ-O-Matic by seeding it with about 60 or 70 answers gleaned from recent list postings. Although this opposed my laziness philosophy, I knew that I would not be responsible for keeping the answers up to date. Then I began advertising it by answering questions with URLs to the FAQ-O-Matic.

One problem with the initial implementation was that answers were identified by their location in the topic hierarchy. So if you sent out a URL to a FAQ-O-Matic answer and the database was subsequently reorganized, that URL would go sour.

I initially thought allowing people to submit questions without answers would help define the structure of the FAQ by reserving spaces for answers when they became available. Instead, people who were too lazy to search would post poorly considered questions in inappropriate categories.

The submission page asked users to leave an email address with their comment, but people often forgot or inserted text between previous text and its attribution. Furthermore, although the server uses RCS to protect against wholesale vandalism, there was no way to trace subtle, intentional corruption of the database.

The FAQ-O-Matic allowed the entry of HTML tags, so users could supply links and formatting information for their answers. However, other than links, HTML was rarely used to good effect. Instead, it often made for inconsistent appearance as people appended to existing answers and as HTML tags fought with the formatting generated by the CGI script. Furthermore, code segments pasted into the FAQ-O-Matic would mysteriously lose < and & symbols.

Finally, I found that I had to put in a certain amount of effort moving answers around to keep them organized as new answers showed up. This was compounded by the difficulty of performing this sort of administration on the first implementation of FAQ-O-Matic.

Version 2

Over the summer, I rewrote the FAQ-O-Matic to address these problems. First, each answer is now assigned a permanent ID number. This solves the sour URL problem and also provides a facility for "see also" links inside the FAQ.

I posted a policy disallowing questions without answers, which trivially solved the second problem.

The new version has an authentication scheme that verifies email addresses by sending a secret validation code to the given address. Thus each submission is attributed to a real email address, and intentional corruption, once noticed, can be traced and rolled back.

FAQ-O-Matic 2 no longer allows HTML tags; they are displayed as entities (&lt;). This prevents code from becoming corrupted and enforces a uniform (if bland) appearance. Links are supported heuristically by detecting things that look like links (http://...) and generating HTML links on the fly. Internal links are supported with a special URL that begins with "faqomatic:".

Version 2 also has support for reorganizing answers and categories from the Web interface. This facility might allow a Web user to moderate a section of the FAQ and care for its organization. Moderators can request that they receive mail whenever any answer in their area is changed, minimizing the effort associated with the moderation task.

The 1998 LinuxPPC CD was announced in January, prompting an estimated 4,500 people to visit the FAQ-O-Matic on <www.dartmouth.edu> in one day. Because every request required service by a Perl CGI, the memory pressure overloaded the server, and the FAQ-O-Matic had to be throttled. In response to that event, version 2.6 adds a server-side HTML cache, so that people who are only reading the FAQ receive HTML directly from the Web server, without the cost of invoking the CGI.

Other Uses

Because FAQ-O-Matic has an authentication scheme, it made sense to give it flexible authorization as well. The FAQ can be configured to be very open, not even requiring mail-back email secrets, to encourage the shy, lazy, or anonymous contributor at the expense of accuracy of attributions.

Alternatively, it can be set to allow only assigned moderators to modify the FAQ. In this arrangement, it is suitable for use as a help desk database: only help desk operators can modify the database, but it is available for all to read.

Numbers

The Linux on PowerPC FAQ-O-Matic has been available for 15 months. In that time, about 75,000 different people (IP addresses) have seen it, and it has received 1,500 submissions. On average, visitors access it about ten times. A few dozen people claim to be running their own FAQ-O-Matics, some for internal projects, others for Internet-accessible sites.

Conclusion

The FAQ-O-Matic has turned out to be a successful system for documenting the Linux on PowerPC projects. It is more organized than a mailing list archive, but avoids going stale as traditional FAQs often do. It allows a division of labor in maintaining both answers and the overall organization of the FAQ. And it has access control features that make it suitable for other applications. To try it out, visit the FAQ-O-Matic at <http://www.dartmouth.edu/cgi-bin/cgiwrap/jonh/faq.pl>.


