Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

C language


The Tao gave birth to machine language.
Machine language gave birth to the assembler.
The assembler gave birth to the compiler.
Now there are ten thousand languages.
Each language has its purpose, however humble.
Each language expresses the Yin and Yang of software.
Each language has its place within the Tao.
But do not program in COBOL if you can avoid it.

The Tao of Programming

All through my life, I've always used the programming language
that blended best with the debugging system and operating system that I'm using. If I had a better debugger for language X, and if X went well with the operating system, I would be using that.

Donald Knuth

Note: The material on this page somewhat intersects with the C++ page, as I consider C++ to be mainly "a better C" and I am deeply skeptical about the OO approach. Some materials that are missing from this page can probably be found on the C++ page.

C is often referred to as a ``high-level assembly language.''  That means that it is not optimal as a first programming language.  The ability to perform low-level operations needed for systems programming is actually the most distinctive feature of the language; otherwise C looks like a subset of PL/1.  I believe that the language's low-level features, and the way pointers were integrated into an Algol-style language, were a very important invention for their time.  Essentially C became the first widespread machine-independent system programming language.

As Alex Stepanov aptly noted in his Dr. Dobb's Journal interview:

Let's consider now why C is a great language. It is commonly believed that C is a hack which was successful because Unix was written in it. I disagree. Over a long period of time computer architectures evolved, not because of some clever people figuring how to evolve architectures---as a matter of fact, clever people were pushing tagged architectures during that period of time---but because of the demands of different programmers to solve real problems. Computers that were able to deal just with numbers evolved into computers with byte-addressable memory, flat address spaces, and pointers. This was a natural evolution reflecting the growing set of problems that people were solving. C, reflecting the genius of Dennis Ritchie, provided a minimal model of the computer that had evolved over 30 years. C was not a quick hack. As computers evolved to handle all kinds of problems, C, being the minimal model of such a computer, became a very powerful language to solve all kinds of problems in different domains very effectively.

This is the secret of C's portability: it is the best representation of an abstract computer that we have. Of course, the abstraction is done over the set of real computers, not some imaginary computational devices. Moreover, people could understand the machine model behind C. It is much easier for an average engineer to understand the machine model behind C than the machine model behind Ada or even Scheme. C succeeded because it was doing the right thing, not because of AT&T promoting it or Unix being written with it.

Right now C is getting a second life as the lower-level language for "dual language" programming (in combination with scripting languages). The TCL+C dual-language programming technique is especially easy to learn. I strongly advise any serious C programmer to learn TCL; otherwise you will deprive yourself of a lot of important concepts and methods of program development, and probably will never be as productive as you can be.

C is a simple and elegant language that introduced a lot of new ideas into language design.

While borrowing program structure and syntactic flavour from PL/1 and the pointer concept from BCPL, C really elegantly integrated pointers into a PL/1-style framework, provided a practical set of high-level control structures, and introduced shortcuts for increment/decrement-style operations. As Donald Knuth remarked:

The way C handles pointers, for example, was a brilliant innovation; it solved a lot of problems that we had before in data structuring and made the programs look good afterwards. C isn't the perfect language, no language is, but I think it has a lot of virtues, and you can avoid the parts you don't like. I do like C as a language, especially because it blends in with the operating system (if you're using UNIX, for example).

All through my life, I've always used the programming language that blended best with the debugging system and operating system that I'm using. If I had a better debugger for language X, and if X went well with the operating system, I would be using that.

And believe me, despite the existence of C++ (and partially due to it ;-), C will be around for a long time.  As Dennis Ritchie aptly put it in one of his interviews:

LinuxWorld.com: C and Unix have exhibited remarkable stability, popularity, and longevity in the past three decades. How do you explain that unusual phenomenon?

Dennis Ritchie: Somehow, both hit some sweet spots. The longevity is a bit remarkable -- I began to observe a while ago that both have been around, in not astonishingly changed form, for well more than half the lifetime of commercial computers. This must have to do with finding the right point of abstraction of computer hardware for implementation of the applications.

The basic Unix idea -- a hierarchical file system with simple operations on it (create/open/read/write/delete with I/O operations based on just descriptor/buffer/count) -- wasn't new even in 1970, but has proved to be amazingly adaptable in many ways. Likewise, C managed to escape its original close ties with Unix as a useful tool for writing applications in different environments. Even more than Unix, it is a pragmatic tool that seems to have flown at the right height.

Both Unix and C gained from accidents of history. We picked the very popular PDP-11 during the 1970s, then the VAX during the early 1980s. [See Resources for links to both.] And AT&T and Bell Labs maintained policies about software distribution that were, in retrospect, pretty liberal. It wasn't today's notion of open software by any means, but it was close enough to help get both the language and the operating system accepted in many places, including universities, the government, and in growing companies.

LinuxWorld.com: Five or ten years from now, will C still be as popular and indispensable as it is today, especially in system programming, networking, and embedded systems, or will newer programming languages take its place?

Dennis Ritchie: I really don't know the answer to this, except to observe that software is much harder to change en masse than hardware. C++ and Java, say, are presumably growing faster than plain C, but I bet C will still be around. For infrastructure technology, C will be hard to displace. The same could be said, of course, of other languages (Pascal versions, Ada for example). But the ecological niches you mention are well occupied.

What is changing is that higher-level languages are becoming much more important as the number of computer-involved people increases. Things that began as neat but small tools, like Perl or Python, say, are suddenly more central in the whole scheme of things. The kind of programming that C provides will probably remain similar absolutely or slowly decline in usage, but relatively, JavaScript or its variants, or XML, will continue to become more central. For that matter, it may be that Visual Basic is the most heavily used language around the world. I'm not picking a winner here, but higher-level ways of instructing machines will continue to occupy more of the center of the stage.

But C is notoriously difficult to learn as a first language. Other things being equal, the best way to learn C is to learn assembly language first, or to learn the two in parallel. This way you will probably find C constructs, and especially pointer arithmetic, quite natural.
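
For example, the identity that trips up most newcomers -- a[i] is defined as *(a + i), with scaling by the element size done implicitly -- is quite natural to anyone who has computed addresses by hand in assembler. A minimal illustration:

     #include <stdio.h>

     int main(void)
     {
         int a[] = { 10, 20, 30 };
         int *p = a;                        /* the array name "decays" to a pointer */

         printf("%d %d\n", a[1], *(p + 1)); /* prints "20 20": the same element */
         printf("%d\n", *(a + 2));          /* 30: the same arithmetic works on the array name */
         return 0;
     }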

If you have never programmed in assembly language, you may be frustrated by the syntax and the convoluted semantics of pointer arithmetic, the treatment of array names as pointers, and so on. The main problem here is that the language was designed for people who were already able to program in assembler. This language was designed by and for the writers of an operating system (Unix), and that is noticeable. In any case, you should understand that C was designed by accomplished system programmers for accomplished programmers, so do not expect too much help in finding errors either from the compiler or from the runtime system (which is non-existent in pure C, and not very helpful in C++). Both space and time efficiency and the ability to stay close to machine language constructions were necessary on the 24K PDP-11 where it was first implemented.  See The Development of the C Language by Dennis Ritchie for more historical information.

You will be much better off if you have already taken a course in some classic programming language like Basic, Fortran or Pascal (Turbo Pascal is just great as a first language, with Modula-2 as a logical continuation). This way you will be able to understand the language by comparing the C way of doing things with the Pascal way of doing things that you already know.

The problem of learning C at high schools and universities is often complicated by teachers ;-). Many teachers have forgotten the problems they faced when they studied the language themselves, and try to feed students as much material as possible in the very first course. IMHO the attempts to teach both C and C++ in a one-semester course are really pretty close to a crime. No, it's worse than a crime -- it is a blunder ;-(. In any case, after such a course usually more than 50% of students hate programming in general and C in particular... Again I would like to stress that Pascal (in its Turbo Pascal incarnation) is a much better first language, but if you are unfortunate enough to have C as your first language, try to slow down and spend the first seven weeks without pointers and structures -- the language will be much better understood if you do not rush into complex constructs and master a reasonable subset before jumping into complex stuff. Bad books can also complicate things considerably. Please read the [alt.comp.lang.learn.c-c++] FAQ for some useful recommendations on how to avoid typical pitfalls and problems in C.

Due to the presence of the preprocessor, the diagnostics of lexical and syntax errors in C compilers are exceptionally bad, and to avoid frustration you had better write with a minimum number of mistakes. That means having a good textbook and consulting it often. It also means that you need to check the program manually for typical mistakes (like missing semicolons, "=" instead of "==" in comparisons, etc.).  You can actually save a lot of time this way.

For me, a list of "gotchas" -- errors that I had already made and that took me considerable time to discover -- was really helpful. You can use some of the lists available on the Web as a starting point, but you will be much better off creating your own from scratch. For example, due to my previous many years of experience with PL/1, I still sometimes use "=" instead of "==" in if statements and loops. This is an annoying error. See The Top 10 Ways to get screwed by the C programming language.
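
A minimal illustration of that particular gotcha, with one defensive idiom:

     #include <stdio.h>

     int main(void)
     {
         int rc = 0;

         if (rc = 1)          /* BUG: assigns 1 to rc; the condition is always true */
             printf("always taken\n");

         if (rc == 1)         /* correct: comparison */
             printf("taken only when rc is 1\n");

         /* Defensive habit: put the constant on the left, so the slip
            "1 = rc" becomes a compile-time error instead of a silent bug. */
         if (1 == rc)
             printf("also taken\n");

         return 0;
     }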

Also annoying and difficult to uncover were cases when I forgot to place the & operator before the name of a variable and passed a value instead of an address. The C method of representing strings as arrays of characters terminated by a null was pretty interesting in the 1970s, but it lost much of its appeal on computers with several gigabytes of memory. The necessity of having a null at the end of the string leads to subtle errors. That's why sometimes you will see recommendations like the one in the SDM C Style Guide:

4.7 Standard: Explicit +1 in String Length Declaration for '\0'

Character arrays used as strings (i.e., to hold ASCII text, terminated by a null character) should have a defined length that explicitly includes the "+ 1" character for the null string terminator.

   #define NAME_LEN (20 + 1)  /* parenthesized so the macro stays safe inside larger expressions */
   char name[NAME_LEN];

Style is important too. One needs (no, actually one should) to use indent or another prettyprinter -- they are really important, as they simplify catching errors by creating a pattern of indentation that is distinctive from what you might expect. At the same time, one should not go overboard with style by enforcing upon oneself things that make no sense at all. Although I like the idea of using high-level control constructs whenever possible, I really hate structured programming pundits who teach avoiding GOTO, break, continue, and global variables no matter what -- a really religious attitude. These "structured programming fundamentalists" are not as bad as the (now mostly extinct) verification proponents a la Professor E.W. Dijkstra (who, BTW, originated the "considered harmful" cliche in his influential Go To Statement Considered Harmful paper, published in Communications of the ACM, Vol. 11, No. 3, March 1968, pp. 147-148), but they still try to make programming more difficult instead of trying to make it easier. As B. Kernighan noted in his famous Why Pascal is Not My Favorite Programming Language:

There is no 'break' statement for exiting loops. This is consistent with the one entry-one exit philosophy espoused by proponents of structured programming, but it does lead to nasty circumlocutions or duplicated code, particularly when coupled with the inability to control the order in which logical expressions are evaluated. Consider this common situation, expressed in C or Ratfor:

     while (getnext(...)) {
             if (something)
                     break
             /* rest of loop */
     }

With no 'break' statement, the first attempt in Pascal is

     done := false;
     while (not done) and (getnext(...)) do
             if something then
                     done := true
             else begin
                     rest of loop
             end

A scientific basis for these attempts to avoid certain constructs is completely non-existent -- so, as Donald Knuth pointed out, feel free to use them with no guilt feeling when you are implementing a higher-level control construct that is simply not available in a given language. For example, I like many recommendations of the SDM C Style Guide and just ignore the half-dozen recommendations based on "structured programming fundamentalism", like avoiding global variables, continue statements and GOTOs (see Donald Knuth's famous article Structured Programming with go to Statements for more details), but your mileage may vary.

Paradoxically, execution-time errors are easier to find, as most implementations have pretty decent debuggers with step-by-step execution. Borland's is probably the best, but I was really impressed by the debugger that is built into Visual Studio 5.0. Maybe it was written by the Borland people who were bought by Microsoft just before Borland itself was bought by Inprise ;-)

But the deficiencies of C are a logical continuation of its strong points, and there are a lot of strong points in C -- its popularity proves that it is one of the best system programming languages around. People are flexible enough to adapt to the language, and top programmers can produce up to a couple of thousand lines of code over a weekend. GCC is the main Linux compiler, and in order to use it one needs to learn C. So C is the key to open source software. One deficiency of C that I hate is the fact that it does not support coroutines, but there are libraries for GCC that can help (a sketch of the basic idea follows).
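
The flavor of coroutines in C can be conveyed by a minimal sketch in the spirit of Simon Tatham's well-known switch-based trick (illustrative only; real libraries such as GNU Pth or the POSIX ucontext family are more robust):

     #include <stdio.h>

     /* A generator that "yields" 1, 2, 3 across calls by saving its state
        in static variables and resuming via a case label inside the loop. */
     int next_value(void)
     {
         static int state = 0, i;
         switch (state) {
         case 0:
             for (i = 1; i <= 3; i++) {
                 state = 1;
                 return i;            /* yield i */
         case 1:;                     /* execution resumes here on the next call */
             }
         }
         state = 0;
         return -1;                   /* exhausted */
     }

     int main(void)
     {
         int v;
         while ((v = next_value()) != -1)
             printf("%d\n", v);       /* prints 1, 2, 3 */
         return 0;
     }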

It is silly to consider C a weaker programming language than C++. C++ is a decent language, but it is to a certain extent overkill and a much more complex programming language than C. It also magnifies the problems that exist in C, making debugging even more difficult. Contrary to what OO advocates claim, C++ is not always better than C.

Actually, in many cases one can do much better by using a tandem of TCL + C. TCL has a very simple structure. Each line starts with a command, such as dotask, followed by a number of arguments. Each command is implemented as a C function that is responsible for handling all the arguments (see the sketch below). See my TCL page. C programmers can also benefit from learning Expect. It is the greatest testing tool in existence. See also DejaGnu.
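
Here is a minimal sketch of how such a C-implemented command is wired up (assuming the Tcl 8.x development headers; the command name dotask and its doubling behavior are invented for illustration):

     #include <tcl.h>

     /* Implements a Tcl command "dotask n" in C; it simply returns n doubled. */
     static int DotaskCmd(ClientData cd, Tcl_Interp *interp,
                          int objc, Tcl_Obj *const objv[])
     {
         int n;
         if (objc != 2) {
             Tcl_WrongNumArgs(interp, 1, objv, "n");
             return TCL_ERROR;
         }
         if (Tcl_GetIntFromObj(interp, objv[1], &n) != TCL_OK)
             return TCL_ERROR;
         Tcl_SetObjResult(interp, Tcl_NewIntObj(2 * n));  /* "do the task" */
         return TCL_OK;
     }

     /* Registration: after this, a Tcl script can simply say "dotask 21". */
     int Mytask_Init(Tcl_Interp *interp)
     {
         Tcl_CreateObjCommand(interp, "dotask", DotaskCmd, NULL, NULL);
         return TCL_OK;
     }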

Some free Windows C compilers:

For Linux, in addition to gcc, there are several quality offerings too:

And last but not least: sometimes you will feel a block -- you cannot do it any more, no matter what. Here are several possible ways to overcome this condition:

  1. Do some intense physical activity for several hours like running, diving, bicycling, fast swimming, etc. Then take a shower and try it again... This is usually very helpful.
  2. Switch to another project for at least a couple of days...
  3. Go off on a tangent - sleep, read something that isn't specific to the source of the frustration, but still connected with programming, for example:
    1. Anything by Donald Knuth
    2. Back issues (the older the better ;-) of Byte, Dr. Dobbs', etc.
    3. Go to the library and browse programming books at random for several hours (This one is for those who like such an activity, like me ;-)

Good luck !

Dr. Nikolai Bezroukov



NEWS CONTENTS

Old News ;-)


[Sep 29, 2020] Learned C# first and that was a huge mistake. Programming got all exciting when I learned C

Sep 29, 2020 | www.youtube.com


Ai, 2 years ago:

Learned C# first and that was a huge mistake. Programming got all exciting when I learned C

[Nov 08, 2015] The Anti-Java Professor and the Jobless Programmers

Nick Geoghegan

James Maguire's article raises some interesting questions as to why teaching Java to first year CS / IT students is a bad idea. The article mentions both Ada and Pascal – neither of which really "took off" outside of the States, with the former being used mainly by contractors of the US Dept. of Defense.

This is my own, personal extension to the article – which I agree with – on why students should be taught C in first year. I'm biased though: I learned C as my first language and extensively use C or C++ in projects.

Java is a very high level language that has interesting features that make it easier for programmers. The two main points, that I like about Java, are libraries (although libraries exist for C / C++ ) and memory management.

Libraries

Libraries are fantastic. They offer an API and abstract a metric fuck tonne of work that a programmer doesn't care about. I don't care how the library works inside, just that I have a way of putting in input and getting expected output (see my post on abstraction). I've extensively used libraries, even this week, for audio codec decoding. Libraries mean not reinventing the wheel and reusing code (something students are discouraged from doing, as it's plagiarism, yet in the real world you are rewarded). Again, starting with C means that you appreciate the libraries more.

Memory Management

Managing your program's memory manually is a pain in the hole. We all know this after spending countless hours finding memory leaks in our programs. Java's inbuilt memory management is great – it saves me from having to do it. However, if I had learned Java first, I would assume (for a short amount of time) that all languages managed memory for you, or that all languages were shite compared to Java because they don't manage memory for you. Going from a "lesser" language like C to Java makes you appreciate the memory manager.

What's so great about C?

In the context of a first language to teach students, C is perfect. C is

Java is a complex language that will spoil a first year student. However, as noted, CS / IT courses need to keep student retention rates high. As an example, my first year class was about 60 people; final year was 8. There are ways to keep students, possibly with other, easier languages in the second semester of first year – so that students don't hate the subject when choosing next year's subjects after exams.

Conversely, I could say that you should teach Java in first year and move on to more difficult languages like C or assembler (which should be taught side by side, in my mind) later down the line – keeping retention high in the initial years, and drilling down with each successive semester to more systems-level programming.

There's a time and place for Java, which I believe is third year or final year. This will keep Java fresh in the students mind while they are going job hunting after leaving the bosom of academia. This will give them a good head start, as most companies are Java houses in Ireland.

[Nov 08, 2015] Abstraction

nickgeoghegan.net


A few things can confuse programming students, or new people to programming. One of these is abstraction.

Wikipedia says:

In computer science, abstraction is the process by which data and programs are defined with a representation similar to its meaning (semantics), while hiding away the implementation details. Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time. A system can have several abstraction layers whereby different meanings and amounts of detail are exposed to the programmer. For example, low-level abstraction layers expose details of the hardware where the program is run, while high-level layers deal with the business logic of the program.

That might be a bit too wordy for some people, and not at all clear. Here's my analogy of abstraction.

Abstraction is like a car

A car has a few features that make it unique.

If someone can drive a Manual transmission car, they can drive any Manual transmission car. Automatic drivers, sadly, cannot drive a Manual transmission car without "relearning" the car. That is an aside; we'll assume that all cars are Manual transmission cars – as is the case in Ireland for most cars.

Since I can drive my car, which is a Mitsubishi Pajero, that means that I can drive your car – a Honda Civic, Toyota Yaris, Volkswagen Passat.

All I need to know, in order to drive a car – any car – is how to use the brakes, accelerator, steering wheel, clutch and transmission. Since I already know this in my car, I can abstract away your car and its controls.

I do not need to know the inner workings of your car in order to drive it, just the controls. I don't need to know how exactly the brakes work in your car, only that they work. I don't need to know that your car has a turbo charger, only that when I push the accelerator, the car moves. I also don't need to know the exact revs at which I should gear up or gear down (although knowing that would be better for the engine!)

Virtually all controls are the same. Standardization means that the clutch, brake and accelerator are all in the same place, regardless of the car. This means that I do not need to relearn how a car works. To me, a car is just a car, and is interchangeable with any other car.

Abstraction means not caring

As a programmer, or someone using a third party API (for example), abstraction means not caring how the inner workings of some function operate – the linked list data structure, the variable names inside the function, the sorting algorithm used, etc. – just that I have a standard (preferably unchanging) interface to do whatever I need to do.

Abstraction can be thought of as a black box: for input, you get output. That shouldn't be the case, but often is. We need abstraction so that, as programmers, we can concentrate on other aspects of the program – this is the cornerstone of large scale, multi-developer software projects.

[Jan 07, 2013] C Beats Java As Number One Language According To TIOBE Index

"Every January it is traditional to compare the state of the languages as indicated by the TIOBE index. So what's up and what's down this year? There have been headlines that C# is the language of the year, but this is based on a new language index. What the TIOBE index shows is that Java is no longer number one as it has been beaten by C - yes C not C++ or even Objective C."


UnknownSoldier:

> While I agree that C is a bad language, it has no competition in low-level coding.
Mostly agree. Although I prefer turning all the crap in C++ off to get better compiler support.

> Although C++ could take its role and it even fixes many of its shortcomings (e.g. namespaces)
Uh, you don't remember "Embedded C++" back in the late 90's / early 00's?

If you think namespaces are part of the problem, you really don't understand the complexity of C++ at _run_time_ ...

Namely:

* Exception Handling
* RTTI
* dynamic memory allocation and the crappy way new/delete handle out of memory
* dynamic casts
* no _standard_ way to specify order of global constructors/destructors

Embedded systems NEED to be deterministic, otherwise you are just gambling with a time-bomb.

http://en.wikipedia.org/wiki/Embedded_C%2B%2B [wikipedia.org]

marcosdumay

After you accept the constraints of an embedded environment and low-level access, C is not a bad language anymore. Any language usable in that kind of environment is at least as bad as C.

disambiguated

Maybe you're like me. I've been using C for so long that I think I've lost objectivity. C is the first language I learned (other than line numbered basic.) In my mind, C is the language all other languages are judged against. But if there's any truth to this (when did the TIOBE index become the official word?) it makes me wonder if it's not C itself that is making a comeback, but good old fashioned procedural style programming. All these fancy new languages with their polymorphism, encapsulation, templates and functional features have lost their sparkle. Programmers are rediscovering that there isn't anything you can't do (even runtime polymorphism) with just functions, structs, arrays and pointers. It can be easier to understand, and although it may be more typing, it has the virtue that you know exactly what the compiler is going to do with it.

Anonymous Coward

Seriously, for the last fucking time, can we stop posting on Slashdot random shit picked up from TIOBE? The TIOBE index is so completely and utterly full of fail that I can't believe people are STILL clinging onto it as evidence of anything whatsoever.

It shouldn't be traditional to do anything with TIOBE, except perhaps laugh at it or set it on fire.

So, one last time, one final fucking time, I'll try and explain to the 'tards who think it has any merit whatsoever why it absolutely does not.

We start here, with the TIOBE index definition, the horse's mouth explanation of how they cludge together this table of bollocks they call an "index":

http://www.tiobe.com/index.php/content/paperinfo/tpci/tpci_definition.htm [tiobe.com]

First, there is their definition of programming language. They require two criteria, these are:

1) That the language have an entry on Wikipedia

2) That the language be Turing complete

This means that if I go and delete the Wikipedia entry on C, right this moment, it is no longer a programming language, and hence no longer beating anything. Apparently.

The next step is to scroll past the big list of languages to the ratings section, where we see that they state they take the top 9 sites on Alexa that have a search option, and execute the search:

+" programming"

Then weight the results as follows:

Google: 30%
Blogger: 30%
Wikipedia: 15%
YouTube: 9%
Baidu: 6%
Yahoo!: 3%
Bing: 3%
Amazon: 3%

The first problem here is with search engines like Google: I run this query against C++ and note the following:

"About 21,500,000 results"

In other words, Google's figure is hardly anything like a reasonable estimate, because a) most of these results are fucking bollocks, and b) the number is at best a ballpark - and this accounts for 30% of the weighting.

The next problem is that Blogger, Wikipedia, and YouTube account for 54% of the weighting. These are all sites that have user generated content, as such you could literally, right now, pick one of the lowest languages on the list, and go create a bunch of fake accounts, talking about it, and turn it into the fastest growing language of the moment quite trivially.

To cite an example, I just ran their query on English Wikipedia for the PILOT programming language and got one result. A few fake or modified Wikipedia entries later and tada, suddenly PILOT has grown massively in popularity.

The next point is the following:

"Possible false positives for a query are already filtered out in the definition of "hits(PL,SE)". This is done by using a manually determined confidence factor per query."

In other words yes, they apply an utterly arbitrary decision to each language about what does and doesn't count. Or to put it simply, they apply a completely arbitrary factor in which you can have no confidence of being of any actual worth. I say this because further down they have a list of terms they filter out manually, they have a list of the confidence factors they use, and it takes little more than a second to realise massive gaps and failings in these confidence factors.

For example, they have 100% confidence in the language "Scheme" with the exceptions "tv" and "channel" - I mean, really? The word Scheme couldn't possibly be used for anything else? Seriously?

So can we finally put to bed the idea that TIOBE tells us anything of any value whatsoever? As I've pointed out before, a far better methodology would at least take into account important programming sites like Stack Overflow, but ideally you'd simply refer to job advert listings on job sites across the globe - these will tell you far more about what languages are sought after, what languages are being used, and what languages are growing in popularity than any of this shit.

Finally I do recall last year stumbling across a competitor to TIOBE that was at least slightly better but still not ap

lorinc

Java will come back to number 1 in a few years thanks to Android...

EmperorOfCanada

Quite a few people are using the NDK and programming in C++ much to the chagrin of Google. So technically there might be 10-100 lines of Java loading 20,000 lines of C or C++. A great place to get started is: http://www.raywenderlich.com/11283/cocos2d-x-for-ios-and-android-getting-started [raywenderlich.com]

Here they have the most popular iOS game development library ported for programming on android in C++.

mapkinase:

TIOBE programming community index is a measure of popularity of programming languages, calculated from number of search engine results for queries containing the name of the language. [1] The index covers searches in Google, Google Blogs, MSN, Yahoo!, Wikipedia and YouTube.

localman57:

This may be a leading edge indicator. C is sufficiently simple that after your first few months you seldom need to consult documentation. I've got nearly 20 years of experience, and I seldom if ever have to google how to achieve something in C. Algorithms, maybe, but not C syntax. As opposed to very heavy library-based languages, such as C# .NET, where I'm constantly googling, because I typically assume there's already a library that does "that" for me, whatever "that" happens to be.

Tridus:

[The other language popularity index] is called PYPL (PopularitY of Programming Languages), and it ranked C# as #1 and C down in #5 based on a different methodology. Honestly, they both sound pretty silly to me.

https://sites.google.com/site/pydatalog/pypl/PyPL-PopularitY-of-Programming-Language [google.com]


EmperorOfCanada

The bulk of my recent programming has been in Objective C but once I leave API calls my code quickly becomes pretty classic C with elements of C++. Yes I love the simplicity of a foreach type structure where it is brain dead to iterate through some set/hash/array of objects with little or no thought about bounds but once I start to really hammer the data hard I often find my code "degenerating" into c. Instead of a class I will create a structure. Instead of vectors I use arrays. I find the debugging far simpler and the attitude to what can be done changes. In fairly raw C I start having thoughts like: I'll mathematically process 500,000 structures every time someone moves their mouse and then I literally giggle when it not only works but works smoothly. What you largely have in C is if the machine is theoretically able to do it then you can program it. Good mathematics can often optimize things significantly but sometimes you just have brute manipulations that need to be fast.

But on a whole other level my claim with most higher level languages ranging from PHP to .net to Java is that they often make the first 90% of a large project go so very quickly. You seem to jump from prototype to 90% in a flash; but then you hit some roadblocks. The garbage collection is kicking in during animations causing stuttering and the library you are using won't let you entirely stop garbage collection. Or memory isn't being freed quickly enough resulting in the requirement that all the users' machines be upgraded to 16Gb. Then that remaining 10% ends up taking twice as long as the first 90%. Whereas I find with C (or C++) you start slow and end slow but the first 90% actually takes 90% of the final time.

But where C is a project killer is the whole weakest-link-in-the-chain thing. If you have a large project with many programmers, as is typically found in a large business system, working on many different modules that basically operate on the same data set, then a safer language like Java is far, far better. I am pretty sure that if the business programmers working on projects that I have seen had used C instead of Java, those server systems would crash more than once a minute. You can still program pretty badly in Java, but a decent programmer shouldn't blow the system apart. Whereas a decent C programmer might not be good enough for a large project.

So the story is not whether C is better than, say, Java, but what is the best language for any given problem set. I find that broad systems, like those found in the typical business, with many programmers of various skill levels, are ideal for Java. But for deep systems, where you layer more and more difficulty on a single problem, such as real-time robotic vision, C or C++ is far superior. A simple way to figure out the best language is to compare strengths and weaknesses not in general but as they apply to the problem at hand. In a large business system where horsepower is plentiful, garbage collection is good and pointers are only going to be a liability. But if you are pushing up to the limits of what the machine can do, such as in a game, then a crazy pointer dance might be the only possible solution and thus demand C or even ASM.

Lastly do you want your OS programmed in Java?

[Sep 22, 2012] Debugging techniques

If you consider using printf debugging, please check out the use of assertions (see the section called Assertions: defensive programming) and of a debugger (see the section called The debugger); these are often much more effective and time-saving.

There are some circumstances where printf debugging is appropriate. If you want to use it, here are some tips:

Here is a nice way to do it. File debug.h:

#ifndef DEBUG_H
#define DEBUG_H
#include <stdarg.h>

#if defined(NDEBUG) && defined(__GNUC__)
/* gcc's cpp has extensions; it allows for macros with a variable number of
   arguments. We use this extension here to preprocess pmesg away. */
#define pmesg(level, format, args...) ((void)0)
#else
void pmesg(int level, char *format, ...);
/* print a message, if it is considered significant enough.
      Adapted from [K&R2], p. 174 */
#endif

#endif /* DEBUG_H */
        

File debug.c:

#include "debug.h"
#include <stdio.h>

extern int msglevel; /* the higher, the more messages... */

#if defined(NDEBUG) && defined(__GNUC__)
/* Nothing. pmesg has been "defined away" in debug.h already. */
#else
void pmesg(int level, char* format, ...) {
#ifdef NDEBUG
	/* Empty body, so a good compiler will optimise calls
	   to pmesg away */
#else
        va_list args;

        if (level>msglevel)
                return;

        va_start(args, format);
        vfprintf(stderr, format, args);
        va_end(args);
#endif /* NDEBUG */
}
#endif /* NDEBUG && __GNUC__ */

Here, msglevel is a global variable which you have to define; it controls how much debugging output is produced. You can then use pmesg(100, "Foo is %ld\n", foo) to print the value of foo in case msglevel is set to 100 or more.

Note that you can remove all this debugging code from your executable by adding -DNDEBUG to the preprocessor flags: for GCC, the preprocessor will remove it, and for other compilers pmesg will have an empty body, so that calls to it can be optimised away by the compiler. This trick was taken from assert.h; see the next section.
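
A minimal usage sketch (the chosen level 100 is arbitrary; msglevel must be defined exactly once in the program):

#include "debug.h"

int msglevel = 100;  /* the higher, the more messages are printed */

int main(void)
{
        long foo = 42;
        pmesg(100, "Foo is %ld\n", foo);   /* printed, because 100 <= msglevel */
        pmesg(200, "Very verbose\n");      /* suppressed, because 200 > msglevel */
        return 0;
}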

[Jul 09, 2012] Objective-C Overtakes C++, But C Is Number One

Slashdot

mikejuk writes "Although the TIOBE Index has its shortcomings, the finding that Objective-C has overtaken C++ is reiterated in the open source Transparent Language Popularity Index. The reason is, of course, that Objective-C is the language you have to use to create iOS applications - and as iPads and iPhones have risen in popularity, so has Objective-C. If you look at the raw charts then you can see that C++ has been in decline since about 2005 and Objective-C has shot up to overtake it with amazing growth. But the two charts are on different scales: if you plot both on the same chart, you can see that rather than rocketing up, Objective-C has just crawled its way past, and it is as much to do with the decline of C++. It simply hasn't reached the popularity of C++ in its heyday before 2005. However the real story is that C, a raw machine-independent assembler-like language, with no pretense to be object oriented or sophisticated, has beaten all three of the object-oriented heavyweights - Java, C++ and Objective C. Yes, C is number one (and a close second in the transparent index)."

[Jul 05, 2012] Crowd Sourced Malware Reverse Engineering Platform Launched

Slashdot

wiredmikey writes "Security startup CrowdStrike has launched CrowdRE, a free platform that allows security researchers and analysts to collaborate on malware reverse engineering. CrowdRE is adapting the collaborative model common in the developer world to make it possible to reverse engineer malicious code more quickly and efficiently. Collaborative reverse engineering can take two approaches, where all the analysts are working at the same time and sharing all the information instantly, or in a distributed manner, where different people work on different sections and share the results. This means multiple people can work on different parts simultaneously and the results can be combined to gain a full picture of the malware. Google is planning to add CrowdRE integration to BinNavi, a graph-based reverse engineering tool for malware analysis, and the plan is to integrate with other similar tools. Linux and Mac OS support is expected soon, as well."

[Dec 04, 2011] The International Obfuscated C Code Contest

The 20th International Obfuscated C Code Contest is open from
12-Nov-2011 11:00:00 UTC to 12-Jan-2012 12:12:12 UTC.
  1. Read the IOCCC Goals and Rules.
  2. Review the IOCCC guidelines.
  3. The online submission tool is now available.
  4. Goals:
    • To write the most Obscure/Obfuscated C program under the rules below.
    • To show the importance of programming style, in an ironic way.
    • To stress C compilers with unusual code.
    • To illustrate some of the subtleties of the C language.
    • To provide a safe forum for poor C code. :-)
  5. News
    • 1 December 2011: The online submission tool is now available.
    • 1 December 2011: We strongly encourage the use of Markdown in submitted remarks and documentation!
    • 1 December 2011: The guidelines have been updated. Changes are marked with '|'.
    • 13 November 2011: Follow IOCCC announcements on Twitter.
    • 12 November 2011: The 20th IOCCC is now open. Online submissions will be available 2011-12-01.
    • 12 November 2011: The 18th IOCCC and 19th IOCCC results are now online. Please check the Years and Winners pages.
    • Older news has been archived, but is currently unavailable

[Oct 14, 2011] Dennis Ritchie, 70, Dies, Programming Trailblazer - by Steve Lohr

October 13, 2011 | NYTimes.com
Dennis M. Ritchie, who helped shape the modern digital era by creating software tools that power things as diverse as search engines like Google and smartphones, was found dead on Wednesday at his home in Berkeley Heights, N.J. He was 70.

Mr. Ritchie, who lived alone, was in frail health in recent years after treatment for prostate cancer and heart disease, said his brother Bill.

In the late 1960s and early '70s, working at Bell Labs, Mr. Ritchie made a pair of lasting contributions to computer science. He was the principal designer of the C programming language and co-developer of the Unix operating system, working closely with Ken Thompson, his longtime Bell Labs collaborator.

The C programming language, a shorthand of words, numbers and punctuation, is still widely used today, and successors like C++ and Java build on the ideas, rules and grammar that Mr. Ritchie designed. The Unix operating system has similarly had a rich and enduring impact. Its free, open-source variant, Linux, powers many of the world's data centers, like those at Google and Amazon, and its technology serves as the foundation of operating systems, like Apple's iOS, in consumer computing devices.

"The tools that Dennis built - and their direct descendants - run pretty much everything today," said Brian Kernighan, a computer scientist at Princeton University who worked with Mr. Ritchie at Bell Labs.

Those tools were more than inventive bundles of computer code. The C language and Unix reflected a point of view, a different philosophy of computing than what had come before. In the late '60s and early '70s, minicomputers were moving into companies and universities - smaller and at a fraction of the price of hulking mainframes.

Minicomputers represented a step in the democratization of computing, and Unix and C were designed to open up computing to more people and collaborative working styles. Mr. Ritchie, Mr. Thompson and their Bell Labs colleagues were making not merely software but, as Mr. Ritchie once put it, "a system around which fellowship can form."

C was designed for systems programmers who wanted to get the fastest performance from operating systems, compilers and other programs. "C is not a big language - it's clean, simple, elegant," Mr. Kernighan said. "It lets you get close to the machine, without getting tied up in the machine."

Such higher-level languages had earlier been intended mainly to let people without a lot of programming skill write programs that could run on mainframes. Fortran was for scientists and engineers, while Cobol was for business managers.

C, like Unix, was designed mainly to let the growing ranks of professional programmers work more productively. And it steadily gained popularity. With Mr. Kernighan, Mr. Ritchie wrote a classic text, "The C Programming Language," also known as "K. & R." after the authors' initials, whose two editions, in 1978 and 1988, have sold millions of copies and been translated into 25 languages.

Dennis MacAlistair Ritchie was born on Sept. 9, 1941, in Bronxville, N.Y. His father, Alistair, was an engineer at Bell Labs, and his mother, Jean McGee Ritchie, was a homemaker. When he was a child, the family moved to Summit, N.J., where Mr. Ritchie grew up and attended high school. He then went to Harvard, where he majored in applied mathematics.

While a graduate student at Harvard, Mr. Ritchie worked at the computer center at the Massachusetts Institute of Technology, and became more interested in computing than math. He was recruited by the Sandia National Laboratories, which conducted weapons research and testing. "But it was nearly 1968," Mr. Ritchie recalled in an interview in 2001, "and somehow making A-bombs for the government didn't seem in tune with the times."

Mr. Ritchie joined Bell Labs in 1967, and soon began his fruitful collaboration with Mr. Thompson on both Unix and the C programming language. The pair represented the two different strands of the nascent discipline of computer science. Mr. Ritchie came to computing from math, while Mr. Thompson came from electrical engineering.

"We were very complementary," said Mr. Thompson, who is now an engineer at Google. "Sometimes personalities clash, and sometimes they meld. It was just good with Dennis."

Besides his brother Bill, of Alexandria, Va., Mr. Ritchie is survived by another brother, John, of Newton, Mass., and a sister, Lynn Ritchie of Hexham, England.

Mr. Ritchie traveled widely and read voraciously, but friends and family members say his main passion was his work. He remained at Bell Labs, working on various research projects, until he retired in 2007.

Colleagues who worked with Mr. Ritchie were struck by his code - meticulous, clean and concise. His writing, according to Mr. Kernighan, was similar. "There was a remarkable precision to his writing," Mr. Kernighan said, "no extra words, elegant and spare, much like his code."

CINT

CINT is an interpreter for C and C++ code. It is useful, e.g., for situations where rapid development is more important than execution time. Using an interpreter, the compile and link cycle is dramatically reduced, facilitating rapid development. CINT makes C/C++ programming enjoyable even for part-time programmers.

CINT is written in C++ itself, with slightly less than 400,000 lines of code. It is used in production by several companies in banking, integrated devices, and even gaming environments, and of course by ROOT, making it the default interpreter for a large number of high energy physicists all over the world.

Features

CINT covers most of ANSI C (mostly before C99) and ISO C++ 2003. A CINT script can call compiled classes/functions and compiled code can make callbacks to CINT interpreted functions. Utilities like makecint and rootcint automate the process of embedding compiled C/C++ library code as shared objects (as Dynamic Link Library, DLL, or shared library, .so). Source files and shared objects can be dynamically loaded/unloaded without stopping the CINT process. CINT offers a gdb like debugging environment for interpreted programs.

Download

CINT is free software, both in terms of cost and freedom of use: it is licensed under the X11/MIT license. See the included COPYING for details.

The source of CINT 5.18.00 from 2010-07-02 is available here (tar.gz, 2MB).

CINT 5.16.19 from 2007-03-19 is available via anonymous ftp:

To build the source package do:

$ tar xfz cint-5.16.19-source.tar.gz
$ cd cint-5.16.19
$ ./configure
$ gmake 

The current sources of CINT can be downloaded via subversion. From a bash shell (the $ is meant to denote the shell prompt), run

$ svn co http://root.cern.ch/svn/root/trunk/cint cint
$ cd cint

Once you have the sources you can simply update them by running svn up.

You can also download a certain version of CINT using subversion:

$ svn co http://root.cern.ch/svn/cint/tags/v5-16-19 cint-v5.16.19
$ cd cint-v5.16.19

You can build CINT from these sources by running

$ ./configure
$ make -j2

For Windows you will need to install the cygwin package to build CINT from sources.

Before downloading check the release notes of the latest version.

Portability

CINT works on a number of operating systems. Linux, HP-UX, SunOS, Solaris, AIX, Alpha-OSF, IRIX, FreeBSD, NetBSD, NEC EWS4800, NewsOS, BeBox, HI-UX, Windows-NT/95/98/Me, MS-DOS, MacOS, VMS, NextStep, and Convex have all been reported as working at some point in time; Linux, Windows, MacOS and Solaris are actively developed. A number of compilers are supported, i.e. GCC, Microsoft Visual C++, Intel's ICC, HP-CC/aCC, Sun-CC/CC5, IBM-xlC, Compac-cxx, SGI-CC, Borland-C++, Symantec-C++, DJGPP, cygwin-GCC.

The ROOT System

The ROOT system embeds CINT to be able to execute C++ scripts and C++ command line input. CINT also provides extensive RTTI capabilities to ROOT. See the following chapters on how CINT is used in ROOT:

The Authors

CINT is developed by Masaharu Goto, who works for Agilent Technologies, Philippe Canal and Paul Russo at Fermilab, and Leandro Franco, Diego Marcos, and Axel Naumann from CERN.

Limitations

CINT implements a very large subset of C++, but it also has some differences and limitations. We have developed a new version of CINT in which most of these limitations have been removed. This new version is called CINT 7 (aka "new core"); it is now the default for stand-alone CINT installations.

CINT Mailing List

[email protected] is the CINT mailing list. In order to subscribe, send an e-mail to [email protected] containing a line 'subscribe cint [preferred mail address]' where [preferred mail address] is an option. The archive of the mailing list is also available; you can also find it on Nabble.com.

For more detailed CINT information see below:

[Aug 25, 2010] Sometimes the Old Ways Are Best by Brian Kernighan

IEEE Software Nov/Dec 2008, pp.18-19

As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it. One involves a forensic analysis of over 100,000 lines of old C and assembly code from about 1990, and I have to work on Windows XP. The other is a hack to translate code written in weird language L1 into weird language L2 with a program written in scripting language L3, where none of the L's even existed in 1990; this one uses Linux. Thus it's perhaps a bit surprising that I find myself relying on much the same toolset for these very different tasks.

... ... ..

There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back in time. But the tools I use today are mostly the same old ones -- grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur stuck in the past. On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well.

[Aug 22, 2010] Beginner's Guide to Linkers

This article is intended to help C & C++ programmers understand the essentials of what the linker does. I've explained this to a number of colleagues over the years, so I decided it was time to write it down so that it's more widely available (and so that I don't have to explain it again). [Updated March 2009 to include more information on the peculiarities of linking on Windows, plus some clarification on the one definition rule.]

[Nov 19, 2009] nweb a tiny, safe Web server (static pages only)

Have you ever wanted to run a tiny, safe Web server without worrying about using a fully blown Web server that could cause security issues? Do you wonder how to write a program that accepts incoming messages with a network socket? Have you ever just wanted your own Web server to experiment and learn with?

Well, look no further -- nweb is what you need. This is a simple Web server that has only 200 lines of C source code. It runs as a regular user and can't run any server-side scripts or programs, so it can't open up any special privileges or security holes.

This article covers:

nweb only transmits the following types of files to the browser:

If your favorite static file type is not in this list, you can simply add it in the source code and recompile to allow it.
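
The mechanism behind that last sentence is a small extension-to-MIME-type table inside the server. A sketch of the general shape of such a table (illustrative; nweb's actual names may differ):

struct mime_map {
        const char *ext;       /* file extension */
        const char *filetype;  /* MIME type sent in the HTTP header */
};

static const struct mime_map extensions[] = {
        { "html", "text/html"  },
        { "gif",  "image/gif"  },
        { "jpg",  "image/jpeg" },
        { "png",  "image/png"  }   /* add a row here and recompile for a new type */
};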

[Aug 3, 2009] Goanna 1.0.2

Goanna is an Eclipse plugin that does static analysis of C/C++ source code with model checking. It detects many instances of null pointer dereferencing, double free(), buffer overruns, uninitialized variables, and other common programming mistakes.

[Dec 9, 2008] Slashdot What Programming Language For Linux Development

C first, then whatever you want

(Score:5, Insightful)

by darkwing_bmf (178021) on Saturday December 06, @07:47PM (#26016309)

C first. It is the lingua franca of the Unix world. Even if you don't use it for yourself, you have to understand it because so much is written in it. And if you don't understand it, no one will take you seriously. One of my first Linux installs was so I could teach myself C cheaply and I needed a free, as in beer, compiler.

Then after that, any language that you think might be interesting. Try multiple languages. I personally like Ada and there's a free GNAT Ada compiler for Linux.

[Nov 21, 2008] GCC hacks in the Linux kernel

The available C extensions can be classified in several ways. This article puts them in two broad categories:

Functionality extensions

Let's start by exploring some of the GCC tricks that extend the standard C language.

Type discovery

GCC permits the identification of a type through the reference to a variable. This kind of operation permits a form of what's commonly referred to as generic programming. Similar functionality can be found in many modern programming languages such as C++, Ada, and the Java™ language. Linux uses typeof to build type-dependent operations such as min and max. Listing 1 shows how you can use typeof to build a generic macro (from ./linux/include/linux/kernel.h).


Listing 1. Using typeof to build a generic macro
	
#define min(x, y) ({				\
	typeof(x) _min1 = (x);			\
	typeof(y) _min2 = (y);			\
	(void) (&_min1 == &_min2);		\
	_min1 < _min2 ? _min1 : _min2; })
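
The line (void) (&_min1 == &_min2) is the subtle part: comparing the two addresses forces the compiler to warn when the arguments have different types. A small usage sketch (GCC-specific, since the macro relies on statement expressions and typeof):

#include <stdio.h>

#define min(x, y) ({				\
	typeof(x) _min1 = (x);			\
	typeof(y) _min2 = (y);			\
	(void) (&_min1 == &_min2);		\
	_min1 < _min2 ? _min1 : _min2; })

int main(void)
{
	double a = 2.5, b = 1.5;
	printf("%g\n", min(a, b));	/* prints 1.5 */
	/* min(a, 1) would draw a "comparison of distinct pointer types"
	   warning from the type-check line above */
	return 0;
}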

Range extension

GCC includes support for ranges, which can be put to use in many areas of the C language. One of those areas is on case statements within switch/case blocks. In complex conditional structures, you might typically depend on cascades of if statements to achieve the same result that is represented more elegantly in Listing 2 (from ./linux/drivers/scsi/sd.c). The use of switch/case also enables compiler optimization by using a jump table implementation.


Listing 2. Using ranges within case statements
	
static int sd_major(int major_idx)
{
	switch (major_idx) {
	case 0:
		return SCSI_DISK0_MAJOR;
	case 1 ... 7:
		return SCSI_DISK1_MAJOR + major_idx - 1;
	case 8 ... 15:
		return SCSI_DISK8_MAJOR + major_idx - 8;
	default:
		BUG();
		return 0;	/* shut up gcc */
	}
}

Ranges can also be used for initialization, as shown below (from ./linux/arch/cris/arch-v32/kernel/smp.c). In this example, an array of spinlock_t with LOCK_COUNT elements is created. Each element of the array is initialized with the value SPIN_LOCK_UNLOCKED.

/* Vector of locks used for various atomic operations */
spinlock_t cris_atomic_locks[] = { [0 ... LOCK_COUNT - 1] = SPIN_LOCK_UNLOCKED};

Ranges also support more complex initializations. For example, the following code specifies initial values for sub-ranges of an array.

int widths[] = { [0 ... 9] = 1, [10 ... 99] = 2, [100] = 3 };
Zero-length arrays

In standard C, at least one element of an array must be defined. This requirement tends to complicate code design. However, GCC supports the concept of zero-length arrays, which can be particularly useful for structure definitions. This concept is similar to the flexible array member in ISO C99, but it uses a different syntax.

The following example declares an array with zero members at the end of a structure (from ./linux/drivers/ieee1394/raw1394-private.h). This allows the element in the structure to reference memory that follows and is contiguous with the structure instance. You may find this useful in cases where you need to have a variable number of array members.

struct iso_block_store {
        atomic_t refcount;
        size_t data_size;
        quadlet_t data[0];
};
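
The payoff is that the header and its variable-length data can live in a single allocation. A minimal sketch (the struct and function names here are made up for illustration):

#include <stdlib.h>
#include <string.h>

struct packet {
    size_t len;
    unsigned char payload[0];   /* GCC zero-length array */
};

struct packet *packet_new(const unsigned char *src, size_t n)
{
    /* one malloc covers the header and the n payload bytes */
    struct packet *p = malloc(sizeof(*p) + n);
    if (p != NULL) {
        p->len = n;
        memcpy(p->payload, src, n);
    }
    return p;
}

In C99 you would write payload[] (a flexible array member) instead and get the same layout.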

Determining call address

In many instances, you may find it useful or necessary to determine the caller of a given function. GCC provides the built-in function __builtin_return_address for just this purpose. This function is commonly used for debugging, but it has many other uses within the kernel.

As shown in the code below, __builtin_return_address takes an argument called level. The argument defines the level of the call stack for which you want to obtain the return address. For example, if you specify a level of 0, you are requesting the return address of the current function. If you specify a level of 1, you are requesting the return address of the calling function (and so on).

void * __builtin_return_address( unsigned int level );

The local_bh_disable function in the following example (from ./linux/kernel/softirq.c) disables soft interrupts on the local processor to prevent softirqs, tasklets, and bottom halves from running on the current processor. The return address is captured using __builtin_return_address so that it can be used for later tracing purposes.

void local_bh_disable(void)
{
        __local_bh_disable((unsigned long)__builtin_return_address(0));
}
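
In ordinary user code the same built-in can be exercised directly. A minimal sketch:

#include <stdio.h>

void who_called_me(void)
{
    /* level 0 is the address this function will return to,
       i.e. a location inside its caller */
    printf("called from %p\n", __builtin_return_address(0));
}

int main(void)
{
    who_called_me();
    return 0;
}

Be aware that levels greater than 0 are reliable only when the whole call chain keeps frame pointers; with aggressive optimization the result can be meaningless or the call can even crash.
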
Constant detection

GCC provides a built-in function that you can use to determine whether a value is a constant at compile-time. This is valuable information because you can construct expressions that can be optimized through constant folding. The __builtin_constant_p function is used to test for constants.

The prototype for __builtin_constant_p is shown below. Note that __builtin_constant_p cannot verify all constants, because some are not easily proven by GCC.

int __builtin_constant_p( exp )

Linux uses constant detection quite frequently. In the example shown in Listing 3 (from ./linux/include/linux/log2.h), constant detection is used to optimize the roundup_pow_of_two macro. If the expression can be verified as a constant, then a constant expression (which is available for optimization) is used. Otherwise, if the expression is not a constant, another macro function is called to round up the value to a power of two.


Listing 3. Constant detection to optimize a macro function
	
#define roundup_pow_of_two(n)			\
(						\
	__builtin_constant_p(n) ? (		\
		(n == 1) ? 1 :			\
		(1UL << (ilog2((n) - 1) + 1))	\
				   ) :		\
	__roundup_pow_of_two(n)			\
)
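
A quick way to see the built-in in action outside the kernel:

#include <stdio.h>

int main(void)
{
    int n = 10;
    /* a literal is provably constant */
    printf("%d\n", __builtin_constant_p(10));   /* prints 1 */
    /* a variable usually is not, although with optimization
       enabled GCC may prove that n is always 10 here */
    printf("%d\n", __builtin_constant_p(n));
    return 0;
}
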
Function attributes

GCC provides a variety of function-level attributes that allow you to provide more data to the compiler to assist in the optimization process. This section describes some of these attributes that are associated with functionality. The next section describes attributes that affect optimization.

As shown in Listing 4, the attributes are aliased by other symbolic definitions. You can use this as a guide to help read the source references that demonstrate the use of the attributes (as defined in ./linux/include/linux/compiler-gcc3.h).


Listing 4. Function attribute definitions
	
# define __inline__     __inline__      __attribute__((always_inline))
# define __deprecated           __attribute__((deprecated))
# define __attribute_used__     __attribute__((__used__))
# define __attribute_const__     __attribute__((__const__))
# define __must_check            __attribute__((warn_unused_result))

The definitions shown in Listing 4 reflect some of the function attributes available in GCC. They are also some of the most useful function attributes in the Linux kernel. Following are explanations of how you can best use these attributes:

- always_inline tells GCC to inline the function regardless of the optimization level, instead of treating inline as a mere hint.
- deprecated causes GCC to emit a warning at every call site, which is useful for steering developers away from obsolete interfaces.
- __used__ tells the compiler that the function is used even if no call to it is visible, so it must not be optimized away.
- __const__ declares that the function has no side effects and that its result depends only on its arguments, so repeated calls can be folded into one.
- warn_unused_result makes GCC warn whenever a caller ignores the function's return value, which is valuable for functions whose errors must be checked.

Following are examples of these functions being used in the Linux kernel. The deprecated example comes from the architecture non-specific kernel (./linux/kernel/resource.c), and the const example comes from the IA64 kernel source (./linux/arch/ia64/kernel/unwind.c).

int __deprecated __check_region(struct resource 
    *parent, unsigned long start, unsigned long n)

static enum unw_register_index __attribute_const__ 
    decode_abreg(unsigned char abreg, int memory)
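
You can watch two of these attributes do their work in a few lines of user code (the function names are invented for the example):

#include <stdio.h>

__attribute__((deprecated))
int old_api(void) { return -1; }

__attribute__((warn_unused_result))
int must_check(void) { return 0; }

int main(void)
{
    old_api();      /* gcc warns: 'old_api' is deprecated */
    must_check();   /* gcc warns about the ignored return value */
    return 0;
}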

Using Inline in Perl

The new Inline module for Perl allows you to write code in other languages (like C, Python, Tcl, or Java) and toss it into Perl scripts with wild abandon. Unlike previous ways of interfacing C code with Perl, Inline is very easy to use, and very much in keeping with the Perl philosophy. One extremely useful application of Inline is to write quick wrapper code around a C-language library to use it from Perl, thus turning Perl into (as far as I'm concerned) the best testing platform on the planet.

Perl has always been pathetically eclectic, but until now it hasn't been terribly easy to make it work with other languages or with libraries that weren't constructed specifically for it. You had to write interface code in the XS language (or get SWIG to do that for you), build an organized module, and generally keep track of a whole lot of details.

But now things have changed. The Inline module, written and actively (very actively) maintained by Brian Ingerson, provides facilities to bind other languages to Perl. In addition its sub-modules (Inline::C, Inline::Python, Inline::Tcl, Inline::Java, Inline::Foo, etc.) allow you to embed those languages directly in Perl files, where they will be found, built, and dynaloaded into Perl in a completely transparent manner. The user of your script will never know the difference, except that the first invocation of Inline-enabled code takes a little time to complete the compilation of the embedded code.

The world's simplest Inline::C program

Just to show you what I mean, let's look at the simplest possible Inline program; this uses an embedded C function, but you can do substantially the same thing with any other language that has Inline support.


Listing 1. Inline "Hello, world"
use Inline C => <<'END_C';

void greet() {
  printf("Hello, world!
");
}
END_C

greet;

Naturally, what the code does is obvious. It defines a C-language function to do the expected action, and then it treats it as a Perl function thereafter. In other words, Inline does exactly what an extension module should do. The question that may be uppermost in your mind is, "How does it do that?". The answer is pretty much what you'd expect: it takes your C code, builds an XS file around it in the same way that a human extension module writer would, builds that module, then loads it. Subsequent invocations of the code will simply find the pre-built module already there, and load it directly.

You can even invoke Inline at runtime by using the Inline->bind function. I don't want to do anything more than dangle that tantalizing fact before you, because there's nothing special about it besides the point that you can do it if you want to.

[May 13, 2008] cstring 3.4.4 by Dr Proctor

About: cstring is a small and simple platform-independent C library for the definition and manipulation of expandable C-style strings. Strings are represented as instances of the cstring_t structure, and manipulated by the library's functions. Its features include selection of different allocator pools, mapping cstring_t instances as views onto existing memory areas, efficient work-ahead memory optimization, and minimal link requirements.

Changes: This release incorporates support for the Safe String library and for the Win64 platform.

[Apr 25, 2008] Interview with Donald Knuth by Donald E. Knuth, Andrew Binstock

A very important interview. See especially the notes on multicore computers and literate programming...

Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Andrew Binstock: You are one of the fathers of the open-source revolution, even if you aren't widely heralded as such. You previously have stated that you released TeX as open source because of the problem of proprietary implementations at the time, and to invite corrections to the code -- both of which are key drivers for open-source projects today. Have you been surprised by the success of open source since that time?

Donald Knuth: The success of open source code is perhaps the only thing in the computer field that hasn't surprised me during the past several decades. But it still hasn't reached its full potential; I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.

For example, open-source code can produce thousands of binaries, tuned perfectly to the configurations of individual users, whereas commercial software usually will exist in only a few versions. A generic binary executable file must include things like inefficient "sync" instructions that are totally inappropriate for many installations; such wastage goes away when the source code is highly configurable. This should be a huge win for open source.

Yet I think that a few programs, such as Adobe Photoshop, will always be superior to competitors like the Gimp -- for some reason, I really don't know why! I'm quite willing to pay good money for really good software, if I believe that it has been produced by the best programmers.

Remember, though, that my opinion on economic questions is highly suspect, since I'm just an educator and scientist. I understand almost nothing about the marketplace.

Andrew: A story states that you once entered a programming contest at Stanford (I believe) and you submitted the winning entry, which worked correctly after a single compilation. Is this story true? In that vein, today's developers frequently build programs writing small code increments followed by immediate compilation and the creation and running of unit tests. What are your thoughts on this approach to software development?

Donald: The story you heard is typical of legends that are based on only a small kernel of truth. Here's what actually happened: John McCarthy decided in 1971 to have a Memorial Day Programming Race. All of the contestants except me worked at his AI Lab up in the hills above Stanford, using the WAITS time-sharing system; I was down on the main campus, where the only computer available to me was a mainframe for which I had to punch cards and submit them for processing in batch mode. I used Wirth's ALGOL W system (the predecessor of Pascal). My program didn't work the first time, but fortunately I could use Ed Satterthwaite's excellent offline debugging system for ALGOL W, so I needed only two runs. Meanwhile, the folks using WAITS couldn't get enough machine cycles because their machine was so overloaded. (I think that the second-place finisher, using that "modern" approach, came in about an hour after I had submitted the winning entry with old-fangled methods.) It wasn't a fair contest.

As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I'm feeling my way in a totally unknown environment and need feedback about what works and what doesn't. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."

Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you've published for Volume 4 of The Art of Computer Programming (TAOCP) doesn't seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you're currently working on?

Donald: The field of combinatorial algorithms is so vast that I'll be lucky to pack its sequential aspects into three or four physical volumes, and I don't think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity.

Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions?

Donald: I don't want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they're trying to pass the blame for the future demise of Moore's Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won't be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific -- until it turned out that the wished-for compilers were basically impossible to write.

Let me put it this way: During the past 50 years, I've written well over a thousand programs, many of which have substantial size. I can't think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.[1]

How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I'm wrong.

I know that important applications for parallelism exist -- rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years.

Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts. (Similarly, when I prepare the third edition of Volume 3 I plan to rip out much of the material about how to sort on magnetic tapes. That stuff was once one of the hottest topics in the whole software field, but now it largely wastes paper when the book is printed.)

The machine I use today has dual processors. I get to use them both only when I'm running two independent jobs at the same time; that's nice, but it happens only a few minutes every week. If I had four processors, or eight, or more, I still wouldn't be any better off, considering the kind of work I do -- even though I'm using my computer almost every day during most of the day. So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it's a pipe dream. (No -- that's the wrong metaphor! "Pipelines" actually work for me, but threads don't. Maybe the word I want is "bubble.")

From the opposite point of view, I do grant that web browsing probably will get better with multicores. I've been talking about my technical work, however, not recreation. I also admit that I haven't got many bright ideas about what I wish hardware designers would provide instead of multicores, now that they've begun to hit a wall with respect to sequential computation. (But my MMIX design contains several ideas that would substantially improve the current performance of the kinds of programs that concern me most -- at the cost of incompatibility with legacy x86 programs.)

Andrew: One of the few projects of yours that hasn't been embraced by a widespread community is literate programming. What are your thoughts about why literate programming didn't catch on? And is there anything you'd have done differently in retrospect regarding literate programming?

Donald: Literate programming is a very personal thing. I think it's terrific, but that might well be because I'm a very strange person. It has tens of thousands of fans, but not millions.

In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay -- I'd give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. Since they're understood by a vast community of programmers, most people have no big incentive to change, just as I'm not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched).

Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn't taken the whole world by storm. He observed that a small percentage of the world's population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets.

Yet to me, literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s -- it has actually been indispensable at times. Some of my major programs, such as the MMIX meta-simulator, could not have been written with any other methodology that I've ever heard of. The complexity was simply too daunting for my limited brain to handle; without literate programming, the whole enterprise would have flopped miserably.

If people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming. Literate programming is what you need to rise above the ordinary level of achievement. But I don't believe in forcing ideas on anybody. If literate programming isn't your style, please forget it and do what you like. If nobody likes it but me, let it die.

On a positive note, I've been pleased to discover that the conventions of CWEB are already standard equipment within preinstalled software such as Makefiles, when I get off-the-shelf Linux these days.

Andrew: In Fascicle 1 of Volume 1, you reintroduced the MMIX computer, which is the 64-bit upgrade to the venerable MIX machine comp-sci students have come to know over many years. You previously described MMIX in great detail in MMIXware. I've read portions of both books, but can't tell whether the Fascicle updates or changes anything that appeared in MMIXware, or whether it's a pure synopsis. Could you clarify?

Donald: Volume 1 Fascicle 1 is a programmer's introduction, which includes instructive exercises and such things. The MMIXware book is a detailed reference manual, somewhat terse and dry, plus a bunch of literate programs that describe prototype software for people to build upon. Both books define the same computer (once the errata to MMIXware are incorporated from my website). For most readers of TAOCP, the first fascicle contains everything about MMIX that they'll ever need or want to know.

I should point out, however, that MMIX isn't a single machine; it's an architecture with almost unlimited varieties of implementations, depending on different choices of functional units, different pipeline configurations, different approaches to multiple-instruction-issue, different ways to do branch prediction, different cache sizes, different strategies for cache replacement, different bus speeds, etc. Some instructions and/or registers can be emulated with software on "cheaper" versions of the hardware. And so on. It's a test bed, all simulatable with my meta-simulator, even though advanced versions would be impossible to build effectively until another five years go by (and then we could ask for even further advances just by advancing the meta-simulator specs another notch).

Suppose you want to know if five separate multiplier units and/or three-way instruction issuing would speed up a given MMIX program. Or maybe the instruction and/or data cache could be made larger or smaller or more associative. Just fire up the meta-simulator and see what happens.

Andrew: As I suspect you don't use unit testing with MMIXAL, could you step me through how you go about making sure that your code works correctly under a wide variety of conditions and inputs? If you have a specific work routine around verification, could you describe it?

Donald: Most examples of machine language code in TAOCP appear in Volumes 1-3; by the time we get to Volume 4, such low-level detail is largely unnecessary and we can work safely at a higher level of abstraction. Thus, I've needed to write only a dozen or so MMIX programs while preparing the opening parts of Volume 4, and they're all pretty much toy programs -- nothing substantial. For little things like that, I just use informal verification methods, based on the theory that I've written up for the book, together with the MMIXAL assembler and MMIX simulator that are readily available on the Net (and described in full detail in the MMIXware book).

That simulator includes debugging features like the ones I found so useful in Ed Satterthwaite's system for ALGOL W, mentioned earlier. I always feel quite confident after checking a program with those tools.

Andrew: Despite its formulation many years ago, TeX is still thriving, primarily as the foundation for LaTeX. While TeX has been effectively frozen at your request, are there features that you would want to change or add to it, if you had the time and bandwidth? If so, what are the major items you add/change?

Donald: I believe changes to TeX would cause much more harm than good. Other people who want other features are creating their own systems, and I've always encouraged further development -- except that nobody should give their program the same name as mine. I want to take permanent responsibility for TeX and Metafont, and for all the nitty-gritty things that affect existing documents that rely on my work, such as the precise dimensions of characters in the Computer Modern fonts.

Andrew: One of the little-discussed aspects of software development is how to do design work on software in a completely new domain. You were faced with this issue when you undertook TeX: No prior art was available to you as source code, and it was a domain in which you weren't an expert. How did you approach the design, and how long did it take before you were comfortable entering into the coding portion?

Donald: That's another good question! I've discussed the answer in great detail in Chapter 10 of my book Literate Programming, together with Chapters 1 and 2 of my book Digital Typography. I think that anybody who is really interested in this topic will enjoy reading those chapters. (See also Digital Typography Chapters 24 and 25 for the complete first and second drafts of my initial design of TeX in 1977.)

Andrew: The books on TeX and the program itself show a clear concern for limiting memory usage -- an important problem for systems of that era. Today, the concern for memory usage in programs has more to do with cache sizes. As someone who has designed a processor in software, the issues of cache-aware and cache-oblivious algorithms surely must have crossed your radar screen. Is the role of processor caches on algorithm design something that you expect to cover, even if indirectly, in your upcoming work?

Donald: I mentioned earlier that MMIX provides a test bed for many varieties of cache. And it's a software-implemented machine, so we can perform experiments that will be repeatable even a hundred years from now. Certainly the next editions of Volumes 1-3 will discuss the behavior of various basic algorithms with respect to different cache parameters.

In Volume 4 so far, I count about a dozen references to cache memory and cache-friendly approaches (not to mention a "memo cache," which is a different but related idea in software).

Andrew: What set of tools do you use today for writing TAOCP? Do you use TeX? LaTeX? CWEB? Word processor? And what do you use for the coding?

Donald: My general working style is to write everything first with pencil and paper, sitting beside a big wastebasket. Then I use Emacs to enter the text into my machine, using the conventions of TeX. I use tex, dvips, and gv to see the results, which appear on my screen almost instantaneously these days. I check my math with Mathematica.

I program every algorithm that's discussed (so that I can thoroughly understand it) using CWEB, which works splendidly with the GDB debugger. I make the illustrations with MetaPost (or, in rare cases, on a Mac with Adobe Photoshop or Illustrator). I have some homemade tools, like my own spell-checker for TeX and CWEB within Emacs. I designed my own bitmap font for use with Emacs, because I hate the way the ASCII apostrophe and the left open quote have morphed into independent symbols that no longer match each other visually. I have special Emacs modes to help me classify all the tens of thousands of papers and notes in my files, and special Emacs keyboard shortcuts that make bookwriting a little bit like playing an organ. I prefer rxvt to xterm for terminal input. Since last December, I've been using a file backup system called backupfs, which meets my need beautifully to archive the daily state of every file.

According to the current directories on my machine, I've written 68 different CWEB programs so far this year. There were about 100 in 2007, 90 in 2006, 100 in 2005, 90 in 2004, etc. Furthermore, CWEB has an extremely convenient "change file" mechanism, with which I can rapidly create multiple versions and variations on a theme; so far in 2008 I've made 73 variations on those 68 themes. (Some of the variations are quite short, only a few bytes; others are 5KB or more. Some of the CWEB programs are quite substantial, like the 55-page BDD package that I completed in January.) Thus, you can see how important literate programming is in my life.

I currently use Ubuntu Linux, on a standalone laptop -- it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux. Incidentally, with Linux I much prefer the keyboard focus that I can get with classic FVWM to the GNOME and KDE environments that other people seem to like better. To each his own.

Andrew: You state in the preface of Fascicle 0 of Volume 4 of TAOCP that Volume 4 surely will comprise three volumes and possibly more. It's clear from the text that you're really enjoying writing on this topic. Given that, what is your confidence in the note posted on the TAOCP website that Volume 5 will see light of day by 2015?

Donald: If you check the Wayback Machine for previous incarnations of that web page, you will see that the number 2015 has not been constant.

You're certainly correct that I'm having a ball writing up this material, because I keep running into fascinating facts that simply can't be left out-even though more than half of my notes don't make the final cut.

Precise time estimates are impossible, because I can't tell until getting deep into each section how much of the stuff in my files is going to be really fundamental and how much of it is going to be irrelevant to my book or too advanced. A lot of the recent literature is academic one-upmanship of limited interest to me; authors these days often introduce arcane methods that outperform the simpler techniques only when the problem size exceeds the number of protons in the universe. Such algorithms could never be important in a real computer application. I read hundreds of such papers to see if they might contain nuggets for programmers, but most of them wind up getting short shrift.

From a scheduling standpoint, all I know at present is that I must someday digest a huge amount of material that I've been collecting and filing for 45 years. I gain important time by working in batch mode: I don't read a paper in depth until I can deal with dozens of others on the same topic during the same week. When I finally am ready to read what has been collected about a topic, I might find out that I can zoom ahead because most of it is eminently forgettable for my purposes. On the other hand, I might discover that it's fundamental and deserves weeks of study; then I'd have to edit my website and push that number 2015 closer to infinity.

Andrew: In late 2006, you were diagnosed with prostate cancer. How is your health today?

Donald: Naturally, the cancer will be a serious concern. I have superb doctors. At the moment I feel as healthy as ever, modulo being 70 years old. Words flow freely as I write TAOCP and as I write the literate programs that precede drafts of TAOCP. I wake up in the morning with ideas that please me, and some of those ideas actually please me also later in the day when I've entered them into my computer.

On the other hand, I willingly put myself in God's hands with respect to how much more I'll be able to do before cancer or heart disease or senility or whatever strikes. If I should unexpectedly die tomorrow, I'll have no reason to complain, because my life has been incredibly blessed. Conversely, as long as I'm able to write about computer science, I intend to do my best to organize and expound upon the tens of thousands of technical papers that I've collected and made notes on since 1962.

Andrew: On your website, you mention that the Peoples Archive recently made a series of videos in which you reflect on your past life. In segment 93, "Advice to Young People," you advise that people shouldn't do something simply because it's trendy. As we know all too well, software development is as subject to fads as any other discipline. Can you give some examples that are currently in vogue, which developers shouldn't adopt simply because they're currently popular or because that's the way they're currently done? Would you care to identify important examples of this outside of software development?

Donald: Hmm. That question is almost contradictory, because I'm basically advising young people to listen to themselves rather than to others, and I'm one of the others. Almost every biography of every person whom you would like to emulate will say that he or she did many things against the "conventional wisdom" of the day.

Still, I hate to duck your questions even though I also hate to offend other people's sensibilities-given that software methodology has always been akin to religion. With the caveat that there's no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I've ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other's code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you're totally convinced that reusable code is wonderful, I probably won't be able to sway you anyway, but you'll never convince me that reusable code isn't mostly a menace.

Here's a question that you may well have meant to ask: Why is the new book called Volume 4 Fascicle 0, instead of Volume 4 Fascicle 1? The answer is that computer programmers will understand that I wasn't ready to begin writing Volume 4 of TAOCP at its true beginning point, because we know that the initialization of a program can't be written until the program itself takes shape. So I started in 2005 with Volume 4 Fascicle 2, after which came Fascicles 3 and 4. (Think of Star Wars, which began with Episode 4.)

[Apr 24, 2008] Xcoral 3.47 by Lionel Fournigault

About: Xcoral is a multi-window mouse-based text editor for Unix/X11 with syntax highlighting and auto-indentation. A built-in browser enables you to navigate through C functions, C++ and Java classes, methods, files, and attributes. This browser is very fast and self-updates automatically after file modifications. An ANSI C Interpreter (Smac) is also built-in to dynamically extend the editor's facilities (with user functions, keybindings, modes, etc).

Changes: Bugfixes.

[Feb 4, 2008] Sunifdef 3.1.3 (Stable) by Mike Kinghan

About: Sunifdef is a command line tool for eliminating superfluous preprocessor clutter from C and C++ source files. It is a more powerful successor to the FreeBSD 'unifdef' tool. Sunifdef is most useful to developers of constantly evolving products with large code bases, where preprocessor conditionals are used to configure the feature sets, APIs or implementations of different releases. In these environments, the code base steadily accumulates #ifdef-pollution as transient configuration options become obsolete. Sunifdef can largely automate the recurrent task of purging redundant #if logic from the code.

Changes: Six bugs are fixed in this release. Five of these fixes tackle longstanding defects of sunifdef's parsing and evaluation of integer constants, a niche that has received little scrutiny since the tool branched from unifdef. This version provides robust parsing of hex, decimal, and octal numerals and arithmetic on them. However, sunifdef still evaluates all integer constants as ints and performs signed integer arithmetic upon them. This falls short of emulating the C preprocessor's arithmetic in limit cases, which is an unfixed defect.

[Feb 4, 2008] Automated Testing Framework 0.4

About: ATF is a collection of libraries and utilities designed to ease unattended application testing in the hands of developers and end users of a specific piece of software. Tests can currently be written in C/C++ or POSIX shell and, contrary to other testing frameworks, ATF tests are installed into the system alongside any other application files. This allows the end user to easily verify that the software behaves correctly on her system. Furthermore, the results of the test suites can be collected into nicely-formatted reports to simplify their visualization and analysis.

Changes: This release adds preliminary documentation on the C++ and shell interfaces to write tests, mainly directed to developers wishing to adopt ATF. It adds a way to specify required architectures and machines for given tests through the require.arch and require.machine properties; if the platform running the tests does not fulfill the requirements, the tests are simply skipped. It adds the ability to limit the maximum time a test case can last through the timeout property, killing tests that get stalled. There are many portability fixes, especially to SunOS, and small improvements all around.

[Nov 19, 2007] freshmeat.net Project details for Simplified Wrapper and Interface Generator

SWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages. SWIG is primarily used with common scripting languages such as Perl, PHP, Python, Tcl/Tk, and Ruby; however, the list of supported languages also includes non-scripting languages such as C#, Common Lisp (CLISP, Allegro CL, UFFI), Java, Modula-3, OCAML, and R. Several interpreted and compiled Scheme implementations (Guile, MzScheme, Chicken) are also supported. SWIG is most commonly used to create high-level interpreted or compiled programming environments and user interfaces, and as a tool for testing and prototyping C/C++ software. SWIG can also export its parse tree in the form of XML and Lisp s-expressions.

Release focus: Minor feature enhancements

Changes:
shared_ptr support was added for Java and C#. STL support for Ruby was enhanced. Windows support for R was added. A long-standing memory leak in the PHP module was fixed. Numerous fixes and minor enhancements were made for Allegrocl, C#, cffi, Chicken, Guile, Java, Lua, Ocaml, Perl, PHP, Python, Ruby, and Tcl. Warning support was improved.

[Feb 20, 2007] Dakshina's Blog Weblog

Getting the output of a shell command from a C program using popen

Sometimes it's necessary to access the output of a shell command (more than just its return value) in a C program. One way would be to redirect it to a file and then read the file; the other is to use the popen function.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char cmd[80];
    FILE *fptr;
    char out[256];
    int ret;

    strcpy(cmd, "ls -l");
    fptr = popen(cmd, "r");   /* run the command and read its stdout */
    if (fptr == NULL) {
        perror("popen");
        return 1;
    }
    while (fgets(out, sizeof(out), fptr) != NULL)
        fputs(out, stdout);
    ret = pclose(fptr);       /* pclose returns the command's exit status */
    return ret == -1 ? 1 : 0;
}

/* Note: tested with Solaris 10 gcc only */

[Jan 15, 2007] Splint Home Page

Splint is a tool for statically checking C programs for security vulnerabilities and coding mistakes. With minimal effort, Splint can be used as a better lint. If additional effort is invested adding annotations to programs, Splint can perform stronger checking than can be done by any standard lint.
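
Annotations are ordinary comments with a special syntax, so they cost nothing at compile time. Here is a small sketch of the kind of annotation Splint understands (the lookup function is invented for illustration): the /*@null@*/ annotation tells Splint that the function may return NULL, so Splint flags any caller that dereferences the result without checking it first.

#include <stdlib.h>

/*@null@*/ const char *lookup(int key)
{
    return (key == 0) ? "zero" : NULL;
}

int main(void)
{
    const char *s = lookup(1);
    if (s == NULL)      /* remove this check and Splint complains */
        return 1;
    return s[0] == 'z' ? 0 : 1;
}

Run it as: splint lookup.c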

[Nov 26, 2006] Dru's Blog: Error Handling - Error Codes, Exceptions and Beyond

About 10 months ago, I was writing a library. As I was writing it, I started to look at the whole issue of notifying the caller of errors. In typical fashion, I tried to optimize the error handling problem rather than just doing the right thing and using error codes. I did a ton of research. Here is a current list of links and articles on the subject.

Getting Started

To get you started here are some good starting points. They both received a lot of attention on the internet.

A colorful post by Damien Katz.

A nice opinion piece that is pro-error codes by the famous Joel of Joel on Software.

Read my original post with excellent comments by Daniel Lyons, Paul Clegg, and Neville of the North.

Nutshell

The default and standard way of handling errors since the beginning is to just use error codes with some convention for noticing them. For example, you could document the error condition in an API and then set a global variable with the actual code. It is up to the programmer calling the function to notice the error and do the right thing.

This is the technique used by operating systems and most libraries. Historically, these systems have never been consistent or compatible with other conventions. The most evolved system for this would probably be the Microsoft COM system. All functions return an HRESULT, which is essentially an error code.

The next system was the 'exception-handling' system. In this system errors cannot be ignored. Exception handlers are declared, optionally, at a given scope. If an exception is thrown (i.e., an error has occurred), handlers are searched up the stack until a matching handler is found.

IMHO, the exception system isn't used properly in 90% of the cases. There is a fine balance between a soft error and something exceptional. The syntax also tends to get in the way for even the simplest of errors. I agree that there should be errors that are not ignored, but there has to be a better way.

So, the old skoolers are 'we use error codes, and we like them, dammit' - a.k.a. super-disciplined programming, usually for real-time, embedded, and smaller systems.

The new schoolers are 'you have to be kidding about error codes, use exceptions' - a.k.a. yeah, we use exceptions, that is what the language gives us… and by the way, no, we don't mind typing on our keyboards a lot.

Somehow, there has to be a better way. Maybe it will be system- or application-specific.

Moving On - Old / New Ideas

If you don't mind it being a C++ article, here is an amazing one from Andrei Alexandrescu and Petru Marginean. (Andrei is widely known for his great work on policy-based design in C++, which is excellent.) The article is well written and practical. In fact, the idea was so good that the language D made it part of the language.

Here is an example:

void User::AddFriend(User& newFriend)
{
    friends_.push_back(&newFriend);
    try
    {
        pDB_->AddFriend(GetName(), newFriend.GetName());
    }
    catch (...)
    {
        friends_.pop_back();
        throw;
    }
}

10 lines, and this is for the super-simple example.

void User::AddFriend(User& newFriend)
{
    friends_.push_back(&newFriend);
    ScopeGuard guard = MakeObjGuard(friends_, &UserCont::pop_back);
    pDB_->AddFriend(GetName(), newFriend.GetName());
    guard.Dismiss();
}

In D it would look even cleaner:

void User::AddFriend(User& newFriend)
{
    friends_.push_back(&newFriend);
    scope(failure) friends_.pop_back();
    pDB_->AddFriend(GetName(), newFriend.GetName());
}

IMHO, I think exception handling will move more towards systems like this. Higher level, simpler and cleaner.

Other interesting systems are the ones developed for Common Lisp, Erlang, and Smalltalk. I'm sure Haskell has something to say about this as well.

The Common Lisp and Smalltalk ones are similar. Instead of forcing a mechanism like most exception handlers, these systems give the exception 'catcher' the choice of retrying or doing something different at the point of the exception. Very powerful.

Speaking of Smalltalk, here is an excellent article called Subsystem Exception Handling in Smalltalk. I highly recommend it.

My Recommendation

If you are building a library, use error codes. Error codes are much easier to turn into exceptions by the language wrapper that will eventually be built on top.

When programming, don't get trapped into thinking about the little picture. A lot of these errors are just pawns in the grand scheme of ensuring that you have all of your resources in place before you begin your task at hand. If you present your code in that manner, it will be much easier to understand for all parties.

More Links

Error Codes vs. Exceptions by Damien Katz.

opinion piece that is pro-error codes by the famous Joel of Joel on Software.

Read my original post with excellent comments by Daniel Lyons, Paul Clegg, and Neville of the North.

Microsoft COM

D Language - Exception Safe Programming

Subsystem Exception Handling in Smalltalk - nice section on history as well

http://www.gigamonkeys.com/book/beyond-exception-handling-conditions-and-restarts.html

A nice long thread on comp.lang.c++.moderated

Slightly Wacky, But Neat

http://www.halfbakery.com/idea/C20exception20handling_20macros
http://www.nicemice.net/cexcept/
http://home.rochester.rr.com/bigbyofrocny/GEF/
http://www.on-time.com/ddj0011.htm

[Oct 29, 2006] Project details for doxygen by Dimitri van Heesch

Oct 29, 2006 | freshmeat.net

doxygen 1.5.1

About: Doxygen is a cross-platform, JavaDoc-like documentation system for C++, C, Objective-C, C#, Java, IDL, Python, and PHP. Doxygen can be used to generate an on-line class browser (in HTML) and/or an off-line reference manual (in LaTeX or RTF) from a set of source files. Doxygen can also be configured to extract the code structure from undocumented source files. This includes dependency graphs, class diagrams, and hyperlinked source code. This type of information can be very useful to quickly find your way in large source distributions.

Changes: This release fixes a number of bugs that could cause it to crash under certain conditions or produce invalid output.

[Oct 4, 2006] C Programming Tutorial Make and Makefiles

Make allows a programmer to easily keep track of a project by maintaining current versions of their programs from separate sources. Make can automate various tasks for you -- not only compiling the proper branch of source code from the project tree, but also helping you automate other tasks, such as cleaning directories, organizing output, and even debugging.

[Sept 27, 2006] Stevey's Blog Rants Get Famous By Not Programming

I agree with your ramblings, although by chance I happen to have one counter-example - John Carmack of id Software. The first Quake really was an amazing technical achievement (real-time texture-mapped 3D graphics done in software that looked good on a Pentium 75?!?). And if you look at the source code (which you can download for free), it's some of the prettiest, easy-to-follow C code I've ever seen. And aside from a few interviews, Carmack hasn't written smack.

Brennen Bearnes said...
Re: Carmack, I think Quake and its relatives fall neatly into the category of frameworks or environments. Remember the ecosystem of Quake extensions, mods, etc.?

[Sept 9, 2006] Errors: errno in UNIX programs

Error reporting in C programs

C is the most commonly used programming language on UNIX platforms. Despite the popularity of other languages on UNIX (such as Java™, C++, Python, or Perl), all of the application programming interfaces (APIs) of UNIX systems have been created for C. The standard C library, part of every C compiler suite, is the foundation upon which UNIX standards, such as the Portable Operating System Interface (POSIX) and the Single UNIX Specification, were created.

When C and UNIX were developed in the early 1970s, the concept of exceptions, which interrupt the flow of an application when some condition occurs, was fairly new or non-existent. The libraries had to use other conventions for reporting errors.

While you're poring over the C library, or almost any other UNIX library, you'll discover two common ways of reporting failures:

- The function returns an error code directly.
- The function returns a sentinel value, such as -1 or NULL, and stores the reason for the failure in the errno global variable.

The errno global variable (or, more accurately, symbol, since on systems with a thread-safe C library, errno is actually a function or macro that ensures each thread has its own errno) is defined in the <errno.h> system header, along with all of its possible values defined as standard constants.

Many of the functions in the first category actually return one of the standard errno codes, but it's impossible to tell how a function behaves and what it returns without checking the Returns section of the manual page. If you're lucky, the function's man page lists all of its possible return values and what they mean in the context of this particular function. Third party libraries often have a single convention that's followed by all of the functions in the library but, again, you'll have to check the library's documentation before making any assumptions.

Let's take a quick look at some code demonstrating errno and a couple of functions that you can use to transform that error code into something more human-readable.
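
The excerpt ends before the code itself, so here is a minimal sketch of the convention just described: a failing call returns a sentinel value (NULL here) and leaves the reason in errno, which strerror turns into readable text:

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    FILE *f = fopen("/no/such/file", "r");
    if (f == NULL) {
        /* strerror maps the errno code to a human-readable message;
           perror("fopen") would print an equivalent one-liner */
        fprintf(stderr, "fopen failed: %s (errno %d)\n",
                strerror(errno), errno);
        return 1;
    }
    fclose(f);
    return 0;
}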

[Feb 14, 2006] Free Microsoft compilers

[Nov 9, 2005] 10 Things I Hate About UNIX

A primitive and misguided view of C. The main value of their piece is that it contains most of the typical arguments with which people who have no clue about software engineering attack the language. The author does not understand that a higher-level language such as TCL or a similar scripting language should be used along with, not instead of, C.

The C language was written to enable UNIX to be portable. It's designed to produce good code for the PDP-11, and very closely maps to that machine's capabilities. There's no support for concurrency in C, for example. In a modern language such as Erlang, primitives exist in the language for creating different threads of execution and sending messages between them. This is very important today, when it's a lot cheaper to buy two computers than one that's twice as fast.

C also lacks a number of other features present in modern languages. The most obvious is lack of support for strings. The lack of bounds-testing on arrays is another example -- one responsible for a large number of security holes in UNIX software. Another aspect of C that's responsible for several security holes is the fact that integers in C have a fixed size -- if you try to store something that doesn't fit, you get an overflow. Unfortunately, this overflow isn't handled nicely. In Smalltalk, the overflow would be caught transparently to the developer and the integer increased in size to fit it. In other low-level languages, the assignment would generate an error that could be handled by the program. In C, it's silently ignored. And how big is the smallest value that won't fit in a C integer? Well, that's up to the implementation.
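
The claim about silent overflow is easy to demonstrate (a minimal sketch; note that for signed types the behavior is actually undefined, which is arguably worse than silence):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int u = UINT_MAX;
    u = u + 1;      /* wraps around to 0, with no diagnostic */
    printf("UINT_MAX + 1 = %u\n", u);
    printf("INT_MAX on this implementation: %d\n", INT_MAX);
    return 0;
}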

Next, we get to the woefully inadequate C preprocessor. The preprocessor in C works by very simple token substitution -- it has no concept of the underlying structure of the code. One obvious example of the limitations of this setup is when you try adding control structures to the language. With Smalltalk, this is trivial -- blocks of code in Smalltalk can be passed as arguments, so any message call can be a control statement. In LISP, the preprocessor can be used to encode design patterns, greatly reducing the amount of code needed. C can just about handle simple inline-function equivalents.

The real problem with C, however, is that it's the standard language for UNIX systems. All system calls and common libraries expose C functions, because C is the lowest common denominator -- and C is very low. C was designed when the procedural paradigm was only just gaining acceptance, when Real Programmers used assembly languages and structured programming was something only people in universities cared about. If you want to create an object-oriented library on UNIX, you either expose it in the language in which it was written -- forcing other developers to choose the same language as you -- or you write a cumbersome wrapper in C. Hardly an ideal solution.

Making Wrong Code Look Wrong - Joel on Software

The main problem with C critics is not that C is perfect (it is far from perfect), but that the critics are ignorant. Joel rehashes old C warts without a real understanding of the solutions available. For example, indent is one of the simplest solutions to the "deceptive nesting" problem in C. The problem was present even in languages with better, more flexible code blocks, such as PL/1. PL/1 permits a label on each closing bracket in order to match the opening bracket; it also permits multiple block closures with a single labeled bracket, like
a: begin; ... begin; ... begin ... end a; /* end a closes all 3 blocks */
Anyway, here is his rant:

...As you get more proficient at writing code in a particular environment, you start to learn to see other things. Things that may be perfectly legal and perfectly OK according to the coding convention, but which make you worry.

For example, in C:

char* dest, src;

This is legal code; it may conform to your coding convention, and it may even be what was intended, but when you've had enough experience writing C code, you'll notice that this declares dest as a char pointer while declaring src as merely a char, and even if this might be what you wanted, it probably isn't. That code smells a little bit dirty.

Even more subtle:

if (i != 0)
foo(i);

In this case the code is 100% correct; it conforms to most coding conventions and there's nothing wrong with it, but the fact that the single-statement body of the if statement is not enclosed in braces may be bugging you, because you might be thinking in the back of your head, gosh, somebody might insert another line of code there

if (i != 0)
bar(i);
foo(i);

… and forget to add the braces, and thus accidentally make foo(i) unconditional! So when you see blocks of code that aren't in braces, you might sense just a tiny, wee, soupçon of uncleanliness which makes you uneasy.

OK, so far I've mentioned three levels of achievement as a programmer:

  1. You don't know clean from unclean.
  2. You have a superficial idea of cleanliness, mostly at the level of conformance to coding conventions.
  3. You start to smell subtle hints of uncleanliness beneath the surface and they bug you enough to reach out and fix the code.

There's an even higher level, though, which is what I really want to talk about:

4. You deliberately architect your code in such a way that your nose for uncleanliness makes your code more likely to be correct.

This is the real art: making robust code by literally inventing conventions that make errors stand out on the screen.

So now I'll walk you through a little example, and then I'll show you a general rule you can use for inventing these code-robustness conventions, and in the end it will lead to a defense of a certain type of Hungarian Notation, probably not the type that makes people carsick, though, and a criticism of exceptions in certain circumstances, though probably not the kind of circumstances you find yourself in most of the time.

But if you're so convinced that Hungarian Notation is a Bad Thing and that exceptions are the best invention since the chocolate milkshake and you don't even want to hear any other opinions, well, head on over to Rory's and read the excellent comix instead; you probably won't be missing much here anyway; in fact in a minute I'm going to have actual code samples which are likely to put you to sleep even before they get a chance to make you angry. Yep. I think the plan will be to lull you almost completely to sleep and then to sneak the Hungarian=good, Exceptions=bad thing on you when you're sleepy and not really putting up much of a fight.

An Example

Right. On with the example. Let's pretend that you're building some kind of a web-based application, since those seem to be all the rage with the kids these days.

Now, there's a security vulnerability called the Cross Site Scripting Vulnerability, a.k.a. XSS. I won't go into the details here: all you have to know is that when you build a web application you have to be careful never to repeat back any strings that the user types into forms.

So for example if you have a web page that says "What is your name?" with an edit box and then submitting that page takes you to another page that says, Hello, Elmer! (assuming the user's name is Elmer), well, that's a security vulnerability, because the user could type in all kinds of weird HTML and JavaScript instead of "Elmer" and their weird JavaScript could do narsty things, and now those narsty things appear to come from you, so for example they can read cookies that you put there and forward them on to Dr. Evil's evil site.

Let's put it in pseudocode. Imagine that

s = Request("name")

reads input (a POST argument) from the HTML form. If you ever write this code:

Write "Hello, " & Request("name")

your site is already vulnerable to XSS attacks. That's all it takes.

Instead you have to encode it before you copy it back into the HTML. Encoding it means replacing " with &quot;, replacing > with &gt;, and so forth. So

Write "Hello, " & Encode(Request("name"))

is perfectly safe.

All strings that originate from the user are unsafe. Any unsafe string must not be output without encoding it.

Let's try to come up with a coding convention that will ensure that if you ever make this mistake, the code will just look wrong. If wrong code, at least, looks wrong, then it has a fighting chance of getting caught by someone working on that code or reviewing that code.

V IDE

V IDE works with GNU g++, Borland C++ 5.5 and Java and runs on Windows and Linux. It includes a syntax highlighting editor for C/C++, Java, Perl, Fortran, TeX and HTML. It has a built-in code beautifier, macro support, ctags support, project manager, integrated support for the V applications generator and icon editor, integrated support for the GNU gdb and Sun's jdb (for Java), etc.

Slashdot Optimizations - Programmer vs. Compiler

Re:Clear Code (Score:5, Insightful)
by Rei (128717) on Friday February 25, @04:56PM (#11782241)
(http://www.cursor.org/)

An important lesson that I wish I had learned when I was younger ;) It is crazy to start optimizing before you know where your bottlenecks are. Don't guess - run a profiler. It's not hard, and you'll likely get some big surprises.

Another thing to remember is this: the compiler isn't stupid; don't pretend that it is. I had senior developers at an earlier job mad at me because I wasn't creating temporary variables for the limits of my loop indices (on unprofiled code, no less!). It took actually digging up an article on the net to show that all modern compilers hoist loop-invariant const references (be they array lengths, linked lists, const member functions, etc.) out of the loop, evaluating them once before it starts.
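A sketch of what that means in practice (my example, not the poster's): when the compiler can prove the loop limit invariant, hoisting it into a temporary by hand buys nothing.

#include <stddef.h>

/* These two functions typically compile to the same machine code:
   the compiler sees that v->len cannot change inside the loop and
   evaluates it once, so the manual temporary is redundant. */
struct vec { size_t len; int *data; };

long sum_plain(const struct vec *v)
{
    long s = 0;
    for (size_t i = 0; i < v->len; i++)  /* limit hoisted by the compiler */
        s += v->data[i];
    return s;
}

long sum_hand_hoisted(const struct vec *v)
{
    long s = 0;
    size_t n = v->len;                   /* manual hoist: same result     */
    for (size_t i = 0; i < n; i++)
        s += v->data[i];
    return s;
}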

Another example: function calls. I've heard some people insist that the way to speed up an inner loop is to manually inline the function bodies so that you don't pay function call overhead. No! Again, compilers will do this for you. As compilers were evolving, they added the "inline" keyword, which does this for you. Eventually, compilers got smart enough that they started inlining code on their own when not asked, and declining to inline functions marked "inline" when doing so would be inefficient. Due to coder pressure, at least one compiler that I read about added an "inline damnit" (or something to that effect) keyword to force inlining when you're positive that you know better than the compiler ;)
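For the record (my illustration, not the poster's): in GCC and Clang the "inline damnit" spelling is the always_inline attribute, and MSVC spells it __forceinline.

/* "inline" is only a hint; the compiler applies its own cost model.
   The attribute below overrides that model -- use it sparingly. */
static inline int clamp(int x, int lo, int hi)            /* a hint   */
{
    return x < lo ? lo : x > hi ? hi : x;
}

__attribute__((always_inline))
static inline int clamp_forced(int x, int lo, int hi)     /* an order */
{
    return x < lo ? lo : x > hi ? hi : x;
}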

Once again, the compiler isn't stupid. If an optimization seems "obvious" to you, odds are pretty good that the compiler will take care of it. Go for the non-obvious optimizations. Can you remove a loop from a nested set of loops by changing how you're representing your data? Can you replace a hack that you made with standard library code (which tends to be optimized like crazy)? Etc. Don't start caching dereferenced values by hand, manually inlining function calls, or things like this. The compiler will do this for you.

If possible, work with the compiler to help it. Use "restrict". Use "const". Give it whatever clues you can.
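A small example of handing the compiler those clues (mine, not the poster's):

#include <stddef.h>

/* "restrict" promises that dst and src never overlap, so the compiler
   may vectorize and reorder loads and stores freely; "const" promises
   that src is never written through this pointer. */
void scale_add(float *restrict dst, const float *restrict src,
               float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += k * src[i];
}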

Time to post the famous Knuth quote... (Score:5, Informative)
by xlv (125699) on Friday February 25, @03:50PM (#11781223)
(http://dumbasastone.com/ | Last Journal: Thursday September 23, @10:06PM)

Donald Knuth wrote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

Write C for C programmers (Score:5, Insightful)
by swillden (191260) * on Friday February 25, @03:55PM (#11781306)

With regard to your example, I can't imagine any modern compiler wouldn't treat the two as equivalent.

However, in your example, I actually prefer "if (!ptr)" to "if (ptr == NULL)", for two reasons. First, the latter is more error-prone, because you can accidentally end up with "if (ptr = NULL)". One common solution to avoid that problem is to write "if (NULL == ptr)", but that just doesn't read well to me. Another is to turn on warnings and let your compiler point out code like that -- but that assumes a decent compiler.

The second, and more important, reason is that to anyone who's been writing C for a while, the compact representation is actually clearer because it's an instantly-recognizable idiom. To me, parsing the "ptr == NULL" format requires a few microseconds of thought to figure out what you're doing. "!ptr" requires none. There are a number of common idioms in C that are strange-looking at first, but soon become just another part of your programming vocabulary. IMO, if you're writing code in a given language, you should write it in the style that is most comfortable to other programmers in that language. I think proper use of idiomatic expressions *enhances* maintainability. Don't try to write Pascal in C, or Java in C++, or COBOL in, well, anything, but that's a separate issue :-)
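Both of the parent's points in one sketch (my example):

void handle(int *ptr)
{
    /* The pitfall: "if (ptr = NULL)" compiles, silently nulls ptr,
       and the branch body never runs. With -Wall, GCC and Clang warn:
       "suggest parentheses around assignment used as truth value". */

    if (!ptr)         /* the idiom: instantly recognizable, */
        return;       /* and there is nothing to mistype    */

    *ptr = 42;        /* safe: ptr is known non-null here   */
}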

Oh, and my answer to your more general question about whether or not you should try to write code that is easy for the compiler... no. Don't do that. Write code that is clear and readable to programmers and let the compiler do what it does. If profiling shows that a particular piece of code is too slow, then figure out how to optimize it, whether by tailoring the code, dropping down to assembler, or whatever. But not before.

Check out the LLVM demo page (Score:5, Interesting)
by sabre (79070) on Friday February 25, @03:58PM (#11781354)
(http://www.nondot.org/~sabre/)

LLVM is an aggressive compiler that is able to do many cool things. Best yet, it has a demo page here: http://llvm.org/demo [llvm.org], where you can try two different things and see how they compile.

One of the nice things about this is that the code is printed in a simple abstract assembly language that is easy to read and understand.

The compiler itself is very cool too btw, check it out.

If you're not willing to TIME it... (Score:4, Insightful)
by dpbsmith (263124) on Friday February 25, @04:30PM (#11781872)
(http://world.std.com/~dpbsmith)

...then the code isn't important enough to optimize. Plain and simple.

Never try to optimize anything unless you have measured the speed of the code before optimizing and have measured it again after optimizing.

Optimized code is almost always harder to understand, contains more possible code paths, and is more likely to contain bugs than the most straightforward code. It's only worth it if it's really faster...

And you simply cannot tell whether it's faster unless you actually time it. It's absolutely mindboggling how often a change you are certain will speed up the code has no effect, or a truly negligible effect, or slows it down.

This has always been true. In these days of heavily optimized compilers and complex CPUs that are doing branch prediction and God knows what all, it is truer than ever. You cannot tell whether code is fast just by glancing at it. Well, maybe there are processor gurus who can accurately visualize the exact flow of all the bits through the pipeline, but I'm certainly not one of them.

A corollary is that since the optimized code is almost always trickier, harder to understand, and often contains more logic paths than the most straightforward code, you shouldn't optimize unless you are committed to spending the time to write a careful unit-test fixture that exercises everything tricky you've done, and write good comments in the code.
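A minimal before/after timing harness along those lines (a sketch; assumes POSIX clock_gettime and a stand-in workload):

#include <stdio.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static long work(void)             /* stand-in for the code under test */
{
    long s = 0;
    for (long i = 0; i < 100000000L; i++)
        s += i % 7;
    return s;
}

int main(void)
{
    /* Time the code before the change and again after it, and
       believe only the numbers -- not your intuition. */
    double t0 = now_seconds();
    long r = work();
    double t1 = now_seconds();
    printf("result=%ld elapsed=%.3f s\n", r, t1 - t0);
    return 0;
}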

Premature Optimization (Score:5, Insightful)
by fizban (58094) <[email protected]> on Friday February 25, @05:06PM (#11782376)
(http://www.sophicstudios.com/)

Premature Optimization is the DEVIL! I repeat, it is the gosh darn DEVIL! Don't do it. Write clear code so that I don't have to spend days trying to figure out what you are trying to do.

The biggest mistake I see in my professional (and unprofessional) life is programmers who try to optimize their code in all sorts of "733+" ways, trying to "trick" the compiler into removing 1 or 2 lines of assembly, yet completely disregarding that they are using a map instead of a hash_map, or doing a linear search when they could do a binary search, or doing the same lookup multiple times when they could do it just once. It's just silly, and goes to show that lots of programmers don't know how to optimize effectively.

Compilers are good. They optimize code well. Don't try to help them out unless you know your code has a definite bottleneck in a tight loop that needs hand tuning. Focus on using correct algorithms and designing your code from a high level to process data efficiently. Write your code in a clear and easy to read manner, so that you or some other programmer can easily figure out what's going on a few months down the line when you need to add fixes or new functionality. These are the ways to build efficient and maintainable systems, not by writing stuff that you could enter in an obfuscated code contest.
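The linear-vs-binary point above in C (my sketch): both functions return the same answer, but only one of them scales, and no compiler will make this change for you.

#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n): fine for tiny arrays, a bottleneck for large ones. */
const int *find_linear(const int *a, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return &a[i];
    return NULL;
}

/* O(log n) on a sorted array, via the (heavily optimized) standard
   library routine bsearch. */
const int *find_binary(const int *a, size_t n, int key)
{
    return bsearch(&key, a, n, sizeof *a, cmp_int);
}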

valgrind (Score:4, Informative)
by cyco/mico (178541) on Friday February 25, @05:12PM (#11782431)

If in doubt, use valgrind and kcachegrind [sourceforge.net]. One run with callgrind gives you all the information you want.

callgrind/kcachegrind is by far the easiest profiling solution I ever tried, and it seems to answer more or less all of your questions.

Rules for writing fast code (aka optimization) (Score:4, Insightful)
by MSBob (307239) on Friday February 25, @05:56PM (#11782860)

First: Avoid doing what you don't have to do. Sounds obvious but I rarely see code that does the absolute minimum it needs to. Most of the code I've seen to date seems to precalculate too much stuff, read too much data from external storage, redraw too much stuff on screen etc...

Second: Do it later. There are thousands of situations where you can postpone the actual computation. Imagine writing a Matrix class with an invert() method. You can postpone calculating the inverse until there is a call to access one of the fields of the inverted matrix, and even then calculate only the field being accessed. Or, past some sensible threshold, you may assume that the user code will read the entire inverted matrix and just calculate the remaining inverted fields... the options are endless.

Most string class implementations already make good use of this rule by copying their buffers only when the "copied" buffer changes.
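The "do it later" idea in miniature (my sketch, not a full lazy matrix): compute on first access, remember the result.

#include <stdbool.h>

struct lazy_val {
    bool   computed;   /* has the expensive work been done yet? */
    double value;      /* valid only once computed is true      */
};

static double expensive_compute(void) { return 42.0; /* stand-in */ }

double lazy_get(struct lazy_val *lv)
{
    if (!lv->computed) {                  /* first access: do the work */
        lv->value = expensive_compute();
        lv->computed = true;
    }
    return lv->value;                     /* later accesses are free   */
}

A lazy invert() would apply the same flag-and-fill idea per matrix field, inverting only what the caller actually reads.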

Third: Apply minimum algorithmic complexity. If you can use a hashmap instead of a treemap, use the hash version: it's O(1) vs. O(log n). Use quicksort for just about any kind of sorting you need to do.

Fourth: Cache your data. Download or buy a good caching class, or use facilities your language provides (e.g., Java's SoftReference class) for basic caching. There are some enormous performance gains that can be realized with smart caching strategies.

Fifth: Optimize using your language constructs. Use the register keyword, use language idioms that you know compile into faster code, etc... Scratch this rule! If you're applying rules one to four you can forget about this one and still have fast AND readable code.
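And rule four in miniature (my sketch; a real caching layer adds eviction and invalidation): a tiny direct-mapped cache in front of an expensive lookup.

#include <stdbool.h>

#define CACHE_SLOTS 256

struct slot { bool used; int key; long value; };
static struct slot cache[CACHE_SLOTS];

extern long expensive_lookup(int key);   /* stand-in for db/disk/net */

long cached_lookup(int key)
{
    struct slot *s = &cache[(unsigned)key % CACHE_SLOTS];
    if (!s->used || s->key != key) {     /* miss: fetch and remember */
        s->value = expensive_lookup(key);
        s->key   = key;
        s->used  = true;
    }
    return s->value;                     /* hit: no expensive call   */
}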

The never overused example that I have (Score:4, Informative)
by roman_mir (125474) on Friday February 25, @05:56PM (#11782867)
(http://slashdot.org/ | Last Journal: Monday December 08, @11:44AM)

I got this job as a contractor 4 years ago now, on a project developed by over 30 junior developers and one crazy overpaid lady (hey, Julia) who wouldn't let people touch her code, so fragile it was (and it was the main action executor); she would rather fight you for hours than make one change in the code (she left 2 months before the project release). Now, I have never witnessed such a monstrosity of a code base before - the business rules were redefined about once every 2 weeks for 1.5 years straight. You can imagine.

So, the client decided not to pay the last million dollars because the performance was total shit. On a weblogic cluster of 2 Sun E45s they could only achieve 12 concurrent transactions per second. So the client decided they really did not want to pay and asked us to make it at least 200 concurrent transactions per second on the same hardware. If I may make a wild guess, I would say the client really did not want to pay the last million, no matter what, so they upped the numbers a bit from what they needed. But anyway.

Myself and another developer (hi, Paul) spent 1.5 months on it - removing unnecessary db calls (the app was incremental - every page would ask you more questions that needed to be stored - but the app would store all questions from all pages every time), caching XML DOM trees instead of reparsing them on every request, removing most of the session object (reducing it from 1 MB to about 8 KB), removing some totally unnecessary and bizarre code (the app still worked), desynchronizing some of the calls with a message queue, etc.

In the end the app was doing over 320 concurrent transactions per second. The company got its last million.

The lesson? Build software that is really unoptimized first and then save everyone's ass by optimizing this piece of shit and earn total admiration of the management - you are a miracle worker now.

The reality? Don't bother trying to optimize code when the business requirements are constantly changing, the management has no idea how to manage an IT dep't, the coders are such noobs that there is a scent of freshness in the air, and there is a crazy deadline right in front of you. Don't optimize up front; if performance becomes an issue, optimize then.

Highlight UnMatched Brackets - Capture those unmatched brackets while u r still in insert-mode vim online

It's really irksome when your compiler complains about an unmatched "{" or "(" or "[". With this plugin you can highlight all those unmatched "{" or "(" or "[" as you type. This helps you keep track of where the exact closing bracket should go. This plugin also warns you of any extra "}" or ")" or "]" you typed.

Customization:
- Specifying Additional Bracket-pairs.
User can specify additional matching pairs in
the global option 'matchpairs', see :help 'matchpairs'
For example: set mps+=<:> (to include the <> pair).
Put the above setting in your .vimrc file and restart vim.
- To get rid of highlighting when you quit insert
mode, add this mapping in your vimrc
noremap! <Esc> <Esc>:match NONE<CR>

To test how this plugin works type something like
{
( ) [ ]
( ( ( ) ) )
}

Happy vimming.

ShowFunc.vim - Creates a list of all tags - functions from a window, all windows or buffers. vim online

This script creates a hyperlink list of all the tags (i.e. functions, subroutines, classes, macros or procedures) from a single buffer, all open windows or buffers and displays them in a dynamically sized cwindow.

Supported file types with Exuberant Ctags version 5.5.4 (and newer): Asm, Asp, Awk, Beta, C, C++, C#, Cobol, Eiffel, Erlang, Fortran, Java, JavaScript, Lisp, Lua, Make, Pascal, Perl, PHP, Python, PL/SQL, REXX, Ruby, Scheme, Shell, SLang, SML, SQL, Tcl, Vera, Verilog, Vim, YACC... and any user-defined (i.e. --regex-lang=) types.

Default Key Mappings:
<F1> Run scan and open cwindow.

To reassign add the following to your .vimrc:
map NewKey <Plug>ShowFunc
map! NewKey <Plug>ShowFunc
For example to change the <F1> mapping to <F7>
map <F7> <Plug>ShowFunc
map! <F7> <Plug>ShowFunc

ShowFunc Window commands:
c Close cwindow.
h Display help dialog.
r Refresh.
s Change file sort, results will appear in either alphabetical or file order. (Default: file order)
t Change scan type, results will be from either the current file, all open windows or all open buffers. (Default: all open buffers)

install details

Put this file in the vim plugins directory (~/.vim/plugin/) to load it automatically, or load it with :so ShowFunc.vim.

You need Exuberant CTags installed for this function to work.
Website: http://ctags.sourceforge.net/
Source: http://prdownloads.sourceforge.net/ctags/ctags-5.5.4.tar.gz
Redhat/Fedora RPM: http://prdownloads.sourceforge.net/ctags/ctags-5.5.4-1.i386.rpm
Debian: apt-get install exuberant-ctags

vimcommander - totalcommander-like two-panel tree file explorer for vim vim online

This is an adaptation of opsplorer (vimscript #362), intended to be more like the Total Commander (http://www.ghisler.com) file explorer.

This opens two panels of file explorers on the top half of the vim screen.

Targets for moving and copying default to the other panel, like totalcmd. TAB switches between panels.
Vimcommander keys are mostly totalcommander's:

F3 - view
F4 - edit
F5 - copy
F6 - move
F7 - create dir
F8 - del
Others: C-U, C-Left/C-Right, C-R, BS, DEL, C-H, etc.
Selection of files/dirs also works: INS, +, -. Then copy/move/del selected files.

Suggested binding is
noremap <silent> <F11> :cal VimCommanderToggle()<CR>

install details

Drop vimcommander.vim in ~/.vim/plugin
Put in your .vimrc a map to VimCommanderToggle():
noremap <silent> <F11> :cal VimCommanderToggle()<CR>

c.vim - Write C-C++ programs by inserting statements, idioms and comments. vim online

** Statement oriented editing of C / C++ programs
** Speed up writing new code considerably.
** Write code and comments with a professional appearance from the beginning.
** Use code snippets

- insertion of various types of comments (file prologue, function descriptions, file section headers,
keyword comments, date, time, ... )
- insertion of empty control statements (if-else, while, do-while, switch, ... )
- insertion of various preprocessor directives
- insertion of C-idioms (enum+typedef, loops, complete main, empty function, file open dialogs, ... )
- insertion of C++ -idioms ( frames for simple classes and template classes, try-catch blocks,
file open dialogs, output manipulators, ios flags, ... )
- compile / link / run support for one-file projects (without a makefile)
- personalization of comments (name, email, ... )
- menus can be switched on and off (Tools menu)

[Jan 2, 2005] CRefVim - a C-reference manual especially designed for Vim vim online

The intention of this project is to provide a C-reference manual that can be viewed with Vim and accessed from within Vim. It is a reference, NOT a tutorial or a guide on how to write C programs: a quick reference to the functions and syntax of the standard C language.

[Jan 2, 2005] C-fold - Automates folding and unfolding C & C++ comments and code blocks. vim online

Automatically folds all blocks (i.e. { } ) in C and C++ and defines a function that performs a command e.g. zo on all folds beginning with text that matches a given pattern.

This allows for the following mappings defined by the plugin:

z[ - Opens all doxygen-style comments
z] - Closes all doxygen-style comments
z{ - Opens all code blocks (i.e. { })
z} - Closes all code blocks

install details

Extract the archive in your home directory. This will extract the following files:
.vim/plugins/cfold.vim
.vim/after/syntax/c.vim

Folding of Doxygen-style comments is also required; this needs vimscript #5. It can be enabled easily by adding the 'fold' keyword to the end of the 'doxygenComment' region in the 'doxygen.vim' syntax file:

syn region doxygenComment start= ... keepend fold

Additional languages can be supported as appropriate (e.g. Java) by copying 'c.vim' and renaming it to the syntax file for the language (e.g. java.vim).

Linux Online - The Linux Tips HOWTO Short Tips

I do a lot of C programming in my spare time, and I've taken the time to rig vi to be C friendly. Here's my .exrc:
set autoindent
set shiftwidth=4
set backspace=2
set ruler

What does this do? autoindent causes vi to automatically indent each line following the first one indented, shiftwidth sets the distance of ^T to 4 spaces, backspace sets the backspace mode, and ruler makes it display the cursor position. Remember, to go to a specific line number, say 20, use:

vi +20 myfile.c

2.18 Using ctags to ease programming.

Most hackers already have ctags on their computers, but don't use it. It can be very handy for editing specific functions. Suppose you have a function in one of the many source files in a directory for a program you're writing, and you want to edit this function for updates. We'll call this function foo(). You don't know where it is in the source files, either. This is where ctags comes in handy. When run, ctags produces a file named tags in the current dir, which is a listing of all the functions, which files they're in, and where they are in said files. The tags file looks like this:

ActiveIconManager       iconmgr.c       /^void ActiveIconManager(active)$/
AddDefaultBindings      add_window.c    /^AddDefaultBindings ()$/
AddEndResize    resize.c        /^AddEndResize(tmp_win)$/
AddFuncButton   menus.c /^Bool AddFuncButton (num, cont, mods, func, menu, item)$/
AddFuncKey      menus.c /^Bool AddFuncKey (name, cont, mods, func, menu, win_name, action)$/
AddIconManager  iconmgr.c       /^WList *AddIconManager(tmp_win)$/
AddIconRegion   icons.c /^AddIconRegion(geom, grav1, grav2, stepx, stepy)$/
AddStartResize  resize.c        /^AddStartResize(tmp_win, x, y, w, h)$/
AddToClientsList        workmgr.c       /^void AddToClientsList (workspace, client)$/
AddToList       list.c  /^AddToList(list_head, name, ptr)$/

To edit, say AddEndResize() in vim, run:

vim -t AddEndResize
This will bring the appropriate file up in the editor, with the cursor located at the beginning of the function.

[Jan 2, 2005] C-editing-with-VIM-HOWTO. See also Ctags code browsing framework

3.1. ctags

A Tag is a sort of placeholder. Tags are very useful in understanding and
editing C. Tags are a set of book-marks to each function in a C file. Tags
are very useful in jumping to the definition of a function from where it is
called and then jumping back.

Take the following example.


Figure 6. Tags Example (figure not reproduced here)

Let's say that you are editing the function foo() and you come across the
function bar(). Now, to see what bar() does, one makes use of Tags. One can
jump to the definition of bar() and then jump back later. If need be, one
can jump to another function called within bar() and back.

To use Tags one must first run the program ctags on all the source files.
This creates a file called tags. This file contains pointers to all the
function definitions and is used by VIM to take you to the function
definition.

The actual keystrokes for jumping to and fro are CTRL-] and CTRL-T. Hitting
CTRL-] in foo() at the place where bar() is called takes the cursor to the
beginning of bar(). One can jump back from bar() to foo() by just hitting
CTRL-T.

ctags is invoked as
$ ctags options file(s)


To make a tags file from all the *.c files in the current directory all one
needs to say is
$ ctags *.c


In the case of a source tree which contains C files in different
subdirectories, one can call ctags in the root directory of the source tree
with the -R option, and a tags file containing Tags for all functions in the
source tree will be created. For example:
$ ctags -R


There are many other options to use with ctags. These options are explained
in the man page for ctags.
-----------------------------------------------------------------------------

3.2. marks

Marks are place-holders like Tags. However, marks can be set at any point
in a file and are not limited to functions, enums, etc. Also, marks have to
be set manually by the user.

By setting a mark there is no visible indication of the same. A mark is just
a position in a file which is remembered by VIM. Consider the following code


Figure 7. The marks example (figure not reproduced here)

Suppose you are editing the line x++; and you want to come back to that line
after editing some other line. You can set a mark on that line with the
keystroke m' and come back to the same line later by hitting ''.

VIM allows you to set more than one mark. These marks are stored in registers
a-z, A-Z and 1-0. To set a mark and store the same in a register say j, all
one has to hit is mj. To go back to the mark one has to hit 'j.

Multiple marks are really useful in going back and fro within a piece of
code. Taking the same example, one might want one mark at x++; and another at
y=x; and jump between them or to any other place and then jump back.

Marks can span across files. To use such marks one has to use upper-case
registers i.e. A-Z. Lower-case registers are used only within files and do
not span files. That's to say, if you were to set a mark in a file foo.c in
register "a" and then move to another file and hit 'a, the cursor will not
jump back to the previous location. If you want a mark which will take you to
a different file then you will need to use an upper-case register. For
example, use mA instead of ma. I'll talk about editing multiple files in a
later section.
-----------------------------------------------------------------------------

3.3. gd keystroke

Consider the following piece of code.


Figure 8. The third example (figure not reproduced here)

For some reason you've forgotten what y and z are and want to go to their
declarations double quick. One way of doing this is by searching backwards
for y or z. VIM offers a simpler and quicker solution. The gd keystroke
stands for Goto Declaration. With the cursor on "y", hitting gd takes you
to the declaration: struct Y y;.

A similar keystroke is gD. This takes you to the global declaration of the
variable under the cursor. So if one wants to go to the declaration of x,
all one needs to do is hit gD and the cursor will move to the declaration
of x.

Recommended Links

MS Visual C

The best way to buy it is to buy one of the books with a CD. This way you can get Visual C++ 6.0 Teaching Edition for less than $50.


Turbo C/Borland C

The cheapest way to get the compiler is to buy one of the books with the compiler on CD.


EiC

EiC -- EiC is a freely available C language interpreter in both source and binary form. EiC allows you to write C programs, and then "execute" them as if they were a script (like a Perl script or a shell script). You can even embed EiC in your own programs, allowing your application to have a "scripting" language that is syntactically equivalent to C. It is also possible to let an EiC "script" call compiled library code and for compiled code to make callbacks to EiC user defined functions.

[Jul 8, 2000] Linux Magazine: EiC: I Can C Clearly NOW

"Edmond Breen's EiC (Embeddable/Extensible Interactive C) is an open source program that provides one of the most complete and well-designed language interpreters we've ever seen."


C critique

Programming Language Critiques -- collection of papers. Not maintained

The Case Against C, P. J. Moylan, Technical Report EE9240, Department of Electrical and Computer Engineering, The University of Newcastle, July 1992. (See also Moylan, The case against C.) The author raises some interesting points, but generally he does not understand that flexibility and quality of implementation are an integral part of the quality of a language, and that efficiency still matters in libraries and other products. Pascal was a nice introductory programming language, but a really horrible system programming language because of its fundamentalist typing. Modula is an improvement, but it still might be that Wirth was barking up the wrong tree ;-). His critique of pointers is simply naive (the "goto of data structures" ;-)

It is not my intention in this note to debate the relative merits of procedural (e.g. Pascal or C), functional (e.g. Lisp), and declarative languages (e.g. Prolog). That is a separate issue. My intention, rather, is to urge people using a procedural language to give preference to high-level languages.

In what follows, I shall be using Modula-2 as an example of a modern programming language. This is simply because it is a language about which I can talk intelligently. I am not suggesting that Modula-2 is perfect; but it at least serves to illustrate that there are languages which do not have the failings of C.

Why C remains popular

With advances in compiler technology, the original motivation for designing medium-level languages - namely, object code efficiency - has largely disappeared. Most other machine-oriented languages which appeared at about the same time as C are now considered to be obsolete. Why, then, has C survived?

There is of course a belief that C is more appealing to the "macho" side of programmers, who enjoy the challenge of struggling with obscure bugs and of finding obscure and tricky ways of doing things.

The conciseness of C code is also a popular feature. C programmers seem to feel that being able to write a statement like


**p++^=q++=*r---s 

is a major argument in favour of using C, since it saves keystrokes. A cynic might suggest that the saving will be offset by the need for additional comments, but a glance at some typical C programs will show that comments are also considered to be a waste of keystrokes, even among so-called professional programmers.

... ... ...

Another important factor is that initial program development is perceived to be faster in C than in a more structured language. (I don't agree with this belief, and will return later to this point.) The general perception is that a lot of forward planning is necessary in a language like Modula-2, whereas with C one can sit down and start coding immediately, giving more immediate gratification.

Do these reasons look familiar? Yes, they are almost identical to the arguments which were being trotted out a few years ago in favour of BASIC. Could it be that the current crop of C programmers are the same people who were playing with toy computers as adolescents? We said at the time that using BASIC as a first language would create bad habits which would be very difficult to eradicate. Now we're seeing the evidence of that.

Nothing in this document should be interpreted as a criticism of the original designers of C. I happen to believe that the language was an excellent invention for its time. I am simply suggesting that there have been some advances in the art and science of software design since that time, and that we ought to be taking advantage of them.

I am not so naive as to expect that diatribes such as this will cause the language to die out. Loyalty to a language is very largely an emotional issue which is not subject to rational debate. I would hope, however, that I can convince at least some people to re-think their positions.

I recognise, too, that factors other than the inherent quality of a language can be important. Compiler availability is one such factor. Re-use of existing software is another; it can dictate the continued use of a language even when it is clearly not the best choice on other grounds. (Indeed, I continue to use the language myself for some projects, mainly for this reason.) What we need to guard against, however, is making inappropriate choices through simple inertia.


