
GCC was and still is the main programming achievement of RMS. I think that the period when RMS wrote gcc by reusing the Pastel compiler developed at the Lawrence Livermore Lab was the most productive part of his career as a programmer, much more important than his Emacs efforts, although he is still fanatically attached to Emacs.

He started writing the compiler when he was almost thirty years old, that is, at the time when his programming abilities had started to decline. Therefore gcc can be considered his "swan song" as a programmer. The prototype that he used was the Pastel compiler developed at the Lawrence Livermore Lab. Here is his version of the history of GCC's creation:

Shortly before beginning the GNU project, I heard about the Free University Compiler Kit, also known as VUCK. (The Dutch word for "free" is written with a V.) This was a compiler designed to handle multiple languages, including C and Pascal, and to support multiple target machines. I wrote to its author asking if GNU could use it.

He responded derisively, stating that the university was free but the compiler was not. I therefore decided that my first program for the GNU project would be a multi-language, multi-platform compiler.

Hoping to avoid the need to write the whole compiler myself, I obtained the source code for the Pastel compiler, which was a multi-platform compiler developed at Lawrence Livermore Lab. It supported, and was written in, an extended version of Pascal, designed to be a system-programming language. I added a C front end, and began porting it to the Motorola 68000 computer. But I had to give that up when I discovered that the compiler needed many megabytes of stack space, and the available 68000 Unix system would only allow 64k.

I then realized that the Pastel compiler functioned by parsing the entire input file into a syntax tree, converting the whole syntax tree into a chain of "instructions", and then generating the whole output file, without ever freeing any storage. At this point, I concluded I would have to write a new compiler from scratch. That new compiler is now known as GCC; none of the Pastel compiler is used in it, but I managed to adapt and use the C front end that I had written.

Of course there were collaborators, because RMS seems to be far from the cutting edge of compiler technology and never wrote any significant paper about compiler construction. But still he was the major driving force behind the project, and its success was to a large extent due to his personal efforts.

From the political standpoint it was a very bright idea: a person who controls the compiler has enormous influence on everybody who uses it. Although at the beginning it was just a C compiler, eventually the "after-Stallman" GCC (GNU cc) became one of the most flexible, most powerful and most portable compilers for C, C++, Fortran and (now) even Java. Together with its libraries, it constitutes a powerful development platform that makes it possible to write code portable to almost any computer platform you can imagine, from handhelds to supercomputers.

The first version of GCC seems to have been finished around 1985, when RMS was 32 years old. Here is the first mention of gcc in the GNU Manifesto:

So far we have an Emacs text editor with Lisp for writing editor commands, a source level debugger, a yacc-compatible parser generator, a linker, and around 35 utilities. A shell (command interpreter) is nearly completed. A new portable optimizing C compiler has compiled itself and may be released this year. An initial kernel exists but many more features are needed to emulate Unix. When the kernel and compiler are finished, it will be possible to distribute a GNU system suitable for program development. We will use TeX as our text formatter, but an nroff is being worked on. We will use the free, portable X window system as well. After this we will add a portable Common Lisp, an Empire game, a spreadsheet, and hundreds of other things, plus on-line documentation. We hope to supply, eventually, everything useful that normally comes with a Unix system, and more.

GNU will be able to run Unix programs, but will not be identical to Unix. We will make all improvements that are convenient, based on our experience with other operating systems. In particular, we plan to have longer file names, file version numbers, a crashproof file system, file name completion perhaps, terminal-independent display support, and perhaps eventually a Lisp-based window system through which several Lisp programs and ordinary Unix programs can share a screen. Both C and Lisp will be available as system programming languages. We will try to support UUCP, MIT Chaosnet, and Internet protocols for communication.

Compilers take a long time to mature. The first more or less stable release seems to be 1.17 (January 9, 1988). That was pure luck, as in 1989 the Emacs/XEmacs split started, which consumed most of his energy. In GCC development RMS used a kind of Microsoft "embrace and extend" policy: extensions to the C language were enabled by default.

It looks like RMS personally participated in the development of the compiler at least until the GCC/egcs split (i.e., 1997). Being a former compiler writer myself, I can attest that it is physically challenging to run such a project for more than ten years, even if you are mostly a manager.

Development was not without problems, with Cygnus emerging as an alternative force. Cygnus, the first commercial company devoted to providing commercial support for GNU software, and especially for the GCC compiler, was co-founded by Michael Tiemann in 1989, and Tiemann became a major driving force behind GCC from then on.

In 1997 RMS was so tired of his marathon that he just wanted the compiler to be stable, because it was the most important publicity vehicle for the FSF. But with the GNU license you cannot stop: it enforces the law of the jungle, and the strongest takes it all. The interest of other, fresher and more motivated developers (as well as RMS' personal qualities, first demonstrated in the Emacs/XEmacs saga) led to a painful fork:

Subject: A new compiler project to merge the existing GCC forks

A bunch of us (including Fortran, Linux, Intel and RTEMS hackers) have
decided to start a more experimental development project, just like
Cygnus and the FSF started the gcc2 project about 6 years ago.  Only
this time the net community with which we are working is larger!  We
are calling this project 'egcs' (pronounced 'eggs').

Why are we doing this?  It's become increasingly clear in the course
of hacking events that the FSF's needs for gcc2 are at odds with the
objectives of many in the community who have done lots of hacking and
improvement over the years.  GCC is part of the FSF's publicity for the
GNU project, as well as being the GNU system's compiler, so stability
is paramount for them.  On the other hand, Cygnus, the Linux folks,
the pgcc folks, the Fortran folks and many others have done
development work which has not yet gone into the GCC2 tree despite
years of efforts to make it possible.

This situation has resulted in a lot of strong words on the gcc2
mailing list which really is a shame since at the heart we all want
the same thing: the continued success of gcc, the FSF, and Free
Software in general.  Apart from ill will, this is leading to great
divergence which is increasingly making it harder for us all to work
together -- It is almost as if we each had a proprietary compiler!
Thus we are merging our efforts, building something that won't damage
the stability of gcc2, so that we can have the best of both worlds.

As you can see from the list below, we represent a diverse collection
of streams of GCC development.  These forks are painful and waste
time; we are bringing our efforts together to simplify the development
of new features.  We expect that the gcc2 and egcs communities will
continue to overlap to a great extent, since they're both working on
GCC and both working on Free Software.  All code will continue to be
assigned to the FSF exactly as before and will be passed on to the
gcc2 maintainers for ultimate inclusion into the gcc2 tree.

Because the two projects have different objectives, there will be
different sets of maintainers.  Provisionally we have agreed that Jim
Wilson is to act as the egcs maintainer and Jason Merrill as the
maintainer of the egcs C++ front end.  Craig Burley will continue to
maintain the Fortran front end code in both efforts.

What new features will be coming up soon?  There is such a backlog of
tested, un-merged-in features that we have been able to pick a useful
initial set:

    New alias analysis support from John F. Carr.
    g77 (with some performance patches).
    A C++ repository for G++.
    A new instruction scheduler from IBM Haifa.
    A regmove pass (2-address machine optimizations that in future
                    will help with compilation for the x86 and for now
                    will help with some RISC machines).

This will use the development snapshot of 3 August 97 as its base --
in other words we're not starting from the 18 month old gcc-2.7
release, but from a recent development snapshot with all the last 18
months' improvements, including major work on G++.

We plan an initial release for the end of August.  The second release
will include some subset of the following:
  global cse and partial redundancy elimination.
  live range splitting.
  More features of IBM Haifa's instruction scheduling,
      including software pipelineing, and branch scheduling.
  sibling call opts.
  various new embedded targets.
  Further work on regmove.
The egcs mailing list at will be used to discuss and
prioritize these features.

How to join: send mail to egcs-request at  That list is
under majordomo.

We have a web page that describes the various mailing lists and has
this information at:

Alternatively, look for these releases as they spread through other
projects such as RTEMS, Linux, etc.

Come join us!
  David Henkel-Wallace
 (for the egcs members, who currently include, among others):
      Per Bothner
      Joe Buck
      Craig Burley
      John F. Carr
      Stan Cox
      David Edelsohn
      Kaveh R. Ghazi
      Richard Henderson
      David Henkel-Wallace
      Gordon Irlam
      Jakub Jelinek
      Kim Knuttila
      Gavin Koch
      Jeff Law
      Marc Lehmann
      H.J. Lu
      Jason Merrill
      Michael Meissner
      David S. Miller
      Toon Moene
      Jason Molenda
      Andreas Schwab
      Joel Sherrill
      Ian Lance Taylor
      Jim Wilson

After version 2.8.1, GCC development split into FSF GCC on the one hand and Cygnus EGCS on the other. The first EGCS version (1.0.0) was released by Cygnus on December 3, 1997, and that instantly put the FSF version on the back burner:

March 15, 1999
egcs-1.1.2 is released.
March 10, 1999
Cygnus donates improved global constant propagation and lazy code motion optimizer framework.
March 7, 1999
The egcs project now has additional online documentation.
February 26, 1999
Richard Henderson of Cygnus Solutions has donated a major rewrite of the control flow analysis pass of the compiler.
February 25, 1999
Marc Espie has donated support for OpenBSD on the Alpha, SPARC, x86, and m68k platforms. Additional targets are expected in the future.
January 21, 1999
Cygnus donates support for the PowerPC 750 processor. The PPC750 is a 32bit superscalar implementation of the PowerPC family manufactured by both Motorola and IBM. The PPC750 is targeted at high end Macs as well as high end embedded applications.
January 18, 1999
Christian Bruel and Jeff Law donate improved local dead store elimination.
January 14, 1999
Cygnus donates support for Hypersparc (SS20) and Sparclite86x (embedded) processors.
December 7, 1998
Cygnus donates support for demangling of HP aCC symbols.
December 4, 1998
egcs-1.1.1 is released.
November 26, 1998
A database with test results is now available online, thanks to Marc Lehmann.
November 23, 1998
egcs now can dump flow graph information usable for graphical representation. Contributed by Ulrich Drepper.
November 21, 1998
Cygnus donates support for the SH4 processor.
November 10, 1998
An official steering committee has been formed. Here is the original announcement.
November 5, 1998
The third snapshot of the rewritten libstdc++ is available. You can read some more on
October 27, 1998
Bernd Schmidt donates localized spilling support.
September 22, 1998
IBM Corporation delivers an update to the IBM Haifa instruction scheduler and new software pipelining and branch optimization support.
September 18, 1998
Michael Hayes donates c4x port.
September 6, 1998
Cygnus donates Java front end.
September 3, 1998
egcs-1.1 is released.
August 29, 1998
Cygnus donates Chill front end and runtime.
August 25, 1998
David Miller donates rewritten sparc backend.
August 19, 1998
Mark Mitchell donates load hoisting and store sinking support.
July 15, 1998
The first snapshot of the rewritten libstdc++ is available. You can read some more here.
June 29, 1998
Mark Mitchell donates alias analysis framework.
May 26, 1998
We have added two new mailing lists for the egcs project. gcc-cvs and egcs-patches.

When a patch is checked into the CVS repository, a check-in notification message is automatically sent to the gcc-cvs mailing list. This will allow developers to monitor changes as they are made.

Patch submissions should be sent to egcs-patches instead of the main egcs list. This is primarily to help ensure that patch submissions do not get lost in the large volume of the main mailing list.

May 18, 1998
Cygnus donates gcse optimization pass.
May 15, 1998
egcs-1.0.3 released!.
March 18, 1998
egcs-1.0.2 released!.
February 26, 1998
The egcs web pages are now supported by egcs project hardware and are searchable with webglimpse. The CVS sources are browsable with the free cvsweb package.
February 7, 1998
Stanford has volunteered to host a high speed mirror for egcs. This should significantly improve download speeds for releases and snapshots. Thanks Stanford and Tobin Brockett for the use of their network, disks and computing facilities!
January 12, 1998
Remote access to CVS sources is available!.
January 6, 1998
egcs-1.0.1 released!.
December 3, 1997
egcs-1.0 released!.
August 15, 1997
The egcs project is announced publicly and the first snapshot is put on-line.

See the egcs mailing list archives for details. I've also heard assertions that the only reason gcc-2.8 was released as quickly as it was is the pressure of the egcs release. Here is a Slashdot discussion that contains some additional info. After the fork the egcs team proved to be definitely stronger, and the development of the original branch stagnated.

This was a pretty painful fork, especially personally for RMS, and its consequences were felt for years. For example, Linus Torvalds long preferred an old GCC version, and recompiling the kernel with a newer version led to some subtle bugs, due to the incomplete standard compatibility of the old GCC compiler. Alan Cox said for years that 2.0.x kernels were to be compiled with gcc, not egcs.

As FSF GCC died a silent death from malnutrition, the two branches were (formally) reunited as of version 2.95 in April 1999. With a simple renaming trick, egcs became gcc, and the split was formally over:

Re: egcs to take over gcc maintenance

[email protected] said:
> I'm pleased to announce that the egcs team is taking over as the collective GCC maintainer of GCC. This means that the egcs
> steering committee is changing its name to the gcc steering committee and future gcc releases will be made by the egcs
> (then gcc) team. This also means that the open development style is also carried over to gcc (a good thing).

That's a great piece of news...
Congratulations !!!

 Theodore Papadopoulo
 Email: [email protected] Tel: (33) 04 92 38 76 01

More information about the event can be found in the following Slashdot post:

yes, it's true; egcs is gcc. Some details (Score:4)
by JoeBuck (7947) on Tuesday April 20, @12:22PM (#1925069)
As a member of the egcs steering committee, which will become the gcc steering commitee, I can confirm that yes, the merger is official ... sometime in the near future there will be a gcc 3.0 from the egcs code base. The steering committee has been talking to RMS about doing this for months now; at times it's been contentious but now that we understand each other better, things are going much better.

The important thing to understand is that when we started egcs, this is what we were planning all along (well, OK, what some of us were planning). We wanted to change the way gcc worked, not just create a variant. That's why assignments always went to the FSF, why GNU coding style is rigorously followed.

Technically, egcs/gcc will run the same way as before. Since we are now fully GNU, we'll be making some minor changes to reflect that, but we've been doing them gradually in the past few months anyway so nothing that significant will change. Jeff Law remains the release manager; a number of other people have CVS write access; the steering committee handles the "political" and other nontechnical stuff and "hires" the release manager.

egcs/gcc is at this point considerably more bazaar-like than the Linux kernel in that many more people have the ability to get something into the official code (for Linux, only Linus can do that). Jeff Law decides what goes in the release, but he delegates major areas to other maintainers.

The reason for the delay in the announcement is that we were waiting for RMS to announce it (he sent a message to the gnu.*.announce lists), but someone cracked an important FSF machine and did an rm -rf / command. It was noticed and someone powered off the machine, but it appears that this machine hosted the GNU mailing lists, if I understand correctly, so there's nothing on gnu.announce. I don't know why there's still nothing on (which was not cracked). Why do people do things like this?

Currently the GCC release manager is Mark Mitchell, CodeSourcery's President and Chief Technical Officer. He received an MS in Computer Science from Stanford in 1999 and a BA from Harvard in 1994. His research interests centered around computational complexity and computer security. Mark worked at CenterLine Software as a software engineer before co-founding CodeSourcery. In a recent interview he provided some interesting facts about the current problems and prospects of GCC development, as well as the reasons for the product's growing independence from the FSF (for example, the pretty interesting fact that version 2.96 of GCC was not an FSF version at all):

JB: There has been a problem with so called gcc-2.96. Why did several distributors create this version?

It's important for everyone to know that there was no version of GCC 2.96 from the FSF. I know Red Hat distributed a version that it called 2.96, and other companies may have done that too. I only know about the Red Hat version.

It is too bad that this version was released. It is essentially a development snapshot of the GCC tree, with a lot of Red Hat fixes. There are a lot of bugs in it, relative to either 2.95 or 3.0, and the C++ code generated is incompatible (at the binary level) with either 2.95 or 3.0. It's been very confusing to users, especially because the error messages generated by the 2.96 release refer users back to the FSF GCC bug-reporting address, even though a lot of the bugs in that release don't exist in the FSF releases. The saddest part is that a lot of people at Red Hat knew that using that release was a bad idea, and they couldn't convince their management of that fact.

Partly, this release is our fault, as GCC maintainers. There was a lot of frustration because it took so long to produce a new GCC release. I'm currently leading an effort to reduce the time between GCC releases so that this kind of thing is less likely to happen again. I can understand why a company might need to put out an intermediate release of GCC if we are not able to do it ourselves. That's why I think it's important for people to support independent development of GCC, which is, of course, what CodeSourcery does. We're not affiliated with any of the distributors, and so we can act to try to improve the FSF version of GCC directly. When people financially support our work on the releases, that helps to make sure that there are frequent enough releases to avoid these problems.

JB: Do you feel, that the 2.96 release speeded up the development and allowed gcc-3.0 to be ready faster?

That is a very difficult question to answer. On the one hand, Red Hat certainly fixed some bugs in Red Hat's 2.96 version, and some of those improvements were contributed back for GCC 3.0. (I do not know if all of them were contributed back, or not.) On the other hand, GCC developers at Red Hat must have spent a lot of time on testing and improving their 2.96 version, and therefore that time was not spent on GCC 3.0.

The problem is that for a company like Red Hat (or CodeSourcery) you can't choose between helping out with the FSF release of GCC and doing something else based just on what would be good for the FSF release. You have to try to make the best business decision, which might mean that you have to do something to please a customer, even though it doesn't help the FSF release.

If people would like to keep companies from making their own releases, there are two things to do: a) make that sentiment known to the companies, since companies like to please their customers, and b) hire companies like CodeSourcery to help work on the FSF releases.

JB: How many developers are currently working on GCC?

It's impossible to count. Hundreds, probably -- but there is definitely a group of ten or twenty that is responsible for most of the changes.

... ... ...

JB: Do you see compiling java to native code as a drawback when using free (speech) code? (compared to using p-code only)

I'm not so moralistic about these issues as some people. I think it's good that we support compiling from byte-code because that's how lots of Java code is distributed. Whether or not that code is free, we're providing a useful product. I suspect the FSF has a different viewpoint.

JB: What are future plans in gcc development?

I think the number one issue is the performance of the generated code. People are looking into different ways to optimize better. I'd also like to see a more robust compiler that issues better error messages and never crashes on incorrect input.

JB: Often, the major problem with hardware vendors is that they don't want to provide the technical documentation for their hardware (forcing you to use their or third party proprietary code). Is this also true with processor documentation?

Most popular workstation processors are well-documented from the point of view of their instruction set. There often isn't as much information available about timing and scheduling information. And some embedded vendors never make any information about their chips available, which means that they can't really distribute a version of GCC for their chips because the GCC source code would give away information about the chip.

AMD is a great example of a company trying to work closely with GCC up front. They made a lot of information about their new chip available very early in the process.

JB: Which systems do currently use GCC as their primary compiler set (not counting *BSD and GNU/Linux)?

Apple's OS X. If Apple succeeds, there will probably be more OS X developers using GCC than there are GNU/Linux developers.

... ... ...

Having RMS as a member of the GCC steering committee (SC) has its problems and still invites forking ;-). As one of the participants of the Slashdot discussion noted:

Re:Speaking as a GCC maintainer, I call bullshit (Score:3, Informative)
by devphil (51341) on Sunday August 15, @02:16PM (#9974914)

You're not completely right, and not completely wrong. The politics are exceedingly complicated, and I regret it every time I learn more about them.

RMS doesn't have dictatorial power over the SC, nor a formal veto vote.

He does hold the copyright to GCC. (Well, the FSF holds the copyright, but he is the FSF.) That's a lot more important that many people realize.

Choice of implementation language is, strictly speaking, a purely technical issue. But it has so many consequences that it gets special attention.

The SC specifically avoids getting involved in technical issues whenever possible. Even when the SC is asked to decide something, they never go to RMS when they can help it, because he's so unaware of modern real-world technical issues and the bigger picture. It's far, far better to continue postponing a question than to ask it, when RMS is involved, because he will make a snap decision based on his own bizarre technical ideas, and then never change his mind in time for the new decision to be worth anything.

He can be convinced. Eventually. It took the SC over a year to explain and demonstrate that Java bytecode could not easily be used to subvert the GPL, therefore permitting GCJ to be checked in to the official repository was okay. I'm sure that someday we'll be using C++ in core code. Just not anytime soon.

As for forking again... well, yeah, I personally happen to be a proponent of that path. But I'm keenly aware of the damange that would to do GCC's reputation -- beyond the short-sighted typical /. viewpoint of "always disobey every authority" -- and I'm still probably underestimating the problems.

Some additional information about gcc development can be found at History - GCC Wiki

For the history of gcc development, see The Short History of GCC development. The first version of gcc was released in March 1987:

Date: Sun, 22 Mar 87 10:56:56 EST
From: rms (Richard M. Stallman)

The GNU C compiler is now available for ftp from the file
/u2/emacs/gcc.tar on This includes machine
descriptions for vax and sun, 60 pages of documentation on writing
machine descriptions (internals.texinfo, internals.dvi and Info
file internals).

This also contains the ANSI standard (Nov 86) C preprocessor and 30
pages of reference manual for it.

This compiler compiles itself correctly on the 68020 and did so
recently on the vax. It recently compiled Emacs correctly on the
68020, and has also compiled tex-in-C and Kyoto Common Lisp.
However, it probably still has numerous bugs that I hope you will
find for me.

I will be away for a month, so bugs reported now will not be
handled until then.

If you can't ftp, you can order a compiler beta-test tape from the
Free Software Foundation for $150 (plus 5% sales tax in
Massachusetts, or plus $15 overseas if you want air mail).

Free Software Foundation
1000 Mass Ave
Cambridge, MA 02138

There are packages for gcc-3.4.2 and gcc-3.3.2 for Solaris 9 and a package for gcc-3.3.2 for Solaris 10. Usually one needs only the gcc_small package, which has ONLY the C and C++ compilers and is a much smaller download.

If you use gcc it's very convenient to use Midnight Commander on Solaris as a command line pseudo IDE.

Please note that Sun Studio 11 is free for both Solaris and Linux and might be a better option than GCC for compilation on UltraSPARC (10% or more faster code).



Old News ;-)

[Jul 01, 2020] How to handle dynamic and static libraries in Linux by Stephan Avenwedde

Jun 17, 2020 |
Knowing how Linux uses libraries, including the difference between static and dynamic linking, can help you fix dependency problems.


Linux, in a way, is a series of static and dynamic libraries that depend on each other. For new users of Linux-based systems, the whole handling of libraries can be a mystery. But with experience, the massive amount of shared code built into the operating system can be an advantage when writing new applications.

To help you get in touch with this topic, I prepared a small application example that shows the most common methods that work on common Linux distributions (these have not been tested on other systems). To follow along with this hands-on tutorial using the example application, open a command prompt and type:

$ git clone https://github.com/hANSIc99/library_sample
$ cd library_sample/
$ make
cc -c main.c -Wall -Werror
cc -c libmy_static_a.c -o libmy_static_a.o -Wall -Werror
cc -c libmy_static_b.c -o libmy_static_b.o -Wall -Werror
ar -rsv libmy_static.a libmy_static_a.o libmy_static_b.o
ar: creating libmy_static.a
a - libmy_static_a.o
a - libmy_static_b.o
cc -c -fPIC libmy_shared.c -o libmy_shared.o
cc -shared -o libmy_shared.so libmy_shared.o
$ make clean
rm *.o

After executing these commands, these files should be added to the directory (run ls to see them):

libmy_static.a
libmy_shared.so

About static linking

When your application links against a static library, the library's code becomes part of the resulting executable. This is performed only once at linking time, and these static libraries usually end with a .a extension.

A static library is an archive (ar) of object files. The object files are usually in the ELF format. ELF is short for Executable and Linkable Format, which is compatible with many operating systems.

The output of the file command tells you that the static library libmy_static.a is the ar archive type:

$ file libmy_static.a
libmy_static.a: current ar archive

With ar -t, you can look into this archive; it shows two object files:

$ ar -t libmy_static.a
libmy_static_a.o
libmy_static_b.o

You can extract the archive's files with ar -x <archive-file>. The extracted files are object files in ELF format:

$ ar -x libmy_static.a
$ file libmy_static_a.o
libmy_static_a.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

About dynamic linking

Dynamic linking means the use of shared libraries. Shared libraries usually end with .so (short for "shared object").

Shared libraries are the most common way to manage dependencies on Linux systems. These shared resources are loaded into memory before the application starts, and when several processes require the same library, it will be loaded only once on the system. This feature saves on memory usage by the application.

Another thing to note is that when a bug is fixed in a shared library, every application that references this library will profit from it. This also means that if the bug remains undetected, each referencing application will suffer from it (if the application uses the affected parts).

It can be very hard for beginners when an application requires a specific version of the library, but the linker only knows the location of an incompatible version. In this case, you must help the linker find the path to the correct version.

Although this is not an everyday issue, understanding dynamic linking will surely help you in fixing such problems.

Fortunately, the mechanics for this are quite straightforward.

To detect which libraries are required for an application to start, you can use ldd , which will print out the shared libraries used by a given file:

$ ldd my_app
        linux-vdso.so.1 (0x00007ffd1299c000)
        libmy_shared.so => not found
        libc.so.6 => /lib64/libc.so.6 (0x00007f56b869b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f56b8881000)

Note that the library libmy_shared.so is part of the repository but is not found. This is because the dynamic linker, which is responsible for loading all dependencies into memory before executing the application, cannot find this library in the standard locations it searches.

Errors associated with linkers finding incompatible versions of common libraries (like bzip2 , for example) can be quite confusing for a new user. One way around this is to add the repository folder to the environment variable LD_LIBRARY_PATH to tell the linker where to look for the correct version. In this case, the right version is in this folder, so you can export it:

$ export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH


Now the dynamic linker knows where to find the library, and the application can be executed. You can rerun ldd to invoke the dynamic linker, which inspects the application's dependencies and loads them into memory. The memory address is shown after the object path:

$ ldd my_app
        linux-vdso.so.1 (0x00007ffd385f7000)
        libmy_shared.so => /home/stephan/library_sample/libmy_shared.so (0x00007f3fad401000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f3fad21d000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f3fad408000)

To find out which linker is invoked, you can use file :

$ file my_app
my_app: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=26c677b771122b4c99f0fd9ee001e6c743550fa6, for GNU/Linux 3.2.0, not stripped

The linker /lib64/ld-linux-x86-64.so.2 is a symbolic link to ld-2.31.so, which is the default linker for my Linux distribution:

$ file /lib64/ld-linux-x86-64.so.2
/lib64/ld-linux-x86-64.so.2: symbolic link to ld-2.31.so

Looking back at the output of ldd, you can also see (next to linux-vdso.so.1) that each dependency ends with a number (e.g., /lib64/libc.so.6). The usual naming scheme of shared objects is:

**lib<name>.so.<MAJOR>.<MINOR>**

On my system, libc.so.6 is also a symbolic link to the shared object libc-2.31.so in the same folder:

$ file /lib64/libc.so.6
/lib64/libc.so.6: symbolic link to libc-2.31.so

If you are facing the issue that an application will not start because the loaded library has the wrong version, it is very likely that you can fix this issue by inspecting and rearranging the symbolic links or specifying the correct search path (see "The dynamic loader:" below).

For more information, see the ldd man page.

Dynamic loading

Dynamic loading means that a library (e.g., a .so file) is loaded during a program's runtime. This is done using a certain programming scheme.

Dynamic loading is applied when an application uses plugins that can be modified during runtime.

See the dlopen man page for more information.

The dynamic loader:

On Linux, you mostly are dealing with shared objects, so there must be a mechanism that detects an application's dependencies and loads them into memory. The dynamic loader (on my system, /lib64/ld-linux-x86-64.so.2) looks for shared objects in these places in the following order:

  1. The relative or absolute path in the application (hardcoded with the -rpath compiler option on GCC)
  2. In the environment variable LD_LIBRARY_PATH
  3. In the file /etc/ld.so.conf (and the drop-in files under /etc/ld.so.conf.d/)

Keep in mind that adding a library to the system's library archive /usr/lib64 requires administrator privileges. You could copy libmy_shared.so manually to the library archive and make the application work without setting LD_LIBRARY_PATH :

sudo cp libmy_shared.so /usr/lib64/

When you run ldd , you can see the path to the library archive shows up now:

$ ldd my_app
        linux-vdso.so.1 (0x00007ffe82fab000)
        libmy_shared.so => /lib64/libmy_shared.so (0x00007f0a963e0000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f0a96216000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f0a96401000)

Customize the shared library at compile time

If you want your application to use your shared libraries, you can specify an absolute or relative path during compile time.

Modify the makefile (line 10) and recompile the program by invoking make -B . Then, the output of ldd shows that libmy_shared.so is listed with its absolute path.

Change this:

CFLAGS =-Wall -Werror -Wl,-rpath,$(shell pwd)

To this (be sure to edit the username):

CFLAGS =-Wall -Werror -Wl,-rpath,/home/stephan/library_sample/

Then recompile:

$ make

Confirm it is using the absolute path you set, which you can see on line 2 of the output:

$ ldd my_app
        linux-vdso.so.1 (0x00007ffe143ed000)
        libmy_shared.so => /home/stephan/library_sample/libmy_shared.so (0x00007fe50926d000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fe50909e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fe50928e000)

This is a good example, but how would this work if you were making a library for others to use? New library locations can be registered by writing them to /etc/ld.so.conf or by creating a <library-name>.conf file containing the location under /etc/ld.so.conf.d/ . Afterward, ldconfig must be executed to rewrite the ld.so.cache file. This step is sometimes necessary after you install a program that brings some special shared libraries with it.

See the ldconfig man page for more information.

How to handle multiple architectures

Usually, there are different libraries for the 32-bit and 64-bit versions of applications. The following list shows their standard locations for different Linux distributions:

Red Hat family

  * 32 bit: /usr/lib
  * 64 bit: /usr/lib64

Debian family

  * 32 bit: /usr/lib32
  * 64 bit: /usr/lib

Arch Linux family

  * 32 bit: /usr/lib32
  * 64 bit: /usr/lib

FreeBSD (technically not a Linux distribution)

  * 32 bit: /usr/lib32
  * 64 bit: /usr/lib

Knowing where to look for these key libraries can make broken library links a problem of the past.

While it may be confusing at first, understanding dependency management in Linux libraries is a way to feel in control of the operating system. Run through these steps with other applications to become familiar with common libraries, and continue to learn how to fix any library challenges that could come up along your way.

[Nov 01, 2017] Compiling in $HOME by Tom Ryder

Sep 04, 2012

If you don't have root access on a particular GNU/Linux system that you use, or if you don't want to install anything to the system directories and potentially interfere with others' work on the machine, one option is to build your favourite tools in your $HOME directory.

This can be useful if there's some particular piece of software that you really need for whatever reason, particularly on legacy systems that you share with other users or developers. The process can include not just applications, but libraries as well; you can link against a mix of your own libraries and the system's libraries as you need.


In most cases this is actually quite a straightforward process, as long as you're allowed to use the system's compiler and any relevant build tools such as autoconf . If the ./configure script for your application allows a --prefix option, this is generally a good sign; you can normally test this with --help :

$ mkdir src
$ cd src
$ wget -q
$ tar -xf fooapp-1.2.3.tar.gz
$ cd fooapp-1.2.3
$ pwd
$ ./configure --help | grep -- --prefix
  --prefix=PREFIX    install architecture-independent files in PREFIX

Don't do this if the security policy on your shared machine explicitly disallows compiling programs! However, it's generally quite safe as you never need root privileges at any stage of the process.

Naturally, this is not a one-size-fits-all process; the build process will vary for different applications, but it's a workable general approach to the task.


Configure the application or library with the usual call to ./configure , but use your home directory for the prefix:

$ ./configure --prefix=$HOME

If you want to include headers or link against libraries in your home directory, it may be appropriate to add definitions for CFLAGS and LDFLAGS to refer to those directories:

$ CFLAGS="-I$HOME/include" \
> LDFLAGS="-L$HOME/lib" \
> ./configure --prefix=$HOME

Some configure scripts instead allow you to specify the path to particular libraries. Again, you can generally check this with --help .

$ ./configure --prefix=$HOME --with-foolib=$HOME/lib

You should then be able to install the application with the usual make and make install , needing root privileges for neither:

$ make
$ make install

If successful, this process will insert files into directories like $HOME/bin and $HOME/lib . You can then try to call the application by its full path:

$ $HOME/bin/fooapp -v
fooapp v1.2.3
Environment setup

To make this work smoothly, it's best to add to a couple of environment variables, probably in your .bashrc file, so that you can use the home-built application transparently.

First of all, if you linked the application against libraries also in your home directory, it will be necessary to add the library directory to LD_LIBRARY_PATH , so that the correct libraries are found and loaded at runtime:

$ /home/tom/bin/fooapp -v
/home/tom/bin/fooapp: error while loading shared libraries: cannot open shared...
Could not load library foolib
$ export LD_LIBRARY_PATH=$HOME/lib
$ /home/tom/bin/fooapp -v
fooapp v1.2.3

An obvious one is adding the $HOME/bin directory to your $PATH so that you can call the application without typing its path:

$ fooapp -v
-bash: fooapp: command not found
$ export PATH="$HOME/bin:$PATH"
$ fooapp -v
fooapp v1.2.3

Similarly, defining MANPATH so that calls to man will read the manual for your build of the application first is worthwhile. You may find that $MANPATH is empty by default, so you will need to append other manual locations to it. An easy way to do this is by appending the output of the manpath utility:

$ man -k fooapp
$ manpath
$ export MANPATH="$HOME/share/man:$(manpath)"
$ man -k fooapp
fooapp (1) - Fooapp, the programmer's foo apper
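Collected together, the additions to your .bashrc might look like the following sketch; the paths assume the --prefix=$HOME layout used above:

```shell
# ~/.bashrc additions for software installed under $HOME

# Find home-built binaries without typing full paths
export PATH="$HOME/bin:$PATH"

# Find home-built shared libraries at runtime
export LD_LIBRARY_PATH="$HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Read home-built manuals first, then fall back to the system's
export MANPATH="$HOME/share/man:$(manpath 2>/dev/null)"
```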

This done, you should be able to use your private build of the software comfortably, and all without ever needing to reach for root .


This tends to work best for userspace tools like editors or other interactive command-line apps; it even works for shells. However, this is not a typical use case for most applications, which expect to be packaged or compiled into /usr/local , so there are no guarantees it will work exactly as expected. I have found that Vim and Tmux work very well like this, even with Tmux linked against a home-compiled instance of libevent , on which it depends.

In particular, if any part of the install process requires root privileges, such as making a setuid binary, then things are likely not to work as expected.

[Oct 31, 2017] Unix as IDE: Debugging by Tom Ryder

Notable quotes:
"... Thanks to user samwyse for the .SUFFIXES suggestion in the comments. ..."
Feb 14, 2012

When unexpected behaviour is noticed in a program, GNU/Linux provides a wide variety of command-line tools for diagnosing problems. The use of gdb , the GNU debugger, and related tools like the lesser-known Perl debugger, will be familiar to those using IDEs to set breakpoints in their code and to examine program state as it runs. Other tools of interest are available however to observe in more detail how a program is interacting with a system and using its resources.

Debugging with gdb

You can use gdb in a very similar fashion to the built-in debuggers in modern IDEs like Eclipse and Visual Studio.

If you are debugging a program that you've just compiled, it makes sense to compile it with its debugging symbols added to the binary, which you can do with a gcc call containing the -g option. If you're having problems with some code, it helps to also use -Wall to show any errors you may have otherwise missed:

$ gcc -g -Wall example.c -o example

The classic way to use gdb is as the shell for a running program compiled in C or C++, to allow you to inspect the program's state as it proceeds towards its crash.

$ gdb example
Reading symbols from /home/tom/example...done.

At the (gdb) prompt, you can type run to start the program, and it may provide you with more detailed information about the causes of errors such as segmentation faults, including the source file and line number at which the problem occurred. If you're able to compile the code with debugging symbols as above and inspect its running state like this, it makes figuring out the cause of a particular bug a lot easier.

(gdb) run
Starting program: /home/tom/gdb/example 

Program received signal SIGSEGV, Segmentation fault.
0x000000000040072e in main () at example.c:43
43     printf("%d\n", *segfault);

After an error terminates the program within the (gdb) shell, you can type backtrace to see what the calling function was, which can include the specific parameters passed that may have something to do with what caused the crash.

(gdb) backtrace
#0  0x000000000040072e in main () at example.c:43

You can set breakpoints for gdb using the break command, to halt the program's run if it reaches a matching line number or function call:

(gdb) break 42
Breakpoint 1 at 0x400722: file example.c, line 42.
(gdb) break malloc
Breakpoint 1 at 0x4004c0
(gdb) run
Starting program: /home/tom/gdb/example 

Breakpoint 1, 0x00007ffff7df2310 in malloc () from /lib64/ld-linux-x86-64.so.2

Thereafter it's helpful to step through successive lines of code using step . You can repeat this, like any gdb command, by pressing Enter repeatedly to step through lines one at a time:

(gdb) step
Single stepping until exit from function _start,
which has no line number information.
0x00007ffff7a74db0 in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6

You can even attach gdb to a process that is already running, by finding the process ID and passing it to gdb :

$ pgrep example
1524
$ gdb -p 1524

This can be useful for redirecting streams of output for a task that is taking an unexpectedly long time to run.

Debugging with valgrind

The much newer valgrind can be used as a debugging tool in a similar way. There are many different checks and debugging methods this program can run, but one of the most useful is its Memcheck tool, which can be used to detect common memory errors like buffer overflow:

$ valgrind --leak-check=yes ./example
==29557== Memcheck, a memory error detector
==29557== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==29557== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==29557== Command: ./example
==29557== Invalid read of size 1
==29557==    at 0x40072E: main (example.c:43)
==29557==  Address 0x0 is not stack'd, malloc'd or (recently) free'd

The gdb and valgrind tools can be used together for a very thorough survey of a program's run. Zed Shaw's Learn C the Hard Way includes a really good introduction for elementary use of valgrind with a deliberately broken program.

Tracing system and library calls with ltrace

The strace and ltrace tools are designed to allow watching system calls and library calls respectively for running programs, and logging them to the screen or, more usefully, to files.

You can run ltrace and have it run the program you want to monitor in this way for you by simply providing it as the sole parameter. It will then give you a listing of all the system and library calls it makes until it exits.

$ ltrace ./example
__libc_start_main(0x4006ad, 1, 0x7fff9d7e5838, 0x400770, 0x400760 
srand(4, 0x7fff9d7e5838, 0x7fff9d7e5848, 0, 0x7ff3aebde320) = 0
malloc(24)                                                  = 0x01070010
rand(0, 0x1070020, 0, 0x1070000, 0x7ff3aebdee60)            = 0x754e7ddd
malloc(24)                                                  = 0x01070030
rand(0x7ff3aebdee60, 24, 0, 0x1070020, 0x7ff3aebdeec8)      = 0x11265233
malloc(24)                                                  = 0x01070050
rand(0x7ff3aebdee60, 24, 0, 0x1070040, 0x7ff3aebdeec8)      = 0x18799942
malloc(24)                                                  = 0x01070070
rand(0x7ff3aebdee60, 24, 0, 0x1070060, 0x7ff3aebdeec8)      = 0x214a541e
malloc(24)                                                  = 0x01070090
rand(0x7ff3aebdee60, 24, 0, 0x1070080, 0x7ff3aebdeec8)      = 0x1b6d90f3
malloc(24)                                                  = 0x010700b0
rand(0x7ff3aebdee60, 24, 0, 0x10700a0, 0x7ff3aebdeec8)      = 0x2e19c419
malloc(24)                                                  = 0x010700d0
rand(0x7ff3aebdee60, 24, 0, 0x10700c0, 0x7ff3aebdeec8)      = 0x35bc1a99
malloc(24)                                                  = 0x010700f0
rand(0x7ff3aebdee60, 24, 0, 0x10700e0, 0x7ff3aebdeec8)      = 0x53b8d61b
malloc(24)                                                  = 0x01070110
rand(0x7ff3aebdee60, 24, 0, 0x1070100, 0x7ff3aebdeec8)      = 0x18e0f924
malloc(24)                                                  = 0x01070130
rand(0x7ff3aebdee60, 24, 0, 0x1070120, 0x7ff3aebdeec8)      = 0x27a51979
--- SIGSEGV (Segmentation fault) ---
+++ killed by SIGSEGV +++

You can also attach it to a process that's already running:

$ pgrep example
5138
$ ltrace -p 5138

Generally, there's quite a bit more than a couple of screenfuls of text generated by this, so it's helpful to use the -o option to specify an output file to which to log the calls:

$ ltrace -o example.ltrace ./example

You can then view this trace in a text editor like Vim, which includes syntax highlighting for ltrace output:

Vim session with ltrace output

Vim session with ltrace output

I've found ltrace very useful for debugging problems where I suspect improper linking may be at fault, or the absence of some needed resource in a chroot environment, since among its output it shows you its search for libraries at dynamic linking time and opening configuration files in /etc , and the use of devices like /dev/random or /dev/zero .

Tracking open files with lsof

If you want to view what devices, files, or streams a running process has open, you can do that with lsof :

$ pgrep example
5051
$ lsof -p 5051

For example, the first few lines of the apache2 process running on my home server are:

# lsof -p 30779
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
apache2 30779 root  cwd    DIR    8,1     4096       2 /
apache2 30779 root  rtd    DIR    8,1     4096       2 /
apache2 30779 root  txt    REG    8,1   485384  990111 /usr/lib/apache2/mpm-prefork/apache2
apache2 30779 root  DEL    REG    8,1          1087891 /lib/x86_64-linux-gnu/
apache2 30779 root  mem    REG    8,1    35216 1079715 /usr/lib/php5/20090626/

Interestingly, another way to list the open files for a process is to check the corresponding entry for the process in the dynamic /proc directory:

# ls -l /proc/30779/fd

This can be very useful in confusing situations with file locks, or identifying whether a process is holding open files that it needn't.
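You can try the /proc approach on any process you own; here the current shell opens a file on descriptor 3 and then inspects its own fd table (the descriptor number and file are chosen arbitrarily):

```shell
# Open /etc/passwd on file descriptor 3 in the current shell
exec 3< /etc/passwd

# The shell's fd directory now contains a symlink for descriptor 3
ls -l /proc/$$/fd/3

# Close the descriptor again
exec 3<&-
```

Each entry under /proc/<pid>/fd is a symlink to the open file, so readlink on it shows exactly what the process is holding open.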

Viewing memory allocation with pmap

As a final debugging tip, you can view the memory allocations for a particular process with pmap :

# pmap 30779 
30779:   /usr/sbin/apache2 -k start
00007fdb3883e000     84K r-x--  /lib/x86_64-linux-gnu/ (deleted)
00007fdb38853000   2048K -----  /lib/x86_64-linux-gnu/ (deleted)
00007fdb38a53000      4K rw---  /lib/x86_64-linux-gnu/ (deleted)
00007fdb38a54000      4K -----    [ anon ]
00007fdb38a55000   8192K rw---    [ anon ]
00007fdb392e5000     28K r-x--  /usr/lib/php5/20090626/
00007fdb392ec000   2048K -----  /usr/lib/php5/20090626/
00007fdb394ec000      4K r----  /usr/lib/php5/20090626/
00007fdb394ed000      4K rw---  /usr/lib/php5/20090626/
total           152520K

This will show you what libraries a running process is using, including those in shared memory. The total given at the bottom is a little misleading, as for loaded shared libraries the running process is not necessarily the only one using the memory; determining "actual" memory usage for a given process is a little more in-depth than it might seem with shared libraries added to the picture.

Unix as IDE: Building

Posted on February 13, 2012 by Tom Ryder

Because compiling projects can be such a complicated and repetitive process, a good IDE provides a means to abstract, simplify, and even automate software builds. Unix and its descendants accomplish this process with a Makefile , a prescribed recipe in a standard format for generating executable files from source and object files, taking account of changes to only rebuild what's necessary to prevent costly recompilation.

One interesting thing to note about make is that while it's generally used for compiled software build automation and has many shortcuts to that effect, it can actually effectively be used for any situation in which it's required to generate one set of files from another. One possible use is to generate web-friendly optimised graphics from source files for deployment for a website; another use is for generating static HTML pages from code, rather than generating pages on the fly. It's on the basis of this more flexible understanding of software "building" that modern takes on the tool like Ruby's rake have become popular, automating the general tasks for producing and installing code and files of all kinds.

Anatomy of a Makefile

The general pattern of a Makefile is a list of variables and a list of targets , and the sources and/or objects used to provide them. Targets may not necessarily be linked binaries; they could also constitute actions to perform using the generated files, such as install to instate built files into the system, and clean to remove built files from the source tree.

It's this flexibility of targets that enables make to automate any sort of task relevant to assembling a production build of software; not just the typical parsing, preprocessing, compiling proper and linking steps performed by the compiler, but also running tests ( make test ), compiling documentation source files into one or more appropriate formats, or automating deployment of code into production systems, for example, uploading to a website via a git push or similar content-tracking method.

An example Makefile for a simple software project might look something like the below:

all: example

example: main.o example.o library.o
    gcc main.o example.o library.o -o example

main.o: main.c
    gcc -c main.c -o main.o

example.o: example.c
    gcc -c example.c -o example.o

library.o: library.c
    gcc -c library.c -o library.o

clean:
    rm *.o example

install: example
    cp example /usr/bin

The above isn't the most optimal Makefile possible for this project, but it provides a means to build and install a linked binary simply by typing make . Each target definition contains a list of the dependencies required for the command that follows; this means that the definitions can appear in any order, and the call to make will call the relevant commands in the appropriate order.

Much of the above is needlessly verbose or repetitive; for example, if an object file is built directly from a single C file of the same name, then we don't need to include the target at all, and make will sort things out for us. Similarly, it would make sense to put some of the more repeated calls into variables so that we would not have to change them individually if our choice of compiler or flags changed. A more concise version might look like the following:

CC = gcc
OBJECTS = main.o example.o library.o
BINARY = example

all: example

example: $(OBJECTS)
    $(CC) $(OBJECTS) -o $(BINARY)

clean:
    rm -f $(BINARY) $(OBJECTS)

install: example
    cp $(BINARY) /usr/bin

More general uses of make

In the interests of automation, however, it's instructive to think of this a bit more generally than just code compilation and linking. An example could be for a simple web project involving deploying PHP to a live webserver. This is not normally a task people associate with the use of make , but the principles are the same; with the source in place and ready to go, we have certain targets to meet for the build.

PHP files don't require compilation, of course, but web assets often do. An example that will be familiar to web developers is the generation of scaled and optimised raster images from vector source files, for deployment to the web. You keep and version your original source file, and when it comes time to deploy, you generate a web-friendly version of it.

Let's assume for this particular project that there's a set of four icons used throughout the site, sized to 64 by 64 pixels. We have the source files to hand in SVG vector format, safely tucked away in version control, and now need to generate the smaller bitmaps for the site, ready for deployment. We could therefore define a target icons , set the dependencies, and type out the commands to perform. This is where command line tools in Unix really begin to shine in use with Makefile syntax:

icons: create.png read.png update.png delete.png

create.png: create.svg
    convert create.svg create.raw.png && \
    pngcrush create.raw.png create.png

read.png: read.svg
    convert read.svg read.raw.png && \
    pngcrush read.raw.png read.png

update.png: update.svg
    convert update.svg update.raw.png && \
    pngcrush update.raw.png update.png

delete.png: delete.svg
    convert delete.svg delete.raw.png && \
    pngcrush delete.raw.png delete.png

With the above done, typing make icons will go through each of the source icon files, convert them from SVG to PNG using ImageMagick's convert , and optimise them with pngcrush , to produce images ready for upload.

A similar approach can be used for generating help files in various forms, for example, generating HTML files from Markdown source:

docs: README.html credits.html

README.html: README.md
    markdown README.md > README.html

credits.html: credits.md
    markdown credits.md > credits.html

And perhaps finally deploying a website with git push web , but only after the icons are rasterized and the documents converted:

deploy: icons docs
    git push web

For a more compact and abstract formula for turning a file of one suffix into another, you can use the .SUFFIXES pragma to define these using special symbols. The code for converting icons could look like this; in this case, $< refers to the source file, $* to the filename with no extension, and $@ to the target.

icons: create.png read.png update.png delete.png

.SUFFIXES: .svg .png

.svg.png:
    convert $< $*.raw.png && \
    pngcrush $*.raw.png $@
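With GNU make specifically, the same rule is more often written as a pattern rule, which many people find easier to read than .SUFFIXES. This equivalent sketch is not from the original article:

```make
# GNU make pattern rule: build any .png from the matching .svg
%.png: %.svg
    convert $< $*.raw.png && \
    pngcrush $*.raw.png $@
```

The automatic variables mean the same thing as in the suffix rule: $< is the source file, $* the stem with no extension, and $@ the target.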
Tools for building a Makefile

A variety of tools exist in the GNU Autotools toolchain for the construction of configure scripts and make files for larger software projects at a higher level, in particular autoconf and automake . The use of these tools allows generating configure scripts and make files covering very large source bases, reducing the necessity of building otherwise extensive makefiles manually, and automating steps taken to ensure the source remains compatible and compilable on a variety of operating systems.

Covering this complex process would be a series of posts in its own right, and is out of scope of this survey.

Thanks to user samwyse for the .SUFFIXES suggestion in the comments.

Unix as IDE: Compiling

Posted on February 12, 2012 by Tom Ryder

There are a lot of tools available for compiling and interpreting code on the Unix platform, and they tend to be used in different ways. However, conceptually many of the steps are the same. Here I'll discuss compiling C code with gcc from the GNU Compiler Collection, and briefly the use of perl as an example of an interpreter.

GCC

GCC is a very mature GPL-licensed collection of compilers, perhaps best-known for working with C and C++ programs. Its free software license and near ubiquity on free Unix-like systems like GNU/Linux and BSD has made it enduringly popular for these purposes, though more modern alternatives are available in compilers using the LLVM infrastructure, such as Clang .

The frontend binaries for GNU Compiler Collection are best thought of less as a set of complete compilers in their own right, and more as drivers for a set of discrete programming tools, performing parsing, compiling, and linking, among other steps. This means that while you can use GCC with a relatively simple command line to compile straight from C sources to a working binary, you can also inspect in more detail the steps it takes along the way and tweak it accordingly.

I won't be discussing the use of make files here, though you'll almost certainly be wanting them for any C project of more than one file; that will be discussed in the next article on build automation tools.

Compiling and assembling object code

You can compile object code from a C source file like so:

$ gcc -c example.c -o example.o

Assuming it's a valid C program, this will generate an unlinked binary object file called example.o in the current directory, or tell you the reasons it can't. You can inspect its assembler contents with the objdump tool:

$ objdump -D example.o

Alternatively, you can get gcc to output the appropriate assembly code for the object directly with the -S parameter:

$ gcc -c -S example.c -o example.s

This kind of assembly output can be particularly instructive, or at least interesting, when printed inline with the source code itself, which you can do with:

$ gcc -c -g -Wa,-a,-ad example.c > example.lst

The C preprocessor cpp is generally used to include header files and define macros, among other things. It's a normal part of gcc compilation, but you can view the C code it generates by invoking cpp directly:

$ cpp example.c

This will print out the complete code as it would be compiled, with includes and relevant macros applied.

Linking objects

One or more objects can be linked into appropriate binaries like so:

$ gcc example.o -o example

In this example, GCC is not doing much more than abstracting a call to ld , the GNU linker. The command produces an executable binary called example .

Compiling, assembling, and linking

All of the above can be done in one step with:

$ gcc example.c -o example

This is a little simpler, but compiling objects independently turns out to have some practical performance benefits in not recompiling code unnecessarily, which I'll discuss in the next article.

Including and linking

Directories containing header files can be added to the compiler's include search path with the -I parameter:

$ gcc -I/usr/include/somelib example.c -o example

Similarly, if the code needs to be dynamically linked against a compiled system library available in common locations like /lib or /usr/lib , such as ncurses , that can be included with the -l parameter:

$ gcc -lncurses example.c -o example

If you have a lot of necessary inclusions and links in your compilation process, it makes sense to put this into environment variables:

$ export CFLAGS=-I/usr/include/somelib
$ export CLIBS=-lncurses
$ gcc $CFLAGS $CLIBS example.c -o example

This very common step is another thing that a Makefile is designed to abstract away for you.

Compilation plan

To inspect in more detail what gcc is doing with any call, you can add the -v switch to prompt it to print its compilation plan on the standard error stream:

$ gcc -v -c example.c -o example.o

If you don't want it to actually generate object files or linked binaries, it's sometimes tidier to use -### instead:

$ gcc -### -c example.c -o example.o

This is mostly instructive to see what steps the gcc binary is abstracting away for you, but in specific cases it can be useful to identify steps the compiler is taking that you may not necessarily want it to.

More verbose error checking

You can add the -Wall and/or -pedantic options to the gcc call to prompt it to warn you about things that may not necessarily be errors, but could be:

$ gcc -Wall -pedantic -c example.c -o example.o

This is good for including in your Makefile or in your makeprg definition in Vim, as it works well with the quickfix window discussed in the previous article and will enable you to write more readable, compatible, and less error-prone code as it warns you more extensively about errors.

Profiling compilation time

You can pass the flag -time to gcc to generate output showing how long each step is taking:

$ gcc -time -c example.c -o example.o

Optimisation

You can pass generic optimisation options to gcc to make it attempt to build more efficient object files and linked binaries, at the expense of compilation time. I find -O2 is usually a happy medium for code going into production:

$ gcc -O2 example.c -o example
Like any other Bash command, all of this can be called from within Vim by:

:!gcc % -o example

Interpreters

The approach to interpreted code on Unix-like systems is very different. In these examples I'll use Perl, but most of these principles will be applicable to interpreted Python or Ruby code, for example.


You can run a string of Perl code directly into the interpreter in any one of the following ways, in this case printing the single line "Hello, world." to the screen, with a linebreak following. The first one is perhaps the tidiest and most standard way to work with Perl; the second uses a heredoc string, and the third a classic Unix shell pipe.

$ perl -e 'print "Hello world.\n";'
$ perl <<<'print "Hello world.\n";'
$ echo 'print "Hello world.\n";' | perl

Of course, it's more typical to keep the code in a file, which can be run directly:

$ perl hello.pl

In either case, you can check the syntax of the code without actually running it with the -c switch:

$ perl -c hello.pl

But to use the script as a logical binary, so you can invoke it directly without knowing or caring how it's implemented, you can add a special first line to the file called the "shebang", which specifies the interpreter through which the file should be run.

#!/usr/bin/env perl
print "Hello, world.\n";

The script then needs to be made executable with a chmod call. It's also good practice to rename it to remove the extension, since it is now taking the shape of a logical binary:

$ mv hello{.pl,}
$ chmod +x hello

And can thereafter be invoked directly, as if it were a compiled binary:

$ ./hello

This works so transparently that many of the common utilities on modern GNU/Linux systems, such as the adduser frontend to useradd, are actually Perl or even Python scripts.

In the next post, I'll describe the use of make for defining and automating building projects in a manner comparable to IDEs, with a nod to newer takes on the same idea such as Ruby's rake.

Unix as IDE: Editing

Posted on February 11, 2012 by Tom Ryder

The text editor is the core tool for any programmer, which is why the choice of editor evokes such tongue-in-cheek zealotry in debates among programmers. Unix is the operating system most strongly linked with two enduring favourites, Emacs and Vi, and their modern versions in GNU Emacs and Vim, two editors with very different editing philosophies but comparable power.

Being a Vim heretic myself, here I'll discuss the indispensable features of Vim for programming, and in particular the use of shell tools called from within Vim to complement the editor's built-in functionality. Some of the principles discussed here will be applicable to those using Emacs as well, but probably not for underpowered editors like Nano.

This will be a very general survey, as Vim's toolset for programmers is enormous , and it'll still end up being quite long. I'll focus on the essentials and the things I feel are most helpful, and try to provide links to articles with a more comprehensive treatment of the topic. Don't forget that Vim's :help has surprised many people new to the editor with its high quality and usefulness.

Filetype detection

Vim has built-in settings to adjust its behaviour, in particular its syntax highlighting, based on the filetype being loaded, which it detects automatically and generally identifies correctly. In particular, this allows you to set an indenting style conformant with the way a particular language is usually written. This should be one of the first things in your .vimrc file.

if has("autocmd")
  filetype on
  filetype indent on
  filetype plugin on
endif
Syntax highlighting

Even if you're only working with a 16-color terminal, include the following in your .vimrc if it's not there already:

syntax on

The colorschemes available for a default 16-color terminal are not pretty, largely by necessity, but they do the job, and for most languages syntax definition files are available that work very well. There's a tremendous array of colorschemes available, and it's not hard to tweak them to suit, or even to write your own. Using a 256-color terminal or gVim will give you more options. Good syntax highlighting files will show you definite syntax errors with a glaring red background.

Line numbering

To turn line numbers on if you use them a lot in your traditional IDE:

set number

You might like to try this as well, if you have at least Vim 7.3 and are keen to try numbering lines relative to the current line rather than absolutely:

set relativenumber
Tags files

Vim works very well with the output from the ctags utility. This allows you to search quickly for all uses of a particular identifier throughout the project, or to navigate straight to the declaration of a variable from one of its uses, regardless of whether it's in the same file. For large C projects in multiple files this can save huge amounts of otherwise wasted time, and is probably Vim's best answer to similar features in mainstream IDEs.

You can run :!ctags -R on the root directory of projects in many popular languages to generate a tags file filled with definitions and locations for identifiers throughout your project. Once a tags file for your project is available, you can search for uses of an appropriate tag throughout the project like so:

:tag someClass

The commands :tn and :tp will allow you to iterate through successive uses of the tag elsewhere in the project. The built-in tags functionality for this already covers most of the bases you'll probably need, but for features such as a tag list window, you could try installing the very popular Taglist plugin . Tim Pope's Unimpaired plugin also contains a couple of useful relevant mappings.

Calling external programs

Until 2017, there were three major methods of calling external programs during a Vim session.

Since 2017, Vim 8.x now includes a :terminal command to bring up a terminal emulator buffer in a window. This seems to work better than previous plugin-based attempts at doing this, such as Conque . For the moment I still strongly recommend using one of the older methods, all of which also work in other vi -type editors.

Lint programs and syntax checkers

Checking syntax or compiling with an external program call (e.g. perl -c , gcc ) is one of the calls that's good to make from within the editor using :! commands. If you were editing a Perl file, you could run this like so:

:!perl -c %

/home/tom/project/ syntax OK

Press Enter or type command to continue

The % symbol is shorthand for the file loaded in the current buffer. Running this prints the output of the command, if any, below the command line. If you wanted to call this check often, you could perhaps map it as a command, or even a key combination in your .vimrc file. In this case, we define a command :PerlLint which can be called from normal mode with \l :

command PerlLint !perl -c %
nnoremap <leader>l :PerlLint<CR>

For a lot of languages there's an even better way to do this, though, which allows us to capitalise on Vim's built-in quickfix window. We can do this by setting an appropriate makeprg for the filetype, in this case including a module that provides us with output that Vim can use for its quicklist, and a definition for its two formats:

:set makeprg=perl\ -c\ -MVi::QuickFix\ %
:set errorformat+=%m\ at\ %f\ line\ %l\.
:set errorformat+=%m\ at\ %f\ line\ %l

You may need to install this module first via CPAN, or the Debian package libvi-quickfix-perl . This done, you can type :make after saving the file to check its syntax, and if errors are found, you can open the quicklist window with :copen to inspect the errors, and :cn and :cp to jump to them within the buffer.

Vim quickfix working on a Perl file

This also works for output from gcc , and pretty much any other compiler syntax checker that you might want to use that includes filenames, line numbers, and error strings in its error output. It's even possible to do this with web-focused languages like PHP , and for tools like JSLint for JavaScript . There's also an excellent plugin named Syntastic that does something similar.

Reading output from other commands

You can use :r! to call commands and paste their output directly into the buffer with which you're working. For example, to pull a quick directory listing for the current folder into the buffer, you could type:

:r!ls

This doesn't just work for commands, of course; you can simply read in other files this way with just :r , like public keys or your own custom boilerplate:

:r ~/.ssh/
:r ~/dev/perl/boilerplate/
Filtering output through other commands

You can extend this to actually filter text in the buffer through external commands, perhaps selected by a range or visual mode, and replace it with the command's output. While Vim's visual block mode is great for working with columnar data, it's very often helpful to bust out tools like column , cut , sort , or awk .

For example, you could sort the entire file in reverse by the second column by typing:

:%!sort -k2,2r

You could print only the third column of some selected text where the line matches the pattern /vim/ with:

:'<,'>!awk '/vim/ {print $3}'

You could arrange keywords from lines 1 to 10 in nicely formatted columns like:

:1,10!column -t

Really any kind of text filter or command can be manipulated like this in Vim, a simple interoperability feature that expands what the editor can do by an order of magnitude. It effectively makes the Vim buffer into a text stream, which is a language that all of these classic tools speak.
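
Since these are ordinary shell filters, you can prototype them outside Vim first and then apply them to a buffer with the same syntax; a quick sketch with made-up sample data:

```shell
# Reverse sort on the second column (what :%!sort -k2,2r does to a buffer).
printf 'b 2 x\na 1 y\nc 3 z\n' | sort -k2,2r

# Print the third field of lines matching /vim/ (as in :'<,'>!awk ...).
printf 'vim one two three\nemacs one two\n' | awk '/vim/ {print $3}'

# Align whitespace-separated fields into columns (as in :1,10!column -t).
printf 'alpha beta\nlonger_word c\n' | column -t
```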

There is a lot more detail on this in my "Shell from Vi" post.

Built-in alternatives

It's worth noting that for really common operations like sorting and searching, Vim has built-in methods in :sort and :grep , which can be helpful if you're stuck using Vim on Windows, but don't have nearly the adaptability of shell calls.


Vim has a diffing mode, vimdiff , which allows you to not only view the differences between different versions of a file, but also to resolve conflicts via a three-way merge and to replace differences to and fro with commands like :diffput and :diffget for ranges of text. You can call vimdiff from the command line directly with at least two files to compare like so:

$ vimdiff file-v1.c file-v2.c
Vim diffing a .vimrc file

Version control

You can call version control methods directly from within Vim, which is probably all you need most of the time. It's useful to remember here that % is always a shortcut for the buffer's current file:

:!svn status
:!svn add %
:!git commit -a

Recently a clear winner for Git functionality with Vim has come up with Tim Pope's Fugitive , which I highly recommend to anyone doing Git development with Vim. There'll be a more comprehensive treatment of version control's basis and history in Unix in Part 7 of this series.

The difference

Part of the reason Vim is thought of as a toy or relic by many programmers used to GUI-based IDEs is that it's seen as just a tool for editing files on servers, rather than as a very capable editing component for the shell in its own right. The composability of its built-in features with external tools on Unix-friendly systems makes it a text-editing powerhouse that sometimes surprises even experienced users.

[Oct 31, 2017] Understanding Shared Libraries in Linux by Aaron Kili

Oct 30, 2017 |
In programming, a library is an assortment of pre-compiled pieces of code that can be reused in a program. Libraries simplify life for programmers in that they provide reusable functions, routines, classes, data structures and so on (written by another programmer), which they can use in their programs.

For instance, if you are building an application that needs to perform math operations, you don't have to create a new math function for that, you can simply use existing functions in libraries for that programming language.

Examples of libraries in Linux include libc (the standard C library) or glibc (GNU version of the standard C library), libcurl (multiprotocol file transfer library), libcrypt (library used for encryption, hashing, and encoding in C) and many more.

Linux supports two classes of libraries, namely:

Static libraries – bound into a program statically at compile time.
Dynamic or shared libraries – loaded when a program is launched and mapped into memory at run time.

Dynamic or shared libraries can further be categorized into:

Dynamically linked libraries – the program is built against the shared library, and the dynamic loader loads the library (if it's not already in memory) when the program runs.
Dynamically loaded libraries – the program takes full control by loading libraries on demand, for example with dlopen().

Shared Library Naming Conventions

Shared libraries are named in two ways: the library name (a.k.a soname ) and a "filename" (absolute path to file which stores library code).

For example, the soname for libc is libc.so.6: lib is the prefix, c is a descriptive name, so means shared object, and 6 is the version. Its filename might be, for example, /lib64/libc-2.17.so. Note that the soname is actually a symbolic link to the filename.
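
The convention is easy to see with a dummy library (libdemo and the version numbers here are invented for illustration):

```shell
mkdir -p soname-demo

touch soname-demo/libdemo.so.1.0.0                  # "filename": the file holding the library code
ln -sf libdemo.so.1.0.0 soname-demo/libdemo.so.1    # soname: what the dynamic loader resolves at run time
ln -sf libdemo.so.1 soname-demo/libdemo.so          # linker name: what gcc -ldemo looks for at build time

ls -l soname-demo/
```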

Locating Shared Libraries in Linux

Shared libraries are loaded by the ld.so (or ld-linux.so) dynamic loader programs, whose exact name varies with architecture and version. In Linux, /lib/ld-linux.so.2 (or /lib64/ld-linux-x86-64.so.2 on 64-bit systems) searches for and loads all shared libraries used by a program.

A program can call a library using its library name or filename, and a library path stores directories where libraries can be found in the filesystem. By default, libraries are located in /usr/local/lib /usr/local/lib64 /usr/lib and /usr/lib64 ; system startup libraries are in /lib and /lib64 . Programmers can, however, install libraries in custom locations.

The library path can be defined in the /etc/ld.so.conf file, which you can edit with a command line editor.

# vi /etc/ld.so.conf

The line(s) in this file instruct the loader configuration to include the .conf files in /etc/ld.so.conf.d. This way, package maintainers or programmers can add their custom library directories to the search list.

If you look into the /etc/ld.so.conf.d directory, you'll see .conf files for some common packages (kernel, mariadb and postgresql in this case):

# ls /etc/ld.so.conf.d
kernel-2.6.32-358.18.1.el6.x86_64.conf  kernel-2.6.32-696.1.1.el6.x86_64.conf  mariadb-x86_64.conf
kernel-2.6.32-642.6.2.el6.x86_64.conf   kernel-2.6.32-696.6.3.el6.x86_64.conf  postgresql-pgdg-libs.conf

If you take a look at mariadb-x86_64.conf, you will see an absolute path to the package's libraries.

# cat mariadb-x86_64.conf

The method above sets the library path permanently. To set it temporarily, use the LD_LIBRARY_PATH environment variable on the command line. If you want to keep the changes permanent, then add this line in the shell initialization file /etc/profile (global) or ~/.profile (user specific).

# export LD_LIBRARY_PATH=/path/to/library/file
Managing Shared Libraries in Linux

Let us now look at how to deal with shared libraries. To get a list of all shared library dependencies for a binary file, you can use the ldd utility. The output of ldd is in the form:

library name =>  filename (some hexadecimal value)
filename (some hexadecimal value)  #this is shown when library name can't be read

This command shows all shared library dependencies for the ls command .

# ldd /usr/bin/ls
# ldd /bin/ls
Sample Output
linux-vdso.so.1 =>  (0x00007ffebf9c2000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003b71e00000)
librt.so.1 => /lib64/librt.so.1 (0x0000003b71600000)
libcap.so.2 => /lib64/libcap.so.2 (0x0000003b76a00000)
libacl.so.1 => /lib64/libacl.so.1 (0x0000003b75e00000)
libc.so.6 => /lib64/libc.so.6 (0x0000003b70600000)
libdl.so.2 => /lib64/libdl.so.2 (0x0000003b70a00000)
/lib64/ld-linux-x86-64.so.2 (0x0000561abfc09000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003b70e00000)
libattr.so.1 => /lib64/libattr.so.1 (0x0000003b75600000)

Because shared libraries can exist in many different directories, searching all of these directories every time a program is launched would be greatly inefficient; this is one of the potential disadvantages of dynamic libraries. Therefore a caching mechanism is employed, performed by the program ldconfig.

By default, ldconfig reads the content of /etc/ld.so.conf, creates the appropriate symbolic links in the dynamic link directories, and then writes a cache to /etc/ld.so.cache, which is then easily used by other programs.

This is very important especially when you have just installed new shared libraries, created your own, or created new library directories. You need to run the ldconfig command for the changes to take effect.

# ldconfig
# ldconfig -v   #shows the files and directories it processes

After creating your shared library, you need to install it. You can move it into one of the standard directories mentioned above and run the ldconfig command.

Alternatively, run the following command to create symbolic links from the soname to the filename:

# ldconfig -n /path/to/your/shared/libraries
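
You can also inspect the cache that ldconfig maintains without root privileges, using the -p switch (assuming a glibc-based system with ldconfig on the PATH):

```shell
# Print the cached libraries and pick out the C library entry.
ldconfig -p | grep 'libc.so'
```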

To get started with creating your own libraries, check out this guide from The Linux Documentation Project(TLDP) .

That's all for now! In this article, we gave you an introduction to libraries, explained shared libraries and how to manage them in Linux. If you have any queries or additional ideas to share, use the comment form below.

[Oct 28, 2017] Shared libraries with GCC on Linux

Oct 28, 2017 |
Most larger software projects will contain several components, some of which you may find use for later on in some other project, or that you just want to separate out for organizational purposes. When you have a reusable or logically distinct set of functions, it is helpful to build a library from it so that you don't have to copy the source code into your current project and recompile it all the time, and so you can keep different modules of your program disjoint and change one without affecting others. Once it's been written and tested, you can safely reuse it over and over again, saving the time and hassle of building it into your project every time.

Building static libraries is fairly simple, and since we rarely get questions on them, I won’t cover them. I’ll stick with shared libraries, which seem to be more confusing for most people.

Before we get started, it might help to get a quick rundown of everything that happens from source code to running program:

  1. C Preprocessor: This stage processes all the preprocessor directives . Basically, any line that starts with a #, such as #define and #include.
  2. Compilation Proper: Once the source file has been preprocessed, the result is then compiled. Since many people refer to the entire build process as compilation, this stage is often referred to as "compilation proper." This stage turns a .c file into an .o (object) file.
  3. Linking: Here is where all of the object files and any libraries are linked together to make your final program. Note that for static libraries, the actual library is placed in your final program, while for shared libraries, only a reference to the library is placed inside. Now you have a complete program that is ready to run. You launch it from the shell, and the program is handed off to the loader.
  4. Loading: This stage happens when your program starts up. Your program is scanned for references to shared libraries. Any references found are resolved and the libraries are mapped into your program.

Steps 3 and 4 are where the magic (and confusion) happens with shared libraries.

[Oct 13, 2017] 1.3. Compatibility of Red Hat Developer Toolset 6.1

Oct 13, 2017 |

Figure 1.1, "Red Hat Developer Toolset 6.1 Compatibility Matrix", illustrates the support for binaries built with Red Hat Developer Toolset on a certain version of Red Hat Enterprise Linux when those binaries are run on various other versions of this system. For ABI compatibility information, see Section 2.2.4, "C++ Compatibility" .

Figure 1.1. Red Hat Developer Toolset 6.1 Compatibility Matrix

[Oct 13, 2017] What gcc versions are available in Red Hat Enterprise Linux

Red Hat Developer Toolset
Notable quotes:
"... The scl ("Software Collections") tool is provided to make use of the tool versions from the Developer Toolset easy while minimizing the potential for confusion with the regular RHEL tools. ..."
"... Red Hat provides support to Red Hat Developer Tool Set for all Red Hat customers with an active Red Hat Enterprise Linux Developer subscription. ..."
"... You will need an active Red Hat Enterprise Linux Developer subscription to gain access to Red Hat Developer Tool set. ..."
Oct 13, 2017 |

Red Hat provides another option via the Red Hat Developer Toolset.

With the developer toolset, developers can choose to take advantage of the latest versions of the GNU developer tool chain, packaged for easy installation on Red Hat Enterprise Linux. This version of the GNU development tool chain is an alternative to the toolchain offered as part of each Red Hat Enterprise Linux release. Of course, developers can continue to use the version of the toolchain provided in Red Hat Enterprise Linux.

The developer toolset gives software developers the ability to develop and compile an application once to run on multiple versions of Red Hat Enterprise Linux (such as Red Hat Enterprise Linux 5 and 6). Compatible with all supported versions of Red Hat Enterprise Linux, the developer toolset is available for users who develop applications for Red Hat Enterprise Linux 5 and 6. Please see the release notes for support of specific minor releases.

Unlike the compatibility and preview gcc packages provided with RHEL itself, the developer toolset packages put their content under a /opt/rh path. The scl ("Software Collections") tool is provided to make use of the tool versions from the Developer Toolset easy while minimizing the potential for confusion with the regular RHEL tools.

Red Hat provides support to Red Hat Developer Tool Set for all Red Hat customers with an active Red Hat Enterprise Linux Developer subscription.

You will need an active Red Hat Enterprise Linux Developer subscription to gain access to Red Hat Developer Tool set.

For further information on Red Hat Developer Toolset, refer to the relevant release documentation.

For further information on Red Hat Enterprise Linux Developer subscription, you may reference the following links:
* Red Hat Discussion
* Red Hat Developer Toolset Support Policy

[Oct 13, 2017] Building GCC from source

Oct 13, 2017 |


I've built newer gcc versions for rhel6 for several versions now (since 4.7.x to 5.3.1).

The process is fairly easy thanks to Redhat's Jakub Jelinek fedora gcc builds found on koji

Simply grab the latest src rpm for whichever version you require (e.g. 5.3.1 ).

Basically you would start by determining the build requirements by issuing rpm -qpR src.rpm looking for any version requirements:

rpm -qpR gcc-5.3.1-4.fc23.src.rpm | grep -E '= [[:digit:]]'
binutils >= 2.24
doxygen >= 1.7.1
elfutils-devel >= 0.147
elfutils-libelf-devel >= 0.147
gcc-gnat >= 3.1
glibc-devel >= 2.4.90-13
gmp-devel >= 4.1.2-8
isl = 0.14
isl-devel = 0.14
libgnat >= 3.1
libmpc-devel >= 0.8.1
mpfr-devel >= 2.2.1
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
systemtap-sdt-devel >= 1.3

Now comes the tedious part: any package with a version requirement higher than what yum provides for your distro needs to be downloaded from koji, and the process repeated recursively until all dependency requirements are met.

I cheat, btw.

I usually repackage the rpm to contain a correct build tree, using the GNU facility for correctly placed and named in-tree requirements, so gmp/mpc/mpfr/isl (cloog is no longer required) are downloaded and untarred into the correct paths, and the new (bloated) tar is rebuilt into a new src rpm (with minor changes to the spec file) with no dependency on their packaged (rpm) versions. Since I know of no one using Ada, I simply remove the portions pertaining to gnat from the specfile, further simplifying the build process, leaving just binutils to worry about.
Gcc can actually build with older binutils, so if you're in a hurry, further edit the specfile to require the binutils version already present on your system. This will result in a slightly crippled gcc, but mostly it will perform well enough.
This works quite well mostly.


The simplest method for opening a src rpm is probably to install it so that everything lands under ~/rpmbuild, but I prefer:

mkdir gcc-5.3.1-4.fc23
cd gcc-5.3.1-4.fc23
rpm2cpio ../gcc-5.3.1-4.fc23.src.rpm | cpio -id
tar xf gcc-5.3.1-20160212.tar.bz2
cd gcc-5.3.1-20160212
cd ..
tar caf gcc-5.3.1-20160212.tar.bz2 gcc-5.3.1-20160212
rm -rf gcc-5.3.1-20160212
# remove gnat
sed -i '/%global build_ada 1/ s/1/0/' gcc.spec
sed -i '/%if !%{build_ada}/,/%endif/ s/^/#/' gcc.spec
# remove gmp/mpfr/mpc dependencies
sed -i '/BuildRequires: gmp-devel >= 4.1.2-8, mpfr-devel >= 2.2.1, libmpc-devel >= 0.8.1/ s/.*//' gcc.spec
# remove isl dependency
sed -i '/BuildRequires: isl = %{isl_version}/,/Requires: isl-devel = %{isl_version}/ s/^/#/' gcc.spec
# Either build binutils as I do, or lower requirements
sed -i '/Requires: binutils/ s/2.24/2.20/' gcc.spec
# Make sure you don't break on gcc-java
sed -i '/gcc-java/ s/^/#/' gcc.spec

You also have the choice to set prefix so this rpm will install side-by-side without breaking distro rpm (but requires changing name, and some modifications to internal package names). I usually add an environment-module so I can load and unload this gcc as required (similar to how collections work) as part of the rpm (so I add a new dependency).

Finally create the rpmbuild tree, place the files where they should go, and build:

yum install rpmdevtools rpm-build
cp * ~/rpmbuild/SOURCES/
mv ~/rpmbuild/{SOURCES,SPECS}/gcc.spec
rpmbuild -ba ~/rpmbuild/SPECS/gcc.spec


Normally one should not use a "server" OS for development - that's why you have fedora, which already comes with the latest gcc. I have some particular requirements, but you should really consider using the right tool for the task - rhel/centos to run production apps, fedora to develop those apps, etc.

[Oct 13, 2017] devtoolset-3-gcc-4.9.1-10.el6.x86_64.rpm

This is a supported by RHEL package similar to one available from academic Linux
Oct 13, 2017 |

View more details Hide details

Build Host
Build Date
2014-09-22 12:43:02 UTC
GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD
Available From
Product (Variant, Version, Architecture) Repo Label
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.7 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.6 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.5 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.4 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6 x86_64 rhel-server-rhscl-6-rpms
Red Hat Software Collections (for RHEL Workstation) 1 for RHEL 6 x86_64 rhel-workstation-rhscl-6-rpms
Red Hat Software Collections (for RHEL Server) from RHUI 1 for RHEL 6 x86_64 rhel-server-rhscl-6-rhui-rpms

[Oct 13, 2017] Installing GCC 4.8.2 on Red Hat Enterprise linux 6.5

Oct 13, 2017 |

suny6 , answered Jan 29 '16 at 21:53

The official way to get gcc 4.8.2 on RHEL 6 is to install the Red Hat Developer Toolset (yum install devtoolset-2), and in order to do that you need one of a number of Red Hat subscriptions.

You can check whether you have any of these subscriptions by running:

subscription-manager list --available


subscription-manager list --consumed

If you don't have any of these subscriptions, you won't succeed with "yum install devtoolset-2". However, luckily CERN provides a "back door" via their SLC6 repositories, which can also be used on RHEL 6. Run the three lines below as root, and you should be able to install it:

wget -O /etc/yum.repos.d/slc6-devtoolset.repo

wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-cern

yum install devtoolset-2

Once it's done completely, you should have the new development package in /opt/rh/devtoolset-2/root/.

answered Oct 29 '14 at 21:53

For some reason the mpc/mpfr/gmp packages aren't being downloaded. Just look in your gcc source directory, it should have created symlinks to those packages:
gcc/4.9.1/install$ ls -ad gmp mpc mpfr
gmp  mpc  mpfr

If those don't show up then simply download them from the gcc site:

Then untar and symlink/rename them so you have the directories like above.

Then when you ./configure and make, gcc's makefile will automatically build them for you.

[Mar 24, 2012] GCC celebrates 25 years with the 4.7.0 release

Mar 22, 2012

GCC 4.7.0 is out; it is being designated as a celebration of GCC's first 25 years.

When Richard Stallman announced the first public release of GCC in 1987, few could have imagined the broad impact that it has had. It has prototyped many language features that later were adopted as part of their respective standards -- everything from "long long" type to transactional memory. It deployed an architecture-neutral automatic vectorization facility, OpenMP, and Polyhedral loop nest optimization. It has provided the toolchain infrastructure for the GNU/Linux ecosystem used everywhere from Google and Facebook to financial markets and stock exchanges. We salute and thank the hundreds of developers who have contributed over the years to make GCC one of the most long-lasting and successful free software projects in the history of this industry.

New features include software transactional memory support, more C11 and C++11 standard features, OpenMP 3.1 support, better link-time optimization, and support for a number of new processor architectures. See the GCC 4.7 changes page for lots more information.

Continuity problems

Posted Mar 22, 2012 17:05 UTC (Thu) by jd (guest, #26381) [Link]

The egcs vs. gcc fiasco comes to mind, but IIRC there have been a number of major reworkings. Certainly, there is an unbroken lineage from the original release to the present day - that's indisputable. Equally, it's indisputable that GCC is one of the most popular and powerful compilers out there. By these metrics, the claims are entirely correct.

However, having said that, the modern GCC wouldn't pass the "heraldry test" and there have been more than a few occasions when politics have delayed progress or disrupted true openness. The first of these is really a non-issue unless GCC applies for a coat of arms, but the second is more problematic. As GCC grows and matures, the more politics interferes, the more likely we are to see splintering.

Indeed, rival FLOSS compiler projects are taking off already, suggesting that the splintering has become enough of a problem for other projects to be able to reach critical mass.

Personally, I'd like to see GCC celebrate a 50 year anniversary as the top compiler. Language frontend developers can barely keep up with GCC, they won't be able to keep up with other compilers as well. Maximum language richness means you want as few core engine APIs as possible where the APIs have everything needed to support the languages out there. GCC can do that and has done for some time, which makes it a great choice.

But the GCC team (and the GLibC team) could do with being less provincial and more open. Those will be key to the next 25 years.

Continuity problems

Posted Mar 22, 2012 17:17 UTC (Thu) by josh (subscriber, #17465) [Link]

I agree entirely. Personally, I think GCC would benefit massively from a better culture of patch review. As far as I can tell, GCC's development model seems designed around having contributors do enough work to justify giving them commit access, and *then* they can actually contribute. GCC doesn't do a good job of handling mailed patches or casual contributors.

On top of that, GCC still uses Subversion for their primary repository, rather than a sensible distributed version control system. A Git mirror exists, but doesn't point to it anywhere prominent. As a result, GCC doesn't support the model of "clone, make a series of changes, and publish your branch somewhere"; only people with commit access do their work on branches. And without a distributed system, people can't easily make a pile of small changes locally (as separate commits) rather than one giant change, nor can they easily keep their work up to date with changes to GCC.

Changing those two things would greatly reduce the pain of attempting to contribute to GCC, and thus encourage a more thriving development community around it.

(That leaves aside the huge roadblock of having to mail in paper copyright assignment forms before contributing non-trivial changes, but that seems unlikely to ever change.)

Continuity problems

Posted Mar 22, 2012 18:51 UTC (Thu) by Lionel_Debroux (subscriber, #30014) [Link]

Another thing that would reduce the pain to contribute to GCC is a code base with a lower entry barrier. Despite the introduction of the plugin architecture in GCC, which already lowered it quite a bit, the GCC code base remains held as harder to hack on, less modular, less versatile than the LLVM/Clang code base.

The rate of progress on Clang has been impressive: self-hosting occurred only two years ago, followed three months later by building Boost without defect macros, and six months later by building Qt. On the day g++ 4.7 is released, clang++ is the only compiler whose C++11 support can be said to rival that of g++ (clang++ doesn't support atomics and forward declarations for enums, but fully supports alignment).

GCC isn't alone in not having switched to a DVCS yet: LLVM and its sub-projects haven't either... However, getting commit access there is quite easy, and no copyright assignment paperwork is required.

Continuity problems

Posted Mar 23, 2012 3:20 UTC (Fri) by wahern (subscriber, #37304) [Link]

I've delved into both GCC and clang to write patches, albeit simple ones. GCC is definitely arcane, but both are pretty impenetrable initially. You can glance at the clang source code and fool yourself into thinking it's easy to hack, but there's no shortage of things to complain about.

Compiler writing is extremely well trodden ground. It shouldn't be surprising that it's fairly easy to go from 0-60 quickly. But it's a marathon, not a sprint. The true test of clang/LLVM is whether it can weather having successive generations of developers hack on it without turning into something that's impossible to work with. GCC has clearly managed this, despite all the moaning, and despite not being sprinkled with magic C++/OOP fairy dust. The past few years have seen tremendously complex features added, and clang/LLVM isn't keeping pace.

And as far as C++11 support, they look neck-and-neck to me:

Continuity problems

Posted Mar 22, 2012 20:59 UTC (Thu) by james_ulrich (guest, #83666) [Link]

Having lurked around gcc for close to 5 years, it seems to me that the whole patch-review culture simply stems from the fact that there is not a single person who cares about the compiler as a whole. Sure, individuals care about their passes (IRA stands out here as being well taken care of), but seldom about what happens outside of them. And when people do do reviews, it very much feels like the response is just "I'll ack it to get you off my back", which obviously doesn't do wonders for quality. Unless a Linus-of-GCC person comes along, I don't see much long-term future for the project.

The recent decision to move the project code base to C++ is also something that I think will actually hurt them badly in the long run. The GCC code base is very hard to read as-is and moving it to a language that is notorious for being hard to read and understand will not make things any better. (I'm well aware that some amazing pieces of code have been written in C++, but it is not a simple fix to the code cleanliness problem)

Continuity problems

Posted Mar 22, 2012 21:18 UTC (Thu) by HelloWorld (subscriber, #56129) [Link]

The GCC code base is very hard to read as-is and moving it to a language that is notorious for being hard to read and understand will not make things any better.
The GCC code base is actually a perfect example of things being convoluted because of missing functionality in the C language. C++, when used in a sensible way, is a way to fix this.

Continuity problems

Posted Mar 22, 2012 22:05 UTC (Thu) by james_ulrich (guest, #83666) [Link]

You mean C++ would magically make 2000+ line functions with variable declarations spanning over 50 lines easy to read? I think there are much lower hanging fruit in making the GCC code base readable before throwing C++ at it would be beneficial.

The decision to go with C++ seems (to me, an outside observer) to have been driven firstly by some people "because I like to code in C++" (I remember Ian Lance Taylor's name, but there were others pushing), rather than by a pressing need for a feature that would make the code clearer.

Continuity problems

Posted Mar 22, 2012 23:30 UTC (Thu) by elanthis (guest, #6227) [Link]

> You mean C++ would magically make 2000+ line functions with variable declarations spanning over 50 lines easy to read? I think there are much lower hanging fruit in making the GCC code base readable before throwing C++ at it would be beneficial.

Recompiling existing crappy C code with a C++ compiler does no such thing. It may very well provide the tools to rewrite those functions in a readable, sane way, which C cannot easily do.

The one clear winner in C++ is data structures and templates. I cannot stress the importance of that enough.

The second you have to write a data structure that uses nothing but void* elements, or which has to be written as a macro, or which has to be copied-and-pasted for every different element type, you have a serious problem.

GCC is a heavy user of many complex data structures, many of which are written as macros. Compare this to the LLVM/Clang codebase, where such data structures are written once in clean, readable, testable, debuggable C++ code, and reused in many places with an absolute minimum of fuss or danger.

I present you with the following link, which illustrates a number of very useful data structures in LLVM/Clang that are used all over the place, and which either do not exist in GCC, exist but are a bitch to use correctly, or are copy-pasted all over the place:

Continuity problems

Posted Mar 23, 2012 6:36 UTC (Fri) by james_ulrich (guest, #83666) [Link]

I can see that the structures and constructs used in compilers lend themselves very well to the features of C++.

My point is that the reason GCC is a mess is not that it is written in C. Even with C++, 2000-line functions need to be logically split, and 20-line if() statements with subexpressions nested 5 levels deep also need to be split to be readable. These and other de-facto coding-style idiosyncrasies need to be fixed (or it at least needs to be agreed not to write code like that), which is in no way affected by the C/C++ decision.

GCC also has this "property", let's say, that code is never actually rewritten, only new methods added in parallel to the old ones. Classic examples are the CC_FLAGS/cc0 thing and, best of all, reload. Everyone knew it sucked 15 years ago, yet only now are motions being made, in the form of LRA, to replace it (which, BTW, is in no way motivated by using C++). The same can be said for the old register allocator, combine, etc. I somehow doubt that C++ alone would magically motivate anyone to start rewriting these old, convoluted but critical pieces.

Based on past observations, my prediction for GCC-in-C++ is that all the old ugly code will simply stay and the style will not really change, but now it will be ugly code mixed with C++ constructs.

Continuity problems

Posted Mar 23, 2012 2:00 UTC (Fri) by HelloWorld (subscriber, #56129) [Link]

I would have responded to your posting, but elanthis was faster at making my point.

Continuity problems

Posted Mar 23, 2012 3:31 UTC (Fri) by wahern (subscriber, #37304) [Link]

Indeed. If you read the clang source code, instead of 2000-line functions you have things implemented with something approximating 2000 single-line functions. Both are impenetrable. Where GCC abuses macros, clang/LLVM abuses classing and casting. (You wouldn't think that possible, but analyze the clang code for a while and you'll see what I mean.)

Continuity problems

Posted Mar 22, 2012 23:59 UTC (Thu) by slashdot (subscriber, #22014) [Link]

It's hard to read BECAUSE it is not in C++, obviously.

Though the more fundamental reason it's hard to read is that it's not structured as a library the way LLVM/Clang is, so the developers don't need to write clean reusable code with documented interfaces, and it shows.

The real question is: does it make sense to try to clean up, modularize and "C++ize" gcc?

Or is it simpler and more effective to just stop development on GCC, and move to work on a GPL- or LGPL-licensed fork of LLVM, porting any good things GCC has that LLVM doesn't?

Continuity problems

Posted Mar 23, 2012 6:59 UTC (Fri) by james_ulrich (guest, #83666) [Link]

Why does everyone pass around C++ as some magic bullet that fixes all ugliness now and forever? It doesn't and it never will. The only lesson to be learnt from LLVM is that, when *starting from scratch*, a compiler can be well written in C++. Extrapolating that to "GCC's main problem is that it is not written in C++, and rewriting it in C++ will fix all our problems" is plain idiotic.

Even if you start coding in C++, you still need to think about how to split long functions, ridiculous if() statements and make other general ugliness clearer. Take this (random example, there are much worse ones):

if (REG_P (src) && REG_P (dest)
&& ! fixed_regs[REGNO (src)]
&& ! fixed_regs[REGNO (dest)]

How exactly will C++ make this more obvious? Of course it won't.

And, no, GCC not being a library is not its main problem either.

Continuity problems

Posted Mar 23, 2012 4:06 UTC (Fri) by Cyberax (subscriber, #52523) [Link]

Rewriting stuff in another language is actually a good way to clean up the code.

Which gcc badly needs.
Continuity problems

Posted Mar 23, 2012 13:27 UTC (Fri) by jzbiciak (✭ supporter ✭, #5246) [Link]

Allow the cynic in me to make a possibly unfounded comment:

For some of GCC's ugliness, more of the improvement may come from the "rewrite" part than the "in C++" part. The "in C++" part just encourages a more thorough refactoring and rethinking of the problem than a superficial tweaking-for-less-ugly.

In any case, nothing will fix GNU's ugly indenting standards as long as the language has a C/C++ style syntax. ;-)

"Heraldry test?"

Posted Mar 22, 2012 22:26 UTC (Thu) by flewellyn (subscriber, #5047) [Link]

I'm sorry, I didn't catch the reference. What is the "heraldry test"?

"Heraldry test?"

Posted Mar 23, 2012 2:26 UTC (Fri) by ghane (subscriber, #1805) [Link]

In this case the current GCC is from the bastard (and disowned) son of the family (EGCS), who took over the coat of arms when the legitimate branch of the family died out, and was blessed by all.

My grandfather's axe, my father changed the handle, I changed the blade, but it is still my grandfather's axe.

"Heraldry test?"

Posted Mar 23, 2012 20:43 UTC (Fri) by JoeBuck (subscriber, #2330) [Link]

The use of the metaphor is mistaken. Much of the code in the first egcs release that wasn't in GCC 2.7.x was already checked in to the FSF tree, and merges continued to take place back and forth. Thinking that GCC was somehow a completely new compiler with the same name after the EGCS/GCC remerger is just wrong. Furthermore, it was the same people developing the compiler before and after. What really happened was that there was a management shakeup.

Using the GNU Compiler Collection

RHEL 4 document

This manual documents how to use the GNU compilers, as well as their features and incompatibilities, and how to report bugs. It corresponds to GCC version 3.4.4. The internals of the GNU compilers, including how to port them to new targets and some information about how to write front ends for new languages, are documented in a separate manual.

Comments on gcc

To use the gcc package, you MUST install all of the SUNW developer packages that come on Sun's Solaris CDs.

The software used to build gcc 3.4.2 (the steps are very similar for earlier versions of gcc) was all from existing packages. These include the gcc-3.3.2, bison-1.875d, flex-2.5.31, texinfo-4.2, autoconf-2.59, make-3.80, and automake-1.9 packages. It may also be important to install the libiconv-1.8 package to use some of the languages in gcc 3.4.2. See also the comment below about the libgcc package.

There are differences between this version of gcc and previous 2.95.x versions on Solaris systems. For details, go to

In particular, gcc-3.4.2 offers support for the creation of 64-bit executables when the source code permits it. Programs like top, lsof, ipfilter, and others support, and may need, such compiles to work properly when running the 64-bit versions of Solaris 7, 8, and 9 on SPARC platforms. In some cases, simply using the -m64 flag for gcc during compiles (which may require either editing the Makefiles to add -m64 to CFLAGS or just doing gcc -m64 on command lines) works.

When you compile something with any of these compilers, the executable may end up depending on one or more of the libraries in /usr/local/lib. An end user may need these libraries, but not want the entire gcc file set. I have provided a package called libgcc (right now this is for gcc-3.3.x, but a version for 3.4.x is being created) for each level of Solaris. This contains all the files from /usr/local/lib generated by a gcc package installation. An end user can install this, or a subset. You can determine whether one of these libraries is needed by running ldd on an executable to see its library dependencies.

I am happy to hear about better ways to create gcc or problems that may be specific to my packages. Detailed problems with gcc can be asked in the newsgroup or related places.

Phil's Solaris hints

  1. If you want an "#ifdef solaris", the portable way is
     #if defined (__SVR4) && defined (__sun) 
    This should work on gcc, Sun cc, and lots of other compilers, on both SPARC and Intel.

    If for some reason you want to know that the Sun Forte CC (C++) compiler is being used, something that seems to work is

     #if defined(__SUNPRO_CC)
    Whereas for Forte cc (regular C), you can use
     #if defined(__SUNPRO_C) 

  2. Use "gcc -Wall -O -Wno-unknown-pragmas" if you want a 'lint'-type level of warnings while you compile. The pragmas bit is to stop warnings about Sun's stupid pragmas in the X headers.

Developing for Linux on Intel - For many Windows* developers, Linux* presents a learning challenge. Not only does Linux have a different programming model, but it also requires its own toolchain, as programmers must leave behind the Visual Studio* (VS) or Visual Studio* .NET (VS.NET) suites and third-party plug-in ecosystem. This article helps Windows developers understand the options available as they seek to replicate, on Linux or Solaris, the rich and efficient toolchain experience they've long enjoyed on Windows.

GCC on Solaris Tips & Tricks

Here at the U of C, we have a big grid of Sun Blade 1000 workstations, with gcc and g++ for compilers. There are some subtle differences between GCC/Solaris and GCC/x86-Linux, and this is a list of what I've come across so far.

This file describes differences between GNU compilers on x86 machines and Solaris machines. These are all from experience, so who knows how accurate they are.

Note that I'm assuming the code is being developed on a Linux box, and then later being ported.

Compiler and tools tricks

Textbooks are full of good advice:

Use other aids as well. Explaining your code to someone else (even a teddy bear) is wonderfully effective. Use a debugger to get a stack trace. Use some of the commercial tools that check for memory leaks, array bounds violations, suspect code and the like. Step through your program when it has become clear that you have the wrong picture of how the code works.
- Brian W. Kernighan, Rob Pike, The Practice of Programming, 1999 (Chapter 5: Debugging)

Enable every optional warning; view the warnings as a risk-free, high-return investment in your program. Don't ask, "Should I enable this warning?" Instead ask, "Why shouldn't I enable it?" Turn on every warning unless you have an excellent reason not to.
- Steve Maguire, Writing Solid Code, 1993

Sound familiar? But with which options? This page tries to answer that kind of question.

Constructive feedback is welcome.

g++, Solaris, and X11


gcc 2.95.1
Solaris 2.7
c++ -Wall -g -W -Wpointer-arith -Wbad-function-cast -Wcast-align -Wmissing-prototypes -Wstrict-prototypes -c -o glx/i_figureeight.o  -DHAVE_CONFIG_H -DDEF_FILESEARCHPATH=\"/usr/remote/lib/app-defaults/%N%C%S:/usr/remote/lib/app-defaults/%N%S\"  -I. -I.. -I../../xlock/ -I../.. -I/usr/openwin/include  -I/usr/remote/include/X11 -I/usr/remote/include -I/usr/dt/include -g -O2 ../../modes/glx/
In file included from ../../xlock/xlock.h:144,
                 from ../../modes/glx/i_twojet.h:7,
                 from ../../modes/glx/i_threejet.h:3,
                 from ../../modes/glx/i_threejetvec.h:3,
                 from ../../modes/glx/i_figureeight.h:3,
                 from ../../modes/glx/
/usr/openwin/include/X11/Xlib.h:2063: ANSI C++ forbids declaration `XSetTransientForHint' with no type

I maintain xlock, and older versions no longer compile out of the box.
I am not in control of the include files that Sun distributes in
/usr/openwin.  A warning I could live with more easily.

The only way I see around this was to require -fpermissive if using g++
on Solaris.  My worry is that -fpermissive may not be supported by all
versions of g++ and may cause another error. 
 /X\  David A. Bagley
(( X  [email protected]
 \X/  xlockmore and more

gcc 4.0 has been released.

The list of improvements is long - better optimization, more analysis and error checking, and the Fortran 90 compiler everybody has been waiting for. See the changelog for the full list.
Signed releases
(Posted Apr 22, 2005 5:14 UTC (Fri) by guest yem) (Post reply)

Still no strong signature for the tarballs. What is with these guys?
PS: congrats!

Signed releases
(Posted Apr 22, 2005 6:43 UTC (Fri) by subscriber nix) (Post reply)

Er, the tarballs all have OpenPGP signatures.
(You can't upload anything to the site or its mirrors anymore without that.)

(Posted Apr 22, 2005 8:12 UTC (Fri) by guest yem) (Post reply)

Ah, so they do. Sorry. The main site was down, and the mirror I checked isn't carrying the signatures.

All is well :-)

gcc 4.0 available
(Posted Apr 22, 2005 5:21 UTC (Fri) by guest xoddam) (Post reply)

Congratulations and thanks to the gcc maintainers. This will be a big
step forward as it stabilises and becomes a preferred compiler. Though
I'm sure some people will go on using gcc 2.95 forever :-)

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 6:27 UTC (Fri) by subscriber dank) (Post reply)

Hey, don't knock gcc-2.95.3.
It was a very good release in many ways,
and on some benchmarks, beats every
later version of gcc so far, up to
and including gcc-3.4.
(I haven't tested gcc-4.0.0 yet, but
I gather it won't change that. I'm hoping gcc-4.1.0 finally
knocks gcc-2.95.3 off its last perch, myself.)

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 6:45 UTC (Fri) by subscriber nix) (Post reply)

It's when the RTL optimizations start getting disabled that you'll see real speedups. Right now most of them are enabled but not doing as much as they used to, which is why GCC hasn't slowed down significantly in 4.x despite having literally dozens of new optimization passes over 3.x.

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 11:29 UTC (Fri) by guest steven97) (Post reply)

You are making two assumptions that are wrong:
1) rtl optimizers will be disabled. It appears this won't happen any
time soon.
2) rtl optimizers do less, so they consume less time. I wish that were
true. There is usually no relation between the number of transformations
and the running time of a pass. Most of the time is in visiting
instructions and doing the data flow analysis. That takes time even if
there isn't a single opportunity for a pass to do something useful.

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 12:39 UTC (Fri) by subscriber nix) (Post reply)

1) rtl optimizers will be disabled. It appears this won't happen any time soon.

I'm aware that you're involved in an ongoing flamewar, er, I mean animated discussion in this area, and I'm staying well out of it :)

If the damned things weren't so intertwined they'd be easier to ditch: and indeed it's their intertwined and hard-to-maintain nature that makes it all the more important to try to ditch them (or at least simplify them to an absolute minimum).

Obviously some optimizations (peepholes and such) actually benefit from being performed at such a low level, but does anyone really think that loop analysis, for instance, should be performed on RTL? It is, but its benefits at that level are... limited compared to its time cost.

2) rtl optimizers do less, so they consume less time. I wish that were true. There is usually no relation between the number of transformations and the running time of a pass. Most of the time is in visiting instructions and doing the data flow analysis. That takes time even if there isn't a single opportunity for a pass to do something useful.

Er, but the compiler's not slowed down significantly even with optimization on. Are the tree-ssa passes really so fast that they add nearly no time to the compiler's runtime? My -ftime-report dumps don't suggest so.

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 18:02 UTC (Fri) by guest steven97) (Post reply)

Most tree optimizers _are_ fast, but not so fast that they consume no
time at all. But they optimize so much away that the amount of RTL
produced is less. If that is what you had in mind when you said "RTL
optimizers do less", then yes, there is just less RTL to look at, so
while most RTL passes still look at the whole function, they look at a
smaller function most of the time. That is one reason.

The other reason why GCC4 is not slower (not much ;-) than GCC3 is that
many rather annoying quadratic algorithms in the compiler have been
removed. With a little effort, some of the patches for that could be
backported to e.g. GCC 3.4, and you'd probably get a significantly faster
GCC3. Other patches were only possible because there is an optimization
path now before RTL is generated.

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 21:02 UTC (Fri) by subscriber nix) (Post reply)

That's what I meant, yes, and it's so intuitively obvious that it amazed me to see you disagreeing. Obviously less code -> less work -> less time!

I didn't mean the RTL optimizers had become intrinsically faster (except inasmuch as code's been ripped out of them).

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 16:03 UTC (Fri) by subscriber Ross) (Post reply)

I'm not sure he was talking about the speed of the compiler. I read it as
talking about the quality of the generated code. I could easily be wrong.

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 17:06 UTC (Fri) by guest mmarq) (Post reply)

" I'm not sure he was talking about the speed of the compiler. I read it as talking about the quality of the generated code. "

And that is what should matter the most: *the end user*. Because if people complain about the speed of the compilation process, they should change to a better computer, perhaps an NForce 4/5 board for Athlon64 or Pentium 4, or the latest VIA chipsets with support for SLI and those dual-core CPUs coming out soon!...

... heh, right!, support for those beasts isn't good enough on Linux right now! But that isn't much different from how things were in, say, 2001!...

I've no intention to add to any flamewar, but my point, as exposed above, is that the community tends to argue heavily over less important issues. The Linux commercial parties are battling for scraps while the majority of end users not only don't know, but don't want to know, because Linux, like Unix, is viewed as something for geeks or bearded gurus..., and worst of all, standards move at the speed of a snail, because the philosophy is to add features and avoid standards.

There are hundreds of distros, but I haven't seen one that publishes a report of tested hardware configurations (if anyone knows of one, please link it), or cares about those, or cares about being *religious* about standards. That is the only way to expose the masses of low-tech end users to the same 'methods' and 'ways' for a much longer period of time, giving distros the chance to get very good at the 'interface for low-tech users', and in consequence gain a larger adoption percentage.

The Open Source community is closing itself inside its own technical space! And when (if) that happens completely, then it's another Unix story, almost a carbon copy.

Hardware databases
(Posted Apr 22, 2005 21:16 UTC (Fri) by subscriber sdalley) (Post reply)

Novell/Suse makes a reasonable attempt, see the links on .

Hardware databases
(Posted Apr 24, 2005 17:09 UTC (Sun) by guest mmarq) (Post reply)

" Novell/Suse makes a reasonable attempt,... "
I've done a search on that site, under 'Certified Hardware', for hardware/software from the companies ASUSTEK, ECS, EPOX, DFI, GIGABYTE and INTEL, with the keyword "motherboard", and without any keyword, which matches every piece of hardware.

Since those are manufacturers that also make graphics boards besides mobos, they would represent perhaps more than 70% of a common desktop system, and perhaps more than 70% of the market, well ahead of the integrators HP/Compaq, IBM, Gateway and DELL all put together. And since some of those manufacturers also make server 'iron', mobos or systems, I believe that perhaps not far from 90% of the *entire* (server+desktop) deployed base is represented.

The only results I got were for ASUSTEK, showing old network servers, and INTEL, showing LAN drivers (net adapters), RAID adapters and network servers, some aging a lot??!!!

Understanding what line of business Novell is in, I still consider this very far from reasonable... not reasonable even for them, if they want to survive in the medium to long term!

gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 21:36 UTC (Fri) by subscriber nix) (Post reply)

The standards-compliance changes in GCC at least (and there have been many) weren't a matter of making GCC reject noncompliant code so much as they were one of making it accept compliant code it'd been wrongly rejecting. I mean, nobody waded into the compiler thinking `Shiver me lexers, what widely-used extension shall I remove today? Arrr!' --- it's more that, say, the entire C++ parser was rewritten (thank you, Mark!), and in the process a bunch of extensions got dropped because they were rarely-used and didn't seem worth reimplementing, and a bunch of noncompliant nonsense that G++ had incorrectly accepted was now correctly rejected, *simply because accepting the nonsense had always been a bug*, just one that had previously been too hard to fix.

(Oh, also, GCC is very much more than `the Linux compiler'. All the free BSDs use it, many commercial Unix shops use it, Cygwin uses it, Apple relies on it for MacOSX, and it's very widely used in the embedded and (I believe) avionics worlds. Even if, with the wave of a magic wand, the Hurd was perfected and Linux dissolved into mist tomorrow, GCC would still be here.)

gcc2.95.3 was a good vintage...
(Posted Apr 22, 2005 21:22 UTC (Fri) by subscriber nix) (Post reply)

Well in that case he's likely wrong :) Even though the focus of much of the 3.x series was standards-compliance, not optimization, and 3.x didn't have tree-ssa, there have been some notable improvements in that time, like the new i386 backend.

Alas, major performance gains on register-poor arches like IA32 may be hard to realize without rewriting the register allocator --- and the part of GCC that does register allocation (badly) also does so much else, and is so old and crufty, that rewriting it is a heroic task that has so far defeated all comers...

gcc-2.95.3 was a good vintage...
(Posted Apr 25, 2005 6:38 UTC (Mon) by guest HalfMoon) (Post reply)

Alas, major performance gains on register-poor arches like IA32 may be hard to realize without rewriting the register allocator ...

Then there's Opteron, or at least AMD64 ... odd how by the time GCC starts to get serious support for x86, the hardware finally started to grow up (no thanks to Intel of course).

gcc-2.95.3 was a good vintage...
(Posted Apr 24, 2005 20:35 UTC (Sun) by subscriber dank) (Post reply)

Yes, I meant the quality of the generated code.(I care about the speed of compilation, too, but gcc-4.0 is doing fine in that area, I think.)

I'd love to switch to the new compiler, because I value its improved standards compliance, but it's hard for me to argue for a switch when it *slows down* important applications.

I don't need *better* code before I switch, but I do need to verify there are no performance regressions. And sadly, there are still some of those in apps compiled with gcc-3.4.1 compared to gcc-2.95.3. As I said in my original post, I don't expect later versions to definitively beat 2.95.3 until gcc-4.1.

The whole point of gcc-4.0 is to shake out the bugs in the new tree optimization stuff. I am starting to build and test all my apps with gcc-4.0 and plan to help resolve any problems I find, because I want to be ready for gcc-4.1.

Strings are now unavoidably constants
(Posted Apr 22, 2005 11:35 UTC (Fri) by subscriber gdt) (Post reply)

The removal of the -fwritable-strings option will flush out any code yet to be moved to ANSI C. It will be interesting to see how much there is.

Our user group had an experience this week of a program SEGVing from writing to a string constant. The beginner programmer had typed in example code from a C textbook which was popular five years ago, and was obviously confused and concerned that it didn't work. So not all pre-ANSI code will be old.

Strings are now unavoidably constants
(Posted Apr 22, 2005 21:39 UTC (Fri) by subscriber nix) (Post reply)

The removal of the -fwritable-strings option will flush out any code yet to be moved to ANSI C.

I think the removal of -traditional is more likely to do that --- and that happened in 3.3.x ;)

Strings are now unavoidably constants
(Posted Apr 22, 2005 22:38 UTC (Fri) by subscriber gdt) (Post reply)

Sorry, I expressed myself poorly. There's a lot of code out there that is K&R with ANSI function definitions. I'm interested to see how many of these break from this semantic change.

I've no idea if it is a little or a lot. It will be interesting to see.

Strings are now unavoidably constants
(Posted Apr 23, 2005 21:30 UTC (Sat) by subscriber nix) (Post reply)

Well, Apple's preserved the option in their tree (used for MacOS X) because they have some stuff they know breaks...

Recommended Links


gcc for Win32

GCC Development Toolchain for x86-win32 targets

Welcome to the GCC Library

GNU Compilers on Win32

The Minimalist GNU Win32 Package is not a compiler or a compiler suite.

The Minimalist GNU Win32 Package (or Mingw) is simply a set of header files and initialization code which allows a GNU compiler to link programs with one of the C run-time libraries provided by Microsoft. By default it uses CRTDLL, which is built into all Win32 operating systems. This means that your programs are small, stand-alone, and reasonably quick. I personally believe it is a good option for programmers who want to do native Win32 programming, either of new code or when creating a native port of an application. For example, the latest versions of gcc itself, along with many of the supporting utilities, can be compiled using the Mingw headers and libraries.

Visit the Mingw mailing list on the Web at Also see the Mingw32 FAQ at

Mingw was mentioned (in passing, down near the bottom... but that's enough for me) in an interview at O'Reilly. Aside from another mention in a Japanese magazine called "C Magazine" (as a sidebar in an article about Cygwin) this is only the second time I know of that Mingw was mentioned by any 'serious' media. Neat huh?

Mingw based Compilers

Both of these compiler suites, based on gcc, were built with Mingw32 and include Mingw32.

Previous Releases

These are old and only of historical interest. Real developers interested in the source code should get the much newer Mingw runtime from Mumit Khan's ftp site.


The Cygwin Project by Cygnus Solutions is an attempt to provide a UNIX programming environment on Win32 operating systems. As part of this effort, the suite of GNU software development tools (including gcc, the GNU C/C++ compiler) has been ported to Win32. The Cygwin project led directly to the first versions of gcc that could produce Win32 software, and allowed me to set up the first version of Mingw32.

For more information on Cygwin, including where to download it and how to subscribe to the Cygwin mailing list, visit the Cygwin Project Page.

Also try this page for more information about GNU programming tools on Win32, and how to install and use them.

Free Win32 Compilers (Other than gcc)

I would like to list a bunch of them but... this is all I could find.

A Win32 Programming Tutorial

Under Construction...

A tutorial on how to use GNU tools and other free tools to write Win32 programs.

Extra Utilities

I also have some pointers and downloads of extras, useful tools and alternatives for GNU-Win32 or Minimalist GNU-Win32.

Slashdot GCC 4.0.0 Released

Re:Misplaced blame (Score:4, Funny)
by Screaming Lunatic (526975) on Friday April 22, @04:55AM (#12311278)
Blame the standards committee, not the GCC maintainers.

Insightful? Jesus eff-ing Christ. Now the slashbots don't like standards. I bet you wouldn't be presenting the same argument if this discussion was about the transition from MSVC 6.0 to 7.0/7.1.

Funny? Jesus eff-ing Christ. When did pointing out the hypocrisy of slashdot group think become funny? I don't get which part of my original statement is funny.

Re:Misplaced blame (Score:5, Funny)
by Mancat (831487) on Thursday April 21, @11:40PM (#12310121)
Mechanic: Sir, your car is ready.

Customer: Thanks for fixing it so quickly!

Mechanic: We didn't fix it. We just brought it up to standards. Oh, by the way, your air conditioning no longer works, and your rear brakes are now disabled.

Customer: Uhh.. What?

Mechanic: That's right. The standard refrigerant is now R-134A, so we removed your old R-12 air conditioning system. Also, disc brakes are now standard in the automotive world, so we removed your drum brakes. Don't drive too fast.

Customer: What the fuck?

Mechanic: Oh, I almost forgot. Your car doesn't have airbags. We're going to have to remove your car's body and replace it with a giant tube frame lined with air mattresses.


Build-Install OpenSolaris, by Rich Teer

This is the first of two articles in which we describe how to acquire and build the source code for OpenSolaris. The first article provides all the necessary background information (terminology, where to get the tools, and so on) and describes a basic compilation and installation, and the second article will describe a more complicated compilation and installation.

These articles describe how to build and install OpenSolaris; they are not intended to be an "OpenSolaris developers' guide", so information beyond that which is required for building and installing OpenSolaris is not included. This should not be seen as a problem, however, because (as with most other large-scale open source projects) the number of active OpenSolaris developers who will need this knowledge is likely to be small compared to the number of people who will want to build it for their own edification.

These articles assume at least a passing familiarity with building major software projects and some C programming. It is unlikely that someone who is struggling to compile "Hello World" would be trying to compile OpenSolaris! However, we will otherwise assume no knowledge of building Solaris, and describe all the necessary steps.

Developing for Linux on Intel - For many Windows* developers, Linux* presents a learning challenge. Not only does Linux have a different programming model, but it also requires its own toolchain, as programmers must leave behind the Visual Studio* (VS) or Visual Studio* .NET (VS.NET) suites and third-party plug-in ecosystem. This article helps Windows developers understand the options available as they seek to replicate, on Linux, the rich and efficient toolchain experience they've long enjoyed on Windows.


Re:i'm having horrible flashbacks... (Score:3, Interesting)
by den_erpel (140080) on Friday April 22, @01:24AM (#12310625)
( | Last Journal: Tuesday July 29, @03:24AM)

Ah things no longer compiling :) True, it was very annoying and made you go through an extra code review while porting your code forward.

In the long term, I think it was a very good thing: coding C (and C++, but I didn't have that much experience with that) became much stricter and, in my experience, that removes a lot of possible problems later on.

If someone had a lot of problems porting code from 2.95 to 3.2, that code needed to be reviewed anyway. It kind of removes the "boy" from "cowboys" among coders (experience drawn from not-so-embedded systems).

Based on the remarks the compiler produced for our embedded code during the switch (they made a lot of sense), and with gcc becoming more strict, we now even compile everything with -Werror.

In our deeply embedded networking code, we got a speed improvement of 20% just by switching to 3.4 (from 3.3) :) I am going to try to compile a new PowerPC toolchain one of these days...

Go GCC! [ Reply to This | Parent ]



The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.

Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to the respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: December 26, 2017