May the source be with you, but remember the KISS principle ;-)
Bigger doesn't imply better. Bigger often is a sign of obesity, of lost control, of overcomplexity, of cancerous cells

Python -- Scripting language with generators and coroutines

A new competitor seemed to emerge out of the woodwork every month or so. The first thing I would do, after checking to see if they had a live online demo, was look at their job listings. After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was.
  • The safest kind were the ones that wanted Oracle experience. You never had to worry about those.
    You were also safe if they said they wanted C++ or Java developers.
  • If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers.
  • If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.

-- Paul Graham, co-founder of Viaweb

This is the fourth page of an ongoing series of pages covering major scripting language topics for Unix/Linux system administrators (others cover Unix shells, Perl, and TCL), based on Professor Bezroukov's lectures.

Python is now becoming a language that a Unix/Linux sysadmin must know at least at a superficial level, as for many users it is the primary language, either for development or for writing supporting scripts. Python has been hailed as a really easy language to learn. That is not completely true, but it got a strong foothold at universities (repeating the path of Pascal in this regard, but on a new level). The rest is history.

As most sysadmins know Perl, Python should be easy to learn at a basic level, as the key concepts are similar: Python is a language influenced by Perl, incorporating some European and, more specifically, Niklaus Wirth's ideas about programming languages. Python's core syntax and certain aspects of its philosophy were also influenced by ABC.

But in reality it is pretty difficult, and even annoying, to learn for accomplished Perl programmers. You feel like an accomplished ballroom dancer put on an ice rink: you need to re-learn a lot of things, and fall many times. Moreover, all the hype about Python being easier to read and understand is just that: programming language hype. It reflects one thing: the inability to distinguish "programming in the small" from "programming in the large". I think Philip J. Guo put it well (Philip Guo -- Hey, your Python code is unreadable!):

I argue that code written in Python is not necessarily any more easily readable and comprehensible than code written in other contemporary languages, even though one of Python's major selling points is its concise, clear, and consistent syntax. While Python code might be easier to comprehend 'in-the-small', it is not usually any easier to comprehend 'in-the-large'.

Moreover, Python programs often suffer from abuse of OO (sometimes to a horrible degree), leading to OO spaghetti and programs several times more complex than they should be.

Python programs can be decomposed into modules, statements, expressions, and objects, as follows:

  1. Programs are composed of modules.
  2. Modules contain statements.
  3. Statements contain expressions.
  4. Expressions create and process objects.
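The four levels above can be seen in a few lines (a minimal sketch; the names are illustrative):

```python
import math                      # 1. programs are composed of modules

radius = 2.0                     # 2. modules contain statements
area = math.pi * radius ** 2     # 3. statements contain expressions
print(isinstance(area, float))   # 4. expressions create and process objects → True
```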

The first versions of Python did not introduce any new ideas and by and large were just a cleaner rehash of Perl's ideas, with the addition of the idea of modules from Modula-3. It was released in 1991, the same year as Linux. Wikipedia has a short article about Python history: History of Python - Wikipedia, the free encyclopedia

But starting with version 2.2 it added support for semi-coroutines in a platform-independent way (via generators, a concept inspired by Icon) and became a class of its own. I think that this makes Python in certain areas a better scripting language than the other members of the "most popular troika" (Perl, PHP and JavaScript). The availability of semi-coroutines favorably distinguishes Python from Perl.
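A minimal sketch of what such a generator (semi-coroutine) looks like in practice; the function name is illustrative:

```python
def countdown(n):
    """A generator: execution suspends at each yield and resumes on demand."""
    while n > 0:
        yield n       # hand a value to the caller, preserving local state
        n -= 1

# The caller pulls values lazily; nothing runs until iteration starts.
print(list(countdown(3)))  # → [3, 2, 1]
```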

Python was lucky that for some time it enjoyed the full support of Google (which still employs Python's creator, Guido van Rossum). In addition, Microsoft also supported it indirectly (via IronPython and support in Visual Studio and other tools like Expression Web). That created some money flow for development and put Python in a much better position than Perl, which, after it lost O'Reilly, does not have a powerful corporate sponsor, and whose development is lingering in obscurity.

Even in 2017 Python still enjoys some level of support from Google, and there is no similar sponsor for either Ruby, R, or Perl. Of modern languages only JavaScript survived semi-abandoned status (after the collapse of Netscape), but it paid a heavy price both in terms of speed of development of the language and popularity :-(.

One of the main reasons that Python is so popular is the so-called "first language effect": the language that universities teach students as their first language has a tremendous advantage over alternatives.

Python was adopted as the first language for teaching programming at many universities, and that produces a steady stream of language users and some enthusiasts.

The Python interpreter has an interactive mode, suitable for small examples, and it is more or less forgiving: there is no obligatory semicolon at the end of statements as in Perl, and no C-style curly-bracket-delimited blocks (the source of a lot of grief for beginners, as a missing curly bracket is difficult to locate). It also has a more or less regular lexical structure and simple syntax (due to its very complex lexical structure and syntax, Perl is horrible as a beginner language, although the semantics of Perl are better and more understandable for beginners than the semantics of Python). And after the language found its niche in intro university courses, the proverb "nothing succeeds like success" became fully applicable.


Despite claims that Python adheres to simplicity, this is simply not true. It is a large, non-orthogonal language, not that different in this respect from Perl, just with a different set of warts. Python is a large language, and a decent textbook such as Learning Python, 5th Edition by Mark Lutz (2013) is over a thousand pages. A modest intro like Introducing Python: Modern Computing in Simple Packages by Bill Lubanovic is 484 pages. The slightly more expanded Python Essential Reference (4th Edition) by David Beazley is over 700 pages. The O'Reilly cookbook is 706 pages. And so on and so forth. Humans can't learn such a large language and need to operate with a subset.

Both Perl and Python belong to the class of languages with a "whatever gets the job done" attitude, although Python pretends (just pretends) to follow the KISS principle: in words (but not in practice) Python's designers seem to prefer simplicity and consistency in design to the flexibility that Perl advocates.

2.7 vs. 3.6 problem

There is no single Python language. There are two dialects which are often called 3.x and 2.x.

Currently Python 2.7 is the dominant version of Python for scientific and engineering computing (although the standard version that comes with RHEL 6.x is still Python 2.6). The 64-bit version is dominant. But the 3.x version is promoted by new books and is gaining in popularity too. It is now taught in universities, which instantly pushed it into the mainstream. The infatuation with Java in US universities has ended, and ended for good, because there is nothing interesting in Java: it is one step forward and two steps back, a kind of Cobol for the XXI century.

Python 3 has better support for coroutines (here is a quote from Fluent Python, chapter 16):

The infrastructure for coroutines appeared in PEP 342 — Coroutines via Enhanced Generators, implemented in Python 2.5 (2006): since then, the yield keyword can be used in an expression, and the .send(value) method was added to the generator API. Using .send(…), the caller of the generator can post data that then becomes the value of the yield expression inside the generator function. This allows a generator to be used as a coroutine: a procedure that collaborates with the caller, yielding and receiving values from the caller.

In addition to .send(…), PEP 342 also added .throw(…) and .close() methods that respectively allow the caller to throw an exception to be handled inside the generator, and to terminate it. These features are covered in the next section and in “Coroutine Termination and Exception Handling”.
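A minimal sketch of the PEP 342 protocol just described; the running-average task is illustrative:

```python
def running_average():
    """A coroutine: .send(value) resumes it at the yield expression."""
    total = 0.0
    count = 0
    average = None
    while True:
        value = yield average   # the value of the yield expression comes from .send()
        total += value
        count += 1
        average = total / count

coro = running_average()
next(coro)              # prime it: advance to the first yield
print(coro.send(10))    # → 10.0
print(coro.send(20))    # → 15.0
coro.close()            # raises GeneratorExit inside the coroutine, terminating it
```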

The latest evolutionary step for coroutines came with PEP 380 - Syntax for Delegating to a Subgenerator, implemented in Python 3.3 (2012). PEP 380 made two syntax changes to generator functions, to make them more useful as coroutines:

These latest changes will be addressed in “Returning a Value from a Coroutine” and “Using yield from”.
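The two PEP 380 changes can be sketched with a toy averaging task (the names and the None sentinel are illustrative): a generator may now return a value, and yield from delegates to a subgenerator and captures that value.

```python
def averager():
    total = 0.0
    count = 0
    while True:
        value = yield
        if value is None:       # sentinel: stop accumulating
            break
        total += value
        count += 1
    return total / count        # PEP 380: generators may return a value

def grouper(results, key):
    # yield from forwards .send() values and captures the subgenerator's return
    results[key] = yield from averager()

results = {}
g = grouper(results, "sample")
next(g)                         # prime the delegating generator
for v in (10, 20, 30):
    g.send(v)
try:
    g.send(None)                # ends averager; grouper then falls off the end
except StopIteration:
    pass
print(results)  # → {'sample': 20.0}
```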

Let’s follow the established tradition of Fluent Python and start with some very basic facts and examples, then move into increasingly mind-bending features.

Modules and OO

The key Python feature is the ability to use modules. In this sense it is a derivative of Modula-3. OO features are bolted on top of this.

While Python provides OO features, it, like C++, can be used without them. They were added to the language, not present from the very beginning. And as in many other languages with OO features, they became fashionable and promoted the language. OO features are also badly abused in Python scripts: I saw many cases when Linux/Unix maintenance scripts were written using OO features, which makes them less maintainable and the code more verbose.
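The contrast can be shown on a typical maintenance task (a hypothetical sketch; the function, class, and behavior are purely illustrative):

```python
import os

def list_logs(directory, suffix=".log"):
    """Plain procedural style: return log file names; no class required."""
    return sorted(f for f in os.listdir(directory) if f.endswith(suffix))

# The OO-heavy alternative wraps the same one-liner in ceremony,
# adding verbosity without adding capability:
class LogLister:
    def __init__(self, directory, suffix=".log"):
        self.directory = directory
        self.suffix = suffix

    def run(self):
        return sorted(f for f in os.listdir(self.directory)
                      if f.endswith(self.suffix))
```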

While pointers to memory structures (aka objects) are how OO is implemented, unlike Perl, Python does not provide pointers as a separate data type. You can use pointers via the object-oriented framework, but generally this is a perversion. In a decent language pointers should be present as a separate data type.

Python is shipped with all versions of Linux, but not with other Unix flavors

Currently Python is shipped as a standard component only with Linux and FreeBSD. Neither Solaris, nor AIX, nor HP-UX includes Python by default (but they do include Perl).

Quantity and quality of available modules

By the number and quality of available modules Python is now competitive with, and in some areas exceeds, Perl with its famous CPAN. Like Perl, Python also has a large library of standard modules shipped with the interpreter.

But Python also faces competition from more modern languages such as Ruby and R. Although still less popular, Ruby competes with Python, especially among programmers who value the coroutine paradigm of software development (and it is really a paradigm, not just a language feature). Python provides generators that are close enough, but still...


Currently Python has the most developed ecosystem among all scripting languages, with a dozen high-quality IDEs available (Perl has almost none, although Komodo Edit can be used as a proxy). PyCharm is probably the most popular (and it has a free version for individual developers).

Starting from version 2.7 the debugger is decent and supports almost the same set of operations as the famous Perl debugger. In other words, at last it managed to bridge the gap with the Perl debugger. Better late than never ;-).
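A short sketch of a typical session (the script and function are hypothetical; the commands are standard pdb ones):

```python
# A tiny function one might want to step through with pdb.
def mean(values):
    return sum(values) / len(values)

# Run the whole script under the debugger:
#     python -m pdb script.py
# or set a hard breakpoint at the point of interest:
#     import pdb; pdb.set_trace()
# Common commands, close to their Perl debugger counterparts:
#     l   - list source        n        - next line    s - step into
#     b 3 - break at line 3    p values - print        c - continue    q - quit
print(mean([2, 4, 6]))  # → 4.0
```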

There are probably more books published about Python than about any other scripting language. In 2017 books about Python are still churned out like there is no tomorrow, but most of them are of poor or average quality, especially those written by OO zealots. This amount of "junk books" is an interesting and pretty unique feature of the language.

Many Python books should be avoided at all costs, as they do not explain the language but obscure it. You are warned.

One sign of the popularity of a scripting language is the availability of editors which use it as a macro language. Here Python outshines Perl by several orders of magnitude.

See PythonEditors - Python Wiki.

Komodo is a high-quality middleweight free editor that supports writing macros in Python. See the Macro API Reference.

Python got a foothold in numeric computing

Python also got a foothold in numeric computing via SciPy/NumPy. It is now widely used in molecular modeling, an area which was previously dominated by compiled languages (with Fortran being the major one). In genomic applications it also managed to partially displace Perl (although the quality of regular expression integration into the language is much lower in Python), but now R is pushing Python out of this domain.
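A small taste of the vectorized style NumPy brings (assuming NumPy is installed; the arrays are illustrative):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
print(a.mean())       # → 2.5
print((a * 2).sum())  # → 20.0  (elementwise operations, no explicit loop)
```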

Indentation as a proxy for nesting -- a questionable design decision

Python imitates FORTRAN IV: it uses indentation to denote nesting, similarly to how FORTRAN IV used column position to distinguish between labels and statements ;-).

That creates problems if tabs are used, as the editor and the Python interpreter might have different settings for tabs, and that can screw up the nesting of statements. It also creates problems with diffs and applying patches with patch.

Multiline statements in Python are either detected via unbalanced brackets or can be explicitly marked with a backslash. The ability to close multiple blocks by just changing indentation is a plus only for short programs visible on one screen. At the same time, the possibility of mixing tabs and blanks for indentation is a horror show. You need to specifically check whether your Python program accidentally contains tabs and convert them to blanks.
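The two continuation mechanisms, plus one simple way to hunt for stray tabs (the helper function is hypothetical, a minimal sketch):

```python
# Implicit continuation: the statement runs on while brackets stay open.
total = (1 + 2 +
         3 + 4)

# Explicit continuation with a trailing backslash.
also_total = 1 + 2 + \
             3 + 4

def has_tabs(path):
    """Report whether a source file contains tab characters."""
    with open(path) as src:
        return any("\t" in line for line in src)

print(total, also_total)  # → 10 10
```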

By relegating block brackets to the lexical level of blank space and comments, Python failed to make a necessary adjustment: including a pretty-printer in the interpreter (with the possibility of creating a pretty-printed program from pseudo-comments). Such a pretty-printer actually needs to understand two things: the format of comments that suggest indenting, like

#{ <label>

#} <label>

and the current number of spaces in a tab, like pragma tab = 4. An interesting possibility is that in the pretty-printed program those comments can be removed, and after reading the pretty-printed program into the editor, reinstated automatically. Such a feature can be implemented in any scriptable editor.

My impression is that few people understand that the C solution for blocks ({ } blocks) was pretty weak in comparison with its prototype language (PL/1): it does not permit a nice labeled closing of blocks like

A:do ... end A;

in PL/1. IMHO the introduction of a pretty-printer as a standard feature of both the compiler and the GUI environment is long overdue, and here Python can make its mark.

By adopting indentation as a proxy for nesting, Python actually encourages a programmer to use a decent editor, but we knew that already, right? This design decision also narrows down the possible range of coding styles and automatically leads to more compact programs (as measured by the number of lexical tokens): deleting curly brackets usually lessens the number of lines in a C or Perl program by 20% or more.

Difficulties of adapting to the language for Perl programmers

Although Python as a scripting language used Perl as a prototype and its features are generally similar to Perl's, Perl programmers experience difficulties adapting to the language. The two are not overly similar in implementation details, nor even remotely similar in syntax. Their extension mechanisms also took different directions.

Final notes

We all understand that in real life the better language seldom wins (look at Java). Luck plays a tremendous role in determining a language's popularity. The best commercially supported language that satisfies current fashion has better chances. Python managed to ride the wave of enthusiasm for OO programming, which (by and large) unfairly relegated Perl to the second-class languages. And Python is not a bad scripting language, so in a way the success of Python is our success too.

Python now also has several different implementations of the interpreter, which is a clear sign of both the popularity and the maturity of the language. Along with the CPython interpreter (which is the standard one) there is the quite popular Jython, which uses the JVM and thus integrates well with Java, and IronPython, which is the Microsoft implementation (Python -- programming language):

The mainstream Python implementation, also known as CPython, is written in C compliant to the C89 standard, and is distributed with a large standard library written in a mixture of C and Python. CPython ships for a large number of supported platforms, including Microsoft Windows and most modern Unix-like systems. CPython was intended from almost its very conception to be cross-platform; its use and development on esoteric platforms such as Amoeba alongside more conventional ones like Unix or Macintosh has greatly helped in this regard.

Stackless Python is a significant fork of CPython that implements microthreads. It can be expected to run on approximately the same platforms that CPython runs on.

There are two other major implementations: Jython for the Java platform, and IronPython for the .NET platform. PyPy is an experimental self-hosting implementation of Python, in Python, that can output a variety of types of bytecode, object code and intermediate languages.

Several programs exist to package Python programs into standalone executables, including py2exe, PyInstaller, cx_Freeze and py2app.

Many Python programs can run on different Python implementations, on such disparate operating systems and execution environments, without change. In the case of the implementations running on top of the Java virtual machine or the Common Language Runtime, the platform-independence of these systems is harnessed by their respective Python implementation.

Many third-party libraries for Python (and even some first-party ones) are only available on Windows, Linux, BSD, and Mac OS X.

There is also a dialect called Stackless Python which adds support for coroutines, communication channels and task serialization.

Python also has a better interface with C programs than Perl, which allows writing extension modules in C.
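ctypes is the lighter, standard-library route (writing a real extension module against the C API is the other); a minimal sketch, assuming a Unix-like system where the C library can be located:

```python
import ctypes
import ctypes.util

# Load the platform's C runtime; the exact library name differs per OS.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature so ctypes converts arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-5))  # → 5
```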

Nikolai Bezroukov



Old News ;-)

[Apr 04, 2019] Fascism A Warning by Madeleine Albright

Junk author, junk book by the butcher of Yugoslavia, who would have been hanged together with Bill Clinton by a Nuremberg Tribunal for crimes against peace. Albright is not bright at all; she is a female bully, and that shows.
Mostly projection. And this arrogant warmonger likes to exercise in Russophobia (aimed at Russia, which was the main part of the USSR, which saved the world from fascism, sacrificing around 20 million people). This book is a denial of the genocide against the Iraqi and Serbian populations, where bombing with depleted uranium munitions doubled cancer cases. If you can pass over those facts, then this book is for you.
Like Robert Kagan and other neocons, Albright is waving the dead chicken of authoritarianism again and again. That's silly and disingenuous. Authoritarianism is a method of governance used in the military. It is not an ideology. Fascism is an ideology, a flavor of far-right nationalism: far-right nationalism "enhanced" by some socialist ideas.
The view of fascism without the economic circumstances that create it, first of all the immiseration of the middle and working classes and a high level of unemployment, is a primitive, ahistorical view. Fascism is the ultimate capitalist statism, acting simultaneously as a civil religion for the population, enforced by the power of the state. It has a lot in common with neoliberalism; that's why neoliberalism is sometimes called "inverted totalitarianism".
In reality, while fascism remains the dictatorship of capitalists for capitalists, and of the national part of the financial oligarchy, like neoliberalism it is directed against the working class. Fascism comes to power on populist slogans of righting the wrongs of the previous regime and kicking out foreign capitalists and national compradors (who in Germany turned out to be mostly Jewish).
It comes to power under the slogans of stopping the redistribution of wealth upward and eliminating the class of rentiers -- all citizens should earn income, not get it from bonds and other investments (often in reality doing completely the opposite).
While intrinsically connected with and financed by a sizable part of the national elite, which often consists of the far-right military leadership, a part of the financial oligarchy, and a large part of the lower middle class (small proprietors), it is a protest movement which wants revenge for humiliation and prefers a military-style organization of society to democracy, as a more potent weapon for achieving this goal.
Like any far-right movement, the rise of fascism and neo-fascism is a sign of internal problems within a given society, often a threat to the state or social order.

Still another noted that Fascism is often linked to people who are part of a distinct ethnic or racial group, who are under economic stress, and who feel that they are being denied rewards to which they are entitled. "It's not so much what people have." she said, "but what they think they should have -- and what they fear." Fear is why Fascism's emotional reach can extend to all levels of society. No political movement can flourish without popular support, but Fascism is as dependent on the wealthy and powerful as it is on the man or woman in the street -- on those who have much to lose and those who have nothing at all.

This insight made us think that Fascism should perhaps be viewed less as a political ideology than as a means for seizing and holding power. For example, Italy in the 1920s included self-described Fascists of the left (who advocated a dictatorship of the dispossessed), of the right (who argued for an authoritarian corporatist state), and of the center (who sought a return to absolute monarchy). The German National Socialist Party (the Nazis) originally came together around a list of demands that catered to anti-Semites, anti-immigrants, and anti-capitalists but also advocated for higher old-age pensions, more educational opportunities for the poor, an end to child labor, and improved maternal health care. The Nazis were racists and, in their own minds, reformers at the same time.

If Fascism concerns itself less with specific policies than with finding a pathway to power, what about the tactics of leadership? My students remarked that the Fascist chiefs we remember best were charismatic. Through one method or another, each established an emotional link to the crowd and, like the central figure in a cult, brought deep and often ugly feelings to the surface. This is how the tentacles of Fascism spread inside a democracy. Unlike a monarchy or a military dictatorship imposed on society from above, Fascism draws energy from men and women who are upset because of a lost war, a lost job, a memory of humiliation, or a sense that their country is in steep decline. The more painful the grounds for resentment, the easier it is for a Fascist leader to gain followers by dangling the prospect of renewal or by vowing to take back what has been stolen.

Like the mobilizers of more benign movements, these secular evangelists exploit the near-universal human desire to be part of a meaningful quest. The more gifted among them have an aptitude for spectacle -- for orchestrating mass gatherings complete with martial music, incendiary rhetoric, loud cheers, and arm-lifting salutes. To loyalists, they offer the prize of membership in a club from which others, often the objects of ridicule, are kept out. To build fervor, Fascists tend to be aggressive, militaristic, and -- when circumstances allow -- expansionist. To secure the future, they turn schools into seminaries for true believers, striving to produce "new men" and "new women" who will obey without question or pause. And, as one of my students observed, "a Fascist who launches his career by being voted into office will have a claim to legitimacy that others do not."

After climbing into a position of power, what comes next: How does a Fascist consolidate authority? Here several students piped up: "By controlling information." Added another, "And that's one reason we have so much cause to worry today." Most of us have thought of the technological revolution primarily as a means for people from different walks of life to connect with one another, trade ideas, and develop a keener understanding of why men and women act as they do -- in other words, to sharpen our perceptions of truth. That's still the case, but now we are not so sure. There is a troubling "Big Brother" angle because of the mountain of personal data being uploaded into social media. If an advertiser can use that information to home in on a consumer because of his or her individual interests, what's to stop a Fascist government from doing the same? "Suppose I go to a demonstration like the Women's March," said a student, "and post a photo on social media. My name gets added to a list and that list can end up anywhere. How do we protect ourselves against that?"

Even more disturbing is the ability shown by rogue regimes and their agents to spread lies on phony websites and Facebook. Further, technology has made it possible for extremist organizations to construct echo chambers of support for conspiracy theories, false narratives, and ignorant views on religion and race. This is the first rule of deception: repeated often enough, almost any statement, story, or smear can start to sound plausible. The Internet should be an ally of freedom and a gateway to knowledge; in some cases, it is neither.

Historian Robert Paxton begins one of his books by asserting: "Fascism was the major political innovation of the twentieth century, and the source of much of its pain." Over the years, he and other scholars have developed lists of the many moving parts that Fascism entails. Toward the end of our discussion, my class sought to articulate a comparable list.

Fascism, most of the students agreed, is an extreme form of authoritarian rule. Citizens are required to do exactly what leaders say they must do, nothing more, nothing less. The doctrine is linked to rabid nationalism. It also turns the traditional social contract upside down. Instead of citizens giving power to the state in exchange for the protection of their rights, power begins with the leader, and the people have no rights. Under Fascism, the mission of citizens is to serve; the government's job is to rule.

When one talks about this subject, confusion often arises about the difference between Fascism and such related concepts as totalitarianism, dictatorship, despotism, tyranny, autocracy, and so on. As an academic, I might be tempted to wander into that thicket, but as a former diplomat, I am primarily concerned with actions, not labels. To my mind, a Fascist is someone who identifies strongly with and claims to speak for a whole nation or group, is unconcerned with the rights of others, and is willing to use whatever means are necessary -- including violence -- to achieve his or her goals. In that conception, a Fascist will likely be a tyrant, but a tyrant need not be a Fascist.

Often the difference can be seen in who is trusted with the guns. In seventeenth-century Europe, when Catholic aristocrats did battle with Protestant aristocrats, they fought over scripture but agreed not to distribute weapons to their peasants, thinking it safer to wage war with mercenary armies. Modern dictators also tend to be wary of their citizens, which is why they create royal guards and other elite security units to ensure their personal safety. A Fascist, however, expects the crowd to have his back. Where kings try to settle people down, Fascists stir them up so that when the fighting begins, their foot soldiers have the will and the firepower to strike first.

petarsimic, October 21, 2018

Madeleine Albright on million Iraqis dead: "We think the price is worth It"

Hypocrisy at its worst from a lady who advocated hawkish foreign policy which included the most sustained bombing campaign since Vietnam, when, in 1998, Clinton began almost daily attacks on Iraq in the so-called no-fly zones, and made so-called regime change in Iraq official U.S. policy.

In May of 1996, 60 Minutes aired an interview with Madeleine Albright, who at the time was Clinton's U.N. ambassador. In connection with the Clinton administration presiding over the most devastating regime of sanctions in history, which the U.N. estimated took the lives of as many as a million Iraqis, the vast majority of them children, correspondent Leslie Stahl said to Albright: "We have heard that a half-million children have died. I mean, that's more children than died in Hiroshima. And -- and, you know, is the price worth it?"

Madeleine Albright replied, "I think this is a very hard choice, but the price -- we think the price is worth it."

P. Bierre, June 11, 2018
Does Albright present a comprehensive enough understanding of fascism to instruct on how best to avoid it?

While I found much of the story-telling in "Fascism" engaging, I come away expecting much more of one of our nation's pre-eminent senior diplomats. In a nutshell, she has devoted a whole volume to describing the ascent of intolerant fascism and its many faces, but punted on the question "How should we thwart fascism going forward?"

Even that question leaves me a bit unsatisfied, since it is couched in double-negative syntax. The thing there is an appetite for, among the readers of this book who are looking for more than hand-wringing about neofascism, is a unifying title or phrase which captures in single-positive syntax that which Albright prefers over fascism. What would that be? And, how do we pursue it, nurture it, spread it and secure it going forward? What is it?

I think Albright would perhaps be willing to rally around "Good Government" as the theme her book skirts tangentially from the dark periphery of fascistic government. "Virtuous Government"? "Effective Government"? "Responsive Government"?

People concerned about neofascism want to know what we should be doing right now to avoid getting sidetracked into a dark alley of future history comparable to the Nazi brown shirt or Mussolini black shirt epochs. Does Albright present a comprehensive enough understanding of fascism to instruct on how best to avoid it? Or, is this just another hand-wringing exercise, a la "you'll know it when you see it", with a proactive superficiality stuck at the level of pejorative labelling of current styles of government and national leaders? If all you can say is what you don't want, then the challenge of threading the political future of the US is left unruddered. To make an analogy to driving a car, if you don't know your destination, and only can get navigational prompts such as "don't turn here" or "don't go down that street", then what are the chances of arriving at a purposive destination?

The other part of this book I find off-putting is that Albright, though having served as Secretary of State, never talks about the heavy burden of responsibility that falls on a head of state. She doesn't seem to empathize at all with the challenge of top leadership. Her perspective is that of the detached critic. For instance, in discussing President Duterte of the Philippines, she fails to paint the dire situation under which he rose to national leadership responsibility: Islamic separatists having violently taken over the entire city of Marawi, nor the ubiquitous spread of drug cartel power to the level where control over law enforcement was already ceded to the gangs in many places...entire islands and city neighborhoods run by mafia organizations. It's easy to sit back and criticize Duterte's unleashing of vigilante justice -- What was Mrs. Albright's better alternative to regain ground from vicious, well-armed criminal organizations? The distancing from leadership responsibility makes Albright's treatment of the Philippines twin crises of gang-rule and Islamist revolutionaries seem like so much academic navel-gazing....OK for an undergrad course at Georgetown maybe, but unworthy of someone who served in a position of high responsibility. Duterte is liked in the Philippines. What he did snapped back the power of the cartels, and returned a deserved sense of security to average Philippinos (at least those not involved with narcotics). Is that not good government, given the horrendous circumstances Duterte came up to deal with? What lack of responsibility in former Philippine leadership allowed things to get so out of control? Is it possible that Democrats and liberals are afraid to be tough, when toughness is what is needed? I'd much rather read an account from an average Philippino about the positive impacts of the vigilante campaign, than listen of Madame Secretary sermonizing out of context about Duterte. OK, he's not your idea of a nice guy. 
Would you rather sit back, prattle on about the rule of law and due process while Islamic terrorists wrest control over where you live? Would you prefer the leadership of a drug cartel boss to Duterte?

My critique is offered in a constructive manner. I would certainly encourage Albright (or anyone!) to write a book in a positive voice about what it's going to take to have good national government in the US going forward, and to help spread such abundance globally. I would define "good" as the capability to make consistently good policy decisions, ones that continue to look good in hindsight, 10, 20 or 30 years later. What does that take?

I would submit that the essential "preserving democracy" process component is having a population that is adequately prepared for collaborative problem-solving. Some understanding of history is helpful, but it's simply not enough. Much more essential is for every young person to experience team problem-solving, in both its cooperative and competitive aspects. Every young person needs to experience a team leadership role, and to appreciate what it takes from leaders to forge constructive design from competing ideas and champions. Only after serving as a referee will a young person understand the limits to "passion" that individual contributors should bring to the party. Only after moderating and herding cats will a young person know how to interact productively with leaders and other contributors. Much of the skill is counter-instinctual. It's knowing how to express ideas and field objections so as to nudge people along in the desired direction...and how to avoid ad-hominem attacks, exaggerations, accusations and speculative grievances. It's learning how to manage conflict productively toward excellence.

Way too few of our young people are learning these skills, and way too few of our journalists know how to play a constructive role in managing communications toward successful complex problem-solving. Albright's claim that a journalist's job is primarily to "hold leaders accountable" betrays an absolving of responsibility for the media as a partner in good government -- it doesn't say whether the media are active players on the problem-solving team (which they have to be for success), or mere spectators with no responsibility for the outcome. If the latter, then journalism becomes an irritant, picking at the scabs over and over, but without any forward progress. When the media take up a stance as an "opponent" of leadership, you end up with poor problem-solving results...the system is fighting itself instead of making forward progress.

"Fascism" doesn't do nearly enough to promote the teaching of practical civics 101 skills, not just to the kids going into public administration, but to everyone. For, it is in the norms of civility, their ability to be practiced, and their defense against excesses, that fascism (e.g., Antifa) is kept at bay.
Everyone in a democracy has to know the basics:
• when entering a disagreement, don't personalize it
• never demonize an opponent
• keep a focus on the goal of agreement and moving forward
• never tell another person what they think, but ask (non-rhetorically) what they think, then be prepared to listen and absorb
• do not speak untruths or exaggerate to make an argument
• do not speculate grievance
• understand truth gathering as a process; detect when certainty is being bluffed; question sources
• recognize impasse and unproductive argumentation and STOP IT
• know how to introduce a referee or moderator to regain productive collaboration
• avoid ad hominem attacks
• don't take things personally that rankle you
• give the benefit of the doubt in an ambiguous situation
• don't jump to conclusions
• don't reward theatrical manipulation

These basics of collaborative problem-solving are the guts of a "liberal democracy" that can face down the most complex challenges and dilemmas.

I gave the book 3 stars for the great story-telling, and Albright has been part of a great story of late 20th century history. If she had told us how to prevent fascism going forward, and how to roll it back in "hard case" countries like North Korea and Sudan, I would have given her a 5. I'm not that interested in picking apart the failure cases of history...they teach mostly negative exemplars. I would much rather read about positive exemplars of great national government -- "great" defined by popular acclaim, by the actual ones governed. Where are we seeing that today? Canada? Australia? Interestingly, both of these positive exemplars have strict immigration policies.

Is it possible that Albright is just unable, by virtue of her narrow escape from Communist Czechoslovakia and acceptance in NYC as a transplant, to see that an optimum immigration policy in the US, something like Canada's or Australia's, is not the looming face of fascism, but rather a move to keep it safely in its corner in coming decades? At least, she admits to her being biased by her life story.

That suggests her views on refugees and illegal immigrants as deserving of unlimited rights to migrate into the US might be the kind of cloaked extremism that she is warning us about.

Anat Hadad, January 19, 2019
"Fascism is not an exception to humanity, but part of it."

Albright's book is a comprehensive look at recent history regarding the rise and fall of fascist leaders, as well as detailing leaders in nations that are starting to mimic fascist ideals. Instead of a neat definition, she uses examples to bolster her thesis about what the essential aspects of fascism are. Albright dedicates each section of the book to a leader or regime that enforces fascist values and conveys this to the reader through historical events and exposition while also peppering in details of her time as Secretary of State. The climax (and 'warning') comes at the end, where Albright applies what she has been discussing to the current state of affairs in the US and abroad.

Overall, I would characterize this as an enjoyable and relatively easy read. I think the biggest strength of this book is how Albright uses history, previous examples of leaders and regimes, to demonstrate what fascism looks like and the contributing factors on a national and individual level. I appreciated that she lets these examples speak for themselves of the dangers and subtleties of a fascist society, which made the book more fascinating and less of a textbook. Her brief descriptions of her time as Secretary of State were intriguing and made me more interested in her first book, 'Madame Secretary'. The book does seem a bit slow, as it is not until the end that Albright blatantly reveals the relevance of all of the history relayed in the first couple hundred pages. The last few chapters are dedicated to the reveal: the Trump administration and how it has affected global politics. Although she never outright calls Trump a fascist, instead letting the reader decide based on his decisions and what you have read in the book leading up to this point, her stance is quite clear by the end. I was surprised at what I shared politically with Albright, mainly on immigration and a belief in empathy and understanding for others. However, I got a slight sense of anti-secularism in the form of a disdain for those who do not subscribe to an Abrahamic religion, and she seemed to hint at this being partly an opening to fascism.

I also could have done without the both-sides-ism she would occasionally push, which seems to be a tactic used to encourage people to 'unite against Trump'. These are small annoyances I had with the book; my main critique is the view Albright takes on democracy. If anything, the book should have been called "Democracy: the Answer", because that is the most consistent stance Albright takes throughout. She seems to overlook many of the atrocities the US and other nations have committed in the name of democracy and the negative consequences of capitalism, instead justifying negative actions with the excuse of 'it is for democracy and everyone wants that' and criticizing those who criticize capitalism.

She does not do a good job of conveying the difference between a communist country like Russia and a socialist country like those found in Scandinavia, and seems okay with the idea of the reader lumping them all together in a poor light. That being said, I would still recommend this book for anyone's TBR, as the message is essential for today: the current world of political affairs is, at least somewhat, teetering on a precipice, and we are in need of as many strong leaders as possible who are willing to uphold democratic ideals on the world stage, and mindful constituents who will vote them in.

Matthew T, May 29, 2018
An easy read, but incredibly ignorant and one-eyed in far too many instances

The book is very well written, easy to read, and follows a pretty standard formula making it accessible to the average reader. However, it suffers immensely from, what I suspect are, deeply ingrained political biases from the author.

Whilst I don't dispute the criteria the author applies in defining fascism, or the targets she cites as examples, the first bias creeps in when one realises that the examples chosen are traditional easy targets for the US (with the exception of Turkey). The same criteria would define a country like Singapore perfectly as fascist, yet that country (or Malaysia) does not receive a mention in the book.

Further, it grossly glosses over what Ms. Albright terms fascist traits in US governments of the past. If the author is to be believed, the CIA is holier-than-thou, never intervened anywhere or did anything that wasn't with the best interests of democracy at heart, and American foreign policy has always existed to build friendships and help out their buddies. To someone ingrained in this rhetoric for years I am sure this is an easy pill to swallow, but to the rest of the world it makes a number of assertions in the book come across as incredibly naive.

Avid reader, December 20, 2018
Biased much? Still a good start into the problem

My husband and I went to the presentation of this book at UPenn with Albright before it came out, and Madeleine's spunk, wit and just glorious brightness almost blinded me. This is a 2.5 star book, because the 81-year-old author does not really tell you all there is to tell when she opens up on a subject in any particular chapter, especially if it concerns current US interests.

Let's start from the beginning of the book. What really stood out was the missing third Axis ally: Japan and its emperor. Hirohito (1901-1989) was emperor of Japan from 1926 until his death. He took over at a time of rising democratic sentiment, but his country soon turned toward ultra-nationalism and militarism. During World War II (1939-45), Japan attacked nearly all of its Asian neighbors, allied itself with Nazi Germany and launched a surprise assault on the U.S. naval base at Pearl Harbor, forcing the US to enter the war in 1941. Hirohito was never indicted as a war criminal! Does he not deserve at least a chapter in her book?

Oh, and by the way, did the author mention anything about sanctions against Germany for invading Austria, Czechoslovakia, Romania and Poland? Up until Pearl Harbor, the USA and Germany still traded, although in March 1939 FDR slapped a 25% tariff on all German goods. Like Trump is doing right now to some of US trading partners.

The next monster that deserves a chapter, on genocide of cosmic proportions post-WW2, is the communist leader of China, Mao Zedong. Mr Dikötter, who has studied Chinese rural history from 1958 to 1962, when the nation was facing famine, compares the systematic torture, brutality, starvation and killing of Chinese peasants to the Second World War in its magnitude. At least 45 million people were worked, starved or beaten to death in China over these four years; the total worldwide death toll of the Second World War was 55 million.

We learn that Argentina gave sanctuary to Nazi war criminals, but she forgets to mention that 88 Nazi scientists arrived in the United States in 1945 and were promptly put to work. For example, Wernher von Braun was the brains behind the V-2 rocket program, but had intimate knowledge of what was going on in the concentration camps. Von Braun himself hand-picked people from horrific places, including Buchenwald concentration camp. Tsk-tsk, Madeleine.

What else? Oh, let's just say that like Madeleine Albright my husband is Jewish and lost extensive family to the Holocaust. Ukrainian nationalists executed his great-grandfather on Gestapo orders, his great-grandmother disappeared in a concentration camp, and his grandfather was conscripted in June 1940, decommissioned in September 1945, and went through the war as an infantryman on three fronts, earning several medals. His grandmother, a Ukrainian-born Jew, was a doctor in a military hospital in Saint Petersburg who survived the famine and saved several children during the blockade. So unlike Madeleine, who was raised as a Roman Catholic, my husband grew up in a quiet Jewish family in the territory that Stalin grabbed from Poland in 1939, in a Polish-turned-Ukrainian city called Lvov (Lemberg). His family also had to ask for asylum, only they had to escape their home in Ukraine in 1991. He was told then, "You are a nice little Zid (Jew), we will kill you last." If you think things in Ukraine have changed, think again: a few weeks ago in Kiev, Roma were killed and injured during pogroms, and despite witnesses nobody went to jail. Also, during demonstrations the C14 unit openly waves swastikas and Heils on the streets. Why is this not mentioned anywhere in the book? Is it because Hunter Biden has sat on the board of one of Ukraine's largest natural gas companies, Burisma, since May 14, 2014, and Ukraine has an estimated 127.9 trillion cubic feet of unproved technically recoverable shale gas resources, according to the U.S. Energy Information Administration (EIA)? The most promising shale reserves appear to be in the Carpathian Foreland Basin (also called the Lviv-Volyn Basin), which extends across Western Ukraine from Poland into Romania, and the Dnieper-Donets Basin in the East (which borders Russia).
Wow, I bet you did not know that. How ugly politics are; even this book could have been so much greater if the author had told the whole ugly story. And how scary that there are countries where you can go and openly be a fascist.

NJ, February 3, 2019
Interesting...yes. Useful...hmmm

To me, Fascism fails for the single reason that no two fascist leaders are alike. Learning about one or a few, whether in a highly cursory fashion as in this book or in great detail, is unlikely to provide one with any answers on how to prevent the rise of another or fend off the next one. And, as much as we are witnessing the rise of numerous democratic or quasi-democratic "strongmen" around the world in global politics, it is difficult to brand any of them as fascist in the orthodox sense.

As the author writes at the outset, it is difficult to separate a fascist from a tyrant or a dictator. A fascist is a majoritarian who rouses a large group under some national, racial or similar flag with rallying cries demanding suppression or expulsion of those excluded from this group. A typical fascist leader loves her yes-men and hates those who disagree: she does not mind using violence to suppress dissidents. A fascist has no qualms using propaganda to popularize the agreeable "facts" and theories while debunking the inconvenient as lies. What is not discussed explicitly in the book are perhaps some positive traits that separate fascists from other types of tyrants: fascists are rarely lazy, stupid or prone to doing things only for personal gain. They differ from the benevolent dictators in their record of using heavy oppression against their dissidents. Fascists, like all dictators, change rules to suit themselves, take control of state organizations to exercise total control, and use "our class is the greatest" and "kick others" rhetoric to fuel their programs.

Despite such a detailed list, each fascist is different from the others. There is little that even Ms Albright's fascists - from Mussolini and Hitler to Stalin to the Kims to Chavez or Erdogan - have in common. In fact, most of the opponents of some of these dictators/leaders would call them by many other choice words, but not fascists. The circumstances that gave rise to these leaders were highly different, and so were their rules, methods and achievements.

The point, once again, is that none of the strongman leaders around the world can be easily categorized as fascists. And even if they could, assigning them such a tag and learning about other such leaders is unlikely to help. The history discussed in the book is interesting but disjointed, perfunctory and simplistic. Ms Albright's selection is also debatable.

Strong leaders who suppress those they deem opponents have wreaked immense harm and are a threat to all civil societies. They come in more shades and colours than we have terms for in our vocabulary (dictators, tyrants, fascists, despots, autocrats etc). A study of such tyrants is needed by anyone with an interest in history, politics, or societal well-being. Despite Ms Albright's phenomenal knowledge, experience, credentials, personal history and intentions, this book is perhaps not the best place to objectively learn about the risks from the type of things some current leaders are doing or deeming right.

Gderf, February 15, 2019
Wrong warning

Each time I get concerned about Trump's rhetoric or past actions, I read idiotic opinions, like those of our second-worst-ever Secretary of State, and come to appreciate him more. Pejorative terms like fascism or populism have no place in a rational policy discussion. Both are blatant attempts to apply a pejorative label to any disagreeing opinion. More than half of the book is fluffed with background on Albright, Hitler and Mussolini; Wikipedia is more informative. The rest has snippets on more modern dictators, many of whom are either socialists or attained power through a reaction to failed socialism, as did Hitler. She squirms mightily to liken Trump to Hitler. It's much easier to see that Sanders is like Maduro. The USA is following a path more like Venezuela's than Germany's.

Her history misses that Mussolini was a socialist before he was a fascist, and Nazism in Germany was a reaction to Weimar socialism. The danger of fascism in the US is far greater from the left than from the right. America is far left of where the USSR ever was. Remember that Marx observed that Russia was not ready for a proletarian revolution. The USA, with ready-made capitalism to reform, fits Marx's pattern much better. Progressives deny that Sanders and Warren are socialists. If not, they are what Lenin called "useful idiots."
Albright says that she is proud of the speech where she called the USA the 'Indispensable Nation.' She should be ashamed. Obama followed in his inaugural address, saying that we are "the indispensable nation, responsible for world security." That turned into a policy of human rights interventions leading to open-ended wars (Syria, Yemen), nations in chaos (Libya), and distrust of the USA (Egypt, Russia, Turkey, Tunisia, Israel, NK). Trump now has to make nice with dictators to allay their fears that we are out to replace them.
She admires the good intentions of human rights intervention, ignoring the results. She says Obama's foreign policy had some success, without citing a single instance. He has apologized for Libya, but needs many more apologies. Like many progressives, she confuses good intentions with performance. Democracy spreading by well-intentioned humanitarian intervention has resulted in a succession of open-ended wars or anarchy.

The shorter histories of Czechoslovakia, Yugoslavia and Venezuela are much more informative, although more a warning against socialism than right wing fascism. Viktor Orban in Hungary is another reaction to socialism.

Albright ends the book with a forlorn hope that we need a Lincoln or a Mandela, exactly what our two-party dictatorship will not generate as it yields ever worse candidates for our democracy to vote upon, even as our great-society utopia generates ever more power for weak presidents to spend our money and continue wrong-headed foreign policy.

The greatest danger to the USA is not fascism, but of excessively poor leadership continuing our slow slide to the bottom.

[Apr 02, 2019] Mr Cohen and I Live on Different Planets

Apr 02, 2019 |

Looks like the reviewer is a typical neocon. His review repeats typical neocon think-tank talking points that reflect the "Full Spectrum Dominance" agenda.

As for Ukraine: yes, of course, Victoria Nuland did not interfere with the events, did not push for deposing Yanukovich to spoil the agreement reached between him and the EU diplomats ("F**k the EU," as this high-level US diplomat eloquently expressed herself) and to appoint the US stooge Yatsenyuk. The transcript of Nuland's phone call actually introduced many Americans to the previously obscure Yatsenyuk.

And the large amount of cash confiscated in the Kiev office of Yulia Tymoshenko's Batkivshchyna party (the main opposition party at the time, run by Yatsenyuk, as Tymoshenko was in jail) was just a hallucination. It has nothing to do with "bombing with dollars" -- yet another typical color revolution trick.

BTW, "government snipers on rooftops" is also a standard false flag operation used to incite an uprising at the critical moment of a color revolution. Ukraine was not the first and is not the last. One participant recently confessed. The key person in this false flag operation was the opposition leader Andriy Parubiy -- who was responsible for the security of the opposition camp. Google "Parubiy and snipergate" for more information.

His view on the DNC hack (which most probably was a leak) also does not withstand close scrutiny. William Binney, a former high-level National Security Agency official who co-authored an analysis by a group of former intelligence professionals, thinks that this was a transfer to a local USB drive, as the download speed was too high for an Internet connection. In this light the death of Seth Rich looks very suspicious indeed.

As for Russiagate, he now needs to print his review and the portrait of the Grand Wizard of Russiagate, Rachel Maddow, shred both of them, and eat them with borscht ;-)

[Apr 01, 2019] War with Russia? From Putin and Ukraine to Trump and Russiagate (9781510745810), Stephen F. Cohen

Highly recommended!
Important book. Kindle sample
Notable quotes:
"... Washington has made many policies strongly influenced by the demonizing of Putin -- a personal vilification far exceeding any ever applied to Soviet Russia's latter-day Communist leaders. ..."
"... As with all institutions, the demonization of Putin has its own history. When he first appeared on the world scene as Boris Yeltsin's anointed successor, in 1999-2000, Putin was welcomed by leading representatives of the US political-media establishment. The New York Times' chief Moscow correspondent and other verifiers reported that Russia's new leader had an "emotional commitment to building a strong democracy." Two years later, President George W. Bush lauded his summit with Putin and "the beginning of a very constructive relationship." ..."
"... But the Putin-friendly narrative soon gave way to unrelenting Putin-bashing. In 2004, Times columnist Nicholas Kristof inadvertently explained why, at least partially. Kristof complained bitterly of having been "suckered by Mr. Putin. He is not a sober version of Boris Yeltsin." By 2006, a Wall Street Journal editor, expressing the establishment's revised opinion, declared it "time we start thinking of Vladimir Putin's Russia as an enemy of the United States." 10, 11 The rest, as they say, is history. ..."
"... In America and elsewhere in the West, however, only purported "minuses" reckon in the extreme vilifying, or anti-cult, of Putin. Many are substantially uninformed, based on highly selective or unverified sources, and motivated by political grievances, including those of several Yeltsin-era oligarchs and their agents in the West. ..."
"... Putin is not the man who, after coming to power in 2000, "de-democratized" a Russian democracy established by President Boris Yeltsin in the 1990s and restored a system akin to Soviet "totalitarianism." ..."
"... Nor did Putin then make himself a tsar or Soviet-like autocrat, which means a despot with absolute power to turn his will into policy. The last Kremlin leader with that kind of power was Stalin, who died in 1953, and with him his 20-year mass terror. ..."
"... Putin is not a Kremlin leader who "reveres Stalin" and whose "Russia is a gangster shadow of Stalin's Soviet Union." 13, 14 These assertions are so far-fetched and uninformed about Stalin's terror-ridden regime, Putin, and Russia today, they barely warrant comment. ..."
"... Nor did Putin create post-Soviet Russia's "kleptocratic economic system," with its oligarchic and other widespread corruption. This too took shape under Yeltsin during the Kremlin's shock-therapy "privatization" schemes of the 1990s, when the "swindlers and thieves" still denounced by today's opposition actually emerged. ..."
"... Which brings us to the most sinister allegation against him: Putin, trained as "a KGB thug," regularly orders the killing of inconvenient journalists and personal enemies, like a "mafia state boss." ..."
"... More recently, there is yet another allegation: Putin is a fascist and white supremacist. The accusation is made mostly, it seems, by people wishing to deflect attention from the role being played by neo-Nazis in US-backed Ukraine. ..."
"... Finally, at least for now, there is the ramifying demonization allegation that, as a foreign-policy leader, Putin has been exceedingly "aggressive" abroad and his behavior has been the sole cause of the new cold war. ..."
"... Embedded in the "aggressive Putin" axiom are two others. One is that Putin is a neo-Soviet leader who seeks to restore the Soviet Union at the expense of Russia's neighbors. He is obsessively misquoted as having said, in 2005, "The collapse of the Soviet Union was the greatest geopolitical catastrophe of the twentieth century," apparently ranking it above two World Wars. What he actually said was "a major geopolitical catastrophe of the twentieth century," as it was for most Russians. ..."
"... The other fallacious sub-axiom is that Putin has always been "anti-Western," specifically "anti-American," has "always viewed the United States" with "smoldering suspicions" -- so much that eventually he set into motion a "Plot Against America." ..."
"... Or, until he finally concluded that Russia would never be treated as an equal and that NATO had encroached too close, Putin was a full partner in the US-European clubs of major world leaders? Indeed, as late as May 2018, contrary to Russiagate allegations, he still hoped, as he had from the beginning, to rebuild Russia partly through economic partnerships with the West: "To attract capital from friendly companies and countries, we need good relations with Europe and with the whole world, including the United States." ..."
"... A few years earlier, Putin remarkably admitted that initially he had "illusions" about foreign policy, without specifying which. Perhaps he meant this, spoken at the end of 2017: "Our most serious mistake in relations with the West is that we trusted you too much. And your mistake is that you took that trust as weakness and abused it." 34 ..."
"... P. Philips ..."
"... "In a Time of Universal Deceit -- Telling the Truth Is a Revolutionary Act" ..."
"... Professor Cohen is indeed a patriot of the highest order. The American and "Globalists" elites, particularly the dysfunctional United Kingdom, are engaging in a war of nerves with Russia. This war, which could turn nuclear for reasons discussed in this important book, is of no benefit to any person or nation. ..."
"... If you are a viewer of one of the legacy media outlets, be it Cable Television networks, with the exception of Tucker Carlson on Fox who has Professor Cohen as a frequent guest, or newspapers such as The New York Times, you have been exposed to falsehoods by remarkably ignorant individuals; ignorant of history, of the true nature of Russia (which defeated the Nazis in Europe at a loss of millions of lives) and most important, of actual military experience. America is neither an invincible or exceptional nation. And for those familiar with terminology of ancient history, it appears the so-called elites are suffering from hubris. ..."
Apr 01, 2019 |

THE SPECTER OF AN EVIL-DOING VLADIMIR PUTIN HAS loomed over and undermined US thinking about Russia for at least a decade. Inescapably, it is therefore a theme that runs through this book. Henry Kissinger deserves credit for having warned, perhaps alone among prominent American political figures, against this badly distorted image of Russia's leader since 2000: "The demonization of Vladimir Putin is not a policy. It is an alibi for not having one." 4

But Kissinger was also wrong. Washington has made many policies strongly influenced by the demonizing of Putin -- a personal vilification far exceeding any ever applied to Soviet Russia's latter-day Communist leaders. Those policies spread from growing complaints in the early 2000s to US-Russian proxy wars in Georgia, Ukraine, Syria, and eventually even at home, in Russiagate allegations. Indeed, policy-makers adopted an earlier formulation by the late Senator John McCain as an integral part of a new and more dangerous Cold War: "Putin [is] an unreconstructed Russian imperialist and K.G.B. apparatchik.... His world is a brutish, cynical place.... We must prevent the darkness of Mr. Putin's world from befalling more of humanity." 3

Mainstream media outlets have played a major prosecutorial role in the demonization. Far from atypically, the Washington Post's editorial page editor wrote, "Putin likes to make the bodies bounce.... The rule-by-fear is Soviet, but this time there is no ideology -- only a noxious mixture of personal aggrandizement, xenophobia, homophobia and primitive anti-Americanism." 6 Esteemed publications and writers now routinely degrade themselves by competing to denigrate "the flabbily muscled form" of the "small gray ghoul named Vladimir Putin." 7, 8 There are hundreds of such examples, if not more, over many years. Vilifying Russia's leader has become a canon in the orthodox US narrative of the new Cold War.

As with all institutions, the demonization of Putin has its own history. When he first appeared on the world scene as Boris Yeltsin's anointed successor, in 1999-2000, Putin was welcomed by leading representatives of the US political-media establishment. The New York Times' chief Moscow correspondent and other verifiers reported that Russia's new leader had an "emotional commitment to building a strong democracy." Two years later, President George W. Bush lauded his summit with Putin and "the beginning of a very constructive relationship."

But the Putin-friendly narrative soon gave way to unrelenting Putin-bashing. In 2004, Times columnist Nicholas Kristof inadvertently explained why, at least partially. Kristof complained bitterly of having been "suckered by Mr. Putin. He is not a sober version of Boris Yeltsin." By 2006, a Wall Street Journal editor, expressing the establishment's revised opinion, declared it "time we start thinking of Vladimir Putin's Russia as an enemy of the United States." 10, 11 The rest, as they say, is history.

Who has Putin really been during his many years in power? We may have to leave this large, complex question to future historians, when materials for full biographical study -- memoirs, archive documents, and others -- are available. Even so, it may surprise readers to know that Russia's own historians, policy intellectuals, and journalists already argue publicly and differ considerably as to the "pluses and minuses" of Putin's leadership. (My own evaluation is somewhere in the middle.)

In America and elsewhere in the West, however, only purported "minuses" reckon in the extreme vilifying, or anti-cult, of Putin. Many are substantially uninformed, based on highly selective or unverified sources, and motivated by political grievances, including those of several Yeltsin-era oligarchs and their agents in the West.

By identifying and examining, however briefly, the primary "minuses" that underpin the demonization of Putin, we can understand at least who he is not:

Embedded in the "aggressive Putin" axiom are two others. One is that Putin is a neo-Soviet leader who seeks to restore the Soviet Union at the expense of Russia's neighbors. He is obsessively misquoted as having said, in 2005, "The collapse of the Soviet Union was the greatest geopolitical catastrophe of the twentieth century," apparently ranking it above two World Wars. What he actually said was "a major geopolitical catastrophe of the twentieth century," as it was for most Russians.

Though often critical of the Soviet system and its two formative leaders, Lenin and Stalin, Putin, like most of his generation, naturally remains in part a Soviet person. But what he said in 2010 reflects his real perspective and that of very many other Russians: "Anyone who does not regret the break-up of the Soviet Union has no heart. Anyone who wants its rebirth in its previous form has no head." 28 , 29

The other fallacious sub-axiom is that Putin has always been "anti-Western," specifically "anti-American," and has "always viewed the United States" with "smoldering suspicions" -- so much so that eventually he set into motion a "Plot Against America." 30, 31 A simple reading of his years in power tells us otherwise. A Westernized Russian, Putin came to the presidency in 2000 in the still prevailing tradition of Gorbachev and Yeltsin -- in hope of a "strategic friendship and partnership" with the United States.

How else to explain Putin's abundant assistance to US forces fighting in Afghanistan after 9/11 and continued facilitation of supplying American and NATO troops there? Or his backing of harsh sanctions against Iran's nuclear ambitions and refusal to sell Tehran a highly effective air-defense system? Or the information his intelligence services shared with Washington that, if heeded, could have prevented the Boston Marathon bombings in April 2013?

Or that, until he finally concluded that Russia would never be treated as an equal and that NATO had encroached too close, Putin was a full partner in the US-European clubs of major world leaders? Indeed, as late as May 2018, contrary to Russiagate allegations, he still hoped, as he had from the beginning, to rebuild Russia partly through economic partnerships with the West: "To attract capital from friendly companies and countries, we need good relations with Europe and with the whole world, including the United States."

Given all that has happened during the past nearly two decades -- particularly what Putin and other Russian leaders perceive to have happened -- it would be remarkable if his views of the West, especially America, had not changed. As he remarked in 2018, "We all change." 33

A few years earlier, Putin remarkably admitted that initially he had "illusions" about foreign policy, without specifying which. Perhaps he meant this, spoken at the end of 2017: "Our most serious mistake in relations with the West is that we trusted you too much. And your mistake is that you took that trust as weakness and abused it." 34

P. Philips, December 6, 2018

"In a Time of Universal Deceit -- Telling the Truth Is a Revolutionary Act"

"In a Time of Universal Deceit -- Telling the Truth Is a Revolutionary Act" is a well known quotation (but probably not of George Orwell). And in telling the truth about Russia and that the current "war of nerves" is not in the interests of either the American People or national security, Professor Cohen in this book has in fact done a revolutionary act.

Like a denizen of Plato's cave, or someone inside the film The Matrix, most people have no idea what the truth is. And the questions raised by Professor Cohen are a great service in the cause of the truth. As Professor Cohen writes in his introduction, "To His Readers":

"My scholarly work -- my biography of Nikolai Bukharin and essays collected in Rethinking the Soviet Experience and Soviet Fates and Lost Alternatives, for example -- has always been controversial because it has been what scholars term "revisionist" -- reconsiderations, based on new research and perspectives, of prevailing interpretations of Soviet and post-Soviet Russian history. But the "controversy" surrounding me since 2014, mostly in reaction to the contents of this book, has been different -- inspired by usually vacuous, defamatory assaults on me as "Putin's No. 1 American Apologist," "Best Friend," and the like. I never respond specifically to these slurs because they offer no truly substantive criticism of my arguments, only ad hominem attacks. Instead, I argue, as readers will see in the first section, that I am a patriot of American national security, that the orthodox policies my assailants promote are gravely endangering our security, and that therefore we -- I and others they assail -- are patriotic heretics. Here too readers can judge."

Cohen, Stephen F.. War with Russia (Kindle Locations 131-139). Hot Books. Kindle Edition.

Professor Cohen is indeed a patriot of the highest order. The American and "globalist" elites, particularly those of the dysfunctional United Kingdom, are engaging in a war of nerves with Russia. This war, which could turn nuclear for reasons discussed in this important book, is of no benefit to any person or nation.

Indeed, with the hysteria on "climate change" isn't it odd that other than Professor Cohen's voice, there are no prominent figures warning of the devastation that nuclear war would bring?

If you are a viewer of one of the legacy media outlets, be it the cable television networks (with the exception of Tucker Carlson on Fox, who has Professor Cohen as a frequent guest) or newspapers such as The New York Times, you have been exposed to falsehoods by remarkably ignorant individuals: ignorant of history, of the true nature of Russia (which defeated the Nazis in Europe at a loss of millions of lives) and, most important, of actual military experience. America is neither an invincible nor an exceptional nation. And for those familiar with the terminology of ancient history, it appears the so-called elites are suffering from hubris.

I cannot recommend Professor Cohen's work with sufficient superlatives; his arguments are erudite, clearly stated, supported by the facts and ultimately irrefutable. If enough people find Professor Cohen's work and raise their voices to their oblivious politicians and profiteers from war to stop further confrontation between Russia and America, then this book has served a noble purpose.

If nothing else, educate yourself by reading this work to discover what the *truth* is. And the truth is something sacred.

America and the world owe Professor Cohen a great debt. "Blessed are the peace makers..."

[Mar 31, 2019] George Nader (an adviser to the crown prince of Abu Dhabi): Nobody would even waste a cup of coffee on him if it wasn't for who he was married to

Notable quotes:
"... She suggests, "Kushner was increasingly caught up in his own mythology. He was the president's son-in-law, so he apparently thought he was untouchable." (Pg. 114) She notes, "allowing Kushner to work in the administration broke with historical precedent, overruling a string of Justice Department memos that concluded it was illegal for presidents to appoint relatives as White House staff." (Pg. 119) ..."
"... She observes, "Those first few days were chaotic for almost everyone in the new administration. A frantic Reince Priebus would quickly discover that it was impossible to impose any kind of order in this White House, in large part because Trump didn't like order. What Trump liked was having people fight in front of him and then he'd make a decision, just like he'd made snap decisions when his children presented licensing deals for the Trump Organization. This kind of dysfunction enabled a 'floater' like Kushner, whose job was undefined, to weigh in on any topic in front of Trump and have far more influence than he would have had in a top-down hierarchy." (Pg. 125) ..."
Mar 31, 2019 |

Steven H Propp TOP 50 REVIEWER 5.0 out of 5 stars March 27, 2019


Author Vicky Ward wrote in the Prologue to this 2019 book, "Donald Trump was celebrating being sworn in as president... And the whole world knew that his daughter and son-in-law were his most trusted advisers, ambassadors, and coconspirators. They were an attractive couple---extremely wealthy and, now, extraordinarily powerful. Ivanka looked like Cinderella... Ivanka and her husband swept onto the stage, deftly deflecting attention from Donald Trump's clumsy moves, as she had done so often over the past twenty years. The crowd roared in approval... They were now America's prince and princess."

She notes, "Jared Kushner learned about the company [his father's] he would later run. Jared was the firm's most sheltered trainee. On his summer vacations, he'd go to work at Kushner Companies construction sites, maybe painting a few walls, more often sitting and listening to music No one dared tell him this probably would not give him a deep understanding of the construction process. But Charlie [Jared's father] doggedly groomed his eldest son for greatness, seeing himself as a Jewish version of Joseph Kennedy " (Pg. 17-18)

She states, "Ivanka had to fight for her father's attention and her ultimate role as the chief heir in his real estate empire When Donald Trump divorced her mother, Ivana she would go out of her way to see more of her father, not less she'd call him during the day and to her delight, he'd always take her call. (Trump's relationship with the two sons he had with Ivana, Don Jr. and Eric, was not nearly so close for years.) 'She was always Daddy's little girl,' said a family friend." (Pg. 32-33) She adds, "As Ivanka matured, physically and emotionally, her father talked openly about how impressed he was with her appearance---a habit he has maintained to this day." (Pg. 35)

She recounts, "at a networking lunch thrown by a diamond heir Jared was introduced to Ivanka Jared and Ivanka quickly became an intriguing gossip column item. They seemed perfectly matched But after a year of dating, they split in part because Jared's parents were dismayed at the idea of their son marrying outside the faith Soon after, Ivanka agreed to convert to Judaism Trump was said to be discombobulated by the enormity of what his daughter had done. Trump, a Presbyterian, who strikes no one as particularly religious, was baffled by his daughter's conversion 'Why should my daughter convert to marry anyone?'" (Pg. 51-53)

She observes, "Ivanka Trump was critical in promoting her husband as the smoother, softer counterpart to his father's volatility.. they could both work a room, ask after people's children, talk without notes, occasionally fake a sense of humor And unlike her husband, she seemed to have a ready command of figures and a detail, working knowledge of all the properties she was involved in Ivanka seemed to control the marital relationship, but she also played the part of devoted, traditional Orthodox wife." (Pg. 70-71)

Of 2016, she states, "No one thought Kushner or Ivanka believed in Trump's populist platform. 'The two of them see this as a networking opportunity,' said a close associate. Because Kushner and Ivanka only fully immersed themselves in Trump's campaign once he became the presumptive Republican nominee... they had to push to assert themselves with the campaign staff... Kushner quickly got control of the campaign's budget, but he did not have as much authority as he would have liked." (Pg. 74-75) She adds, "Ivanka appeared thrilled by her husband's rising prominence in her father's campaign. It was a huge change from the days when Trump had made belittling jokes about him. If Don Jr. and Eric were irked by the new favorite in Trump's court, they did not show it publicly." (Pg. 85)

She points out, "Trump tweeted an image [Hillary with a backdrop of money and a Star of David] widely viewed as anti-Semitic an 'Observer' writer, criticized Kushner in his own newspaper for standing 'silent and smiling in the background' while Trump made 'repeated accidental winks' to white supremacists Kushner wrote a response [that] insisted that Trump was neither anti-Semitic nor a racist Not all of Kushner's relatives appreciated his efforts to cover Trump's pandering to white supremacists." (Pg. 86-87) Later, she adds, "U.S.-Israel relations was the one political issue anyone in the campaign ever saw Kushner get worked up about." (Pg. 96)

On election night, "Kushner was shocked that Trump never mentioned him in his speech and would later tell people he felt slighted. He was going to find a way to get Trump to notice him more. Ivanka would help him... the couple would become known as a single, powerful entity: 'Javanka.'" (Pg. 101) She suggests, "Kushner was increasingly caught up in his own mythology. He was the president's son-in-law, so he apparently thought he was untouchable." (Pg. 114) She notes, "allowing Kushner to work in the administration broke with historical precedent, overruling a string of Justice Department memos that concluded it was illegal for presidents to appoint relatives as White House staff." (Pg. 119)

She observes, "Those first few days were chaotic for almost everyone in the new administration. A frantic Reince Priebus would quickly discover that it was impossible to impose any kind of order in this White House, in large part because Trump didn't like order. What Trump liked was having people fight in front of him and then he'd make a decision, just like he'd made snap decisions when his children presented licensing deals for the Trump Organization. This kind of dysfunction enabled a 'floater' like Kushner, whose job was undefined, to weigh in on any topic in front of Trump and have far more influence than he would have had in a top-down hierarchy." (Pg. 125)

She recounts, "Another epic [Steve] Bannon/Ivanka fight came when bannon was in the Oval Office dining room while Trump was watching TV and eating his lunch Ivanka marched in, claiming Bannon had leaked H.R. McMaster's war plan [Bannon said] 'No, that was leaked by McMaster ' Trump [told her], 'Hey, baby, I think Steve's right on this one ' Bannon thought he would be fired on the spot. But he'd learned something important: much as Trump loved his daughter and hated saying no to her, he was not always controlled by her." (Pg. 138-139)

She notes, "[Ivanka] also found a way to be near Trump when he received phone calls from foreign dignitaries -- while she still owned her business. While Ivanka's behavior was irritating, Kushner was playing a game on a whole different level: he was playing for serious money at the time of the Qatari blockade Kushner's family had been courting the Qataris for financial help and had been turned town. When that story broke the blockade and the Trump administration's response to it suddenly all made sense." (Pg. 156)

Arguing that "Kushner was behind the decision to fire [FBI Director James] Comey" (Pg. 163-164), "Quickly, Trump realized he'd made an error, and blamed Kushner. It seemed clear to Trump's advisers, and not for the first time, that he wished Kushner were not in the White House. He said to Kushner in front of senior staff, 'Just go back to New York, man '" (Pg. 167) She adds, "[Ivanka's] reluctance to speak frankly to her father was the antithesis of the story she had been pushing in the media Ivanka had told Gayle King 'Where I disagree with my father, he knows it. And I express myself with total candor.'" (Pg. 170)

She states, "at the Group of 20 summit in Germany she briefly took her father's seat when he had to step out The gesture seemed to send the message that the U.S. government was now run on nepotism." (Pg. 182)

E-mails from George Nader [an adviser to Sheikh Mohammed bin Zayed Al Nahyan, the crown prince of Abu Dhabi] "made it clear that Kushner's friends in the Gulf mocked him behind his back... Nader wrote, 'Nobody would even waste a cup of coffee on him if it wasn't for who he was married to.'" (Pg. 206)

She points out, "since October 2017, hundreds of children had been taken from their parents while attempting to cross the U.S.-Mexico border and detained separately news shows everywhere showed heartbreaking images of young children being detained. The next month, Ivanka posted on Instagram a photograph of herself holding her youngest child in his pajamas. Not for the first time, her tone-deaf social media post was slammed as being isolated in her elitist, insulated wealthy world On June 20, Trump signed an executive order that apparently ended the border separations. Minutes later, Ivanka finally spoke publicly on the issue Her tactic here was tell the public you care about an issue; watch silently while your father does the exact opposite; and when he moves a little, take all the credit." (Pg. 225)

She asserts, "Kushner's friendship with a Saudi crown prince was now under widespread scrutiny [because] Rather than expressing moral outrage over the cold-blooded murder of an innocent man [Saudi journalist Jamal Khashoggi], Kushner did what he always does in a crisis: he went quiet." (Pg. 232)

She concludes, "Ivanka Trump has made no secret of the fact that she wants to be the most powerful woman in the world. Her father's reign in Washington, D.C., is, she believes, the beginning of a great American dynasty Ivanka has been carefully positioning herself as [Trump's] political heir " (Pg. 236)

While not as "scandalous" as the book's subtitle might suggest, this is a very interesting book that will be of great interest to those wanting information about these crucial members of the Trump family and presidency.

[Mar 28, 2019] Was MAGA a con job?

Notable quotes:
"... Until the Crash of the Great Recession, after which we entered a "Punitive" stage, blaming "Those Others" for buying into faulty housing deals, for wanting a safety net of health care insurance, for resurgent terrorism beyond our borders, and, as the article above indicates, for having an equal citizen's voice in the electoral process. ..."
"... What needs to be restored is the purpose that "the economy works for the PEOPLE of the nation", not the other way around, as we've witnessed for the last four decades. ..."
Feb 26, 2019 |

Kindle Customer, December 8, 2018

5.0 out of 5 stars How and Why the MAGA-myth Consumed Itself

Just finished reading this excellent book on how corporatist NeoLiberalism and the Xristianists merged their ideologies to form the Conservative Coalition in the 1970s, and to then hijack the RepubliCAN party of Abe, Teddy, Ike (and Poppy Bush).

The author describes three phases of the RepugliCONs' zero-sum game:

The "Combative" stage of Reagan sought to restore "family values" (aka patriarchal hierarchy) to the moral depravity of Sixties youth and the uppity claims to equal rights by blacks and feminists.

In the "Normative" stage of Gingrich and W Bush, the NeoConservatives claimed victory over Godless Communism and the NeoLibs took credit for an expanding economy (due mostly by technology, not to Fed policy). They were happy to say "Aren't you happy now?" with sole ownership of the Free World and its markets, yet ignoring various Black Swan events and global trends they actually had no control over.

Until the Crash of the Great Recession, after which we entered a "Punitive" stage, blaming "Those Others" for buying into faulty housing deals, for wanting a safety net of health care insurance, for resurgent terrorism beyond our borders, and, as the article above indicates, for having an equal citizen's voice in the electoral process.

What was unexpected was that the libertarian mutiny by the TeaParty would become so nasty and vicious, leading the Pirate Trump to scavenge what little was left of American Democracy for his own treasure.

What needs to be restored is the purpose that "the economy works for the PEOPLE of the nation", not the other way around, as we've witnessed for the last four decades.

[Jan 14, 2019] Spygate: The Attempted Sabotage of Donald J. Trump

Notable quotes:
"... Elections are just for show like many trials in the old USSR. The in power Party is the power NOT the individual voting citizens. In the end this book is about exposing the pernicious activities of those who would place themselves above the voting citizens of America. ..."
Jan 14, 2019 |

Johnny G 5.0 out of 5 stars The Complex Made Easy! October 9, 2018 Format: Hardcover Verified Purchase

Regardless of your politics this is a must-read book. The authors do a wonderful job of peeling back the layered onion that is being referred to as "Spygate." The book reads like an imaginative spy thriller, except it is as real as a fist in the stomach or the death of your best friend. In this case it is our Constitution that is victimized by individuals entrusted with "protecting and defending it from all enemies DOMESTIC and foreign."

This is in many ways a sad tale of ambition, weak men, political operatives & hubris-ridden bureaucrats. The end result, IF this type of activity is not punished and roundly condemned by ALL Americans, could be a descent into a Solzhenitsyn GULAG type of Deep State government run by unaccountable political appointees and bureaucrats.

Elections are just for show like many trials in the old USSR. The in power Party is the power NOT the individual voting citizens. In the end this book is about exposing the pernicious activities of those who would place themselves above the voting citizens of America. ALL Americans should be aware of those forces seen and unseen that seek to injure our Constitutional Republic. This book is footnoted extensively lest anyone believes it is a polemic political offering.

JAK 5.0 out of 5 stars The truth hurts and that's the truth October 11, 2018 Format: Hardcover Verified Purchase

This book has content that you will not see or find anywhere else. While the topic itself is covered elsewhere in large mainstream media outlets, the truth of what is actually happening is rarely ever exposed.

If there were a six-star rating, or anything higher, this book would receive it, because the truth is all that matters.

This book is put together with so many far-left news stories (CNN, BLOOMBERG, DLSTE, YAHOO, etc.) supporting the facts of what happened. It's possible to say "oh well, that just didn't happen," but it was reported by the left itself, and when you put all of the pieces of the puzzle together it is painfully obvious what happened.

If these people involved don't go to jail, the death of our Republic has already happened.

[Mar 19, 2018] PyCharm - Python IDE Full Review

An increasingly popular installation method: "snap install pycharm-community --classic".
Mar 19, 2018 |

PyCharm is a powerful Integrated Development Environment that can be used to develop Python applications, web apps, and even data analysis tools. PyCharm has everything a Python developer needs. The IDE is full of surprises and keyboard shortcuts that will leave you impressed and at the same time satisfied that your projects are completed on time. Good work from JetBrains. Couldn't have done any better.

[Dec 16, 2017] 3. Data model -- Python 3.6.4rc1 documentation

Notable quotes:
"... __slots__ ..."
"... Note that the current implementation only supports function attributes on user-defined functions. Function attributes on built-in functions may be supported in the future. ..."
"... generator function ..."
"... coroutine function ..."
"... asynchronous generator function ..."
"... operator overloading ..."
"... __init_subclass__ ..."
"... context manager ..."
"... asynchronous iterable ..."
"... asynchronous iterator ..."
"... asynchronous iterator ..."
"... asynchronous context manager ..."
"... context manager ..."
Dec 16, 2017 |

3. Data model

3.1. Objects, values and types

Objects are Python's abstraction for data. All data in a Python program is represented by objects or by relations between objects. (In a sense, and in conformance to Von Neumann's model of a "stored program computer," code is also represented by objects.)

Every object has an identity, a type and a value. An object's identity never changes once it has been created; you may think of it as the object's address in memory. The 'is' operator compares the identity of two objects; the id() function returns an integer representing its identity.

CPython implementation detail: For CPython, id(x) is the memory address where x is stored.
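A short interpreter sketch (plain CPython, no extra libraries) showing how 'is' and id() track identity while '==' compares values:

```python
# `is` compares object identity; `==` compares values.
a = [1, 2, 3]
b = a          # b is bound to the very same object
c = [1, 2, 3]  # a distinct object that happens to be equal

print(a is b)          # True  -- one object, two names
print(a is c)          # False -- equal values, different identities
print(a == c)          # True
print(id(a) == id(b))  # True  -- id() returns the identity as an integer
```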

An object's type determines the operations that the object supports (e.g., "does it have a length?") and also defines the possible values for objects of that type. The type() function returns an object's type (which is an object itself). Like its identity, an object's type is also unchangeable. [1]

The value of some objects can change. Objects whose value can change are said to be mutable ; objects whose value is unchangeable once they are created are called immutable . (The value of an immutable container object that contains a reference to a mutable object can change when the latter's value is changed; however the container is still considered immutable, because the collection of objects it contains cannot be changed. So, immutability is not strictly the same as having an unchangeable value, it is more subtle.) An object's mutability is determined by its type; for instance, numbers, strings and tuples are immutable, while dictionaries and lists are mutable.
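A minimal illustration of the distinction: mutating a list preserves its identity, while "changing" a string or tuple can only produce a new object:

```python
# Lists are mutable: in-place changes preserve identity.
nums = [1, 2, 3]
before = id(nums)
nums.append(4)
assert id(nums) == before and nums == [1, 2, 3, 4]

# Strings and tuples are immutable: "modifying" them builds a new object.
s = "abc"
t = s + "d"          # new string; s is unchanged
assert s == "abc" and t == "abcd"

point = (1, 2)
try:
    point[0] = 9     # tuples reject item assignment
except TypeError as e:
    print("immutable:", e)
```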

Objects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. An implementation is allowed to postpone garbage collection or omit it altogether -- it is a matter of implementation quality how garbage collection is implemented, as long as no objects are collected that are still reachable.

CPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed detection of cyclically linked garbage, which collects most objects as soon as they become unreachable, but is not guaranteed to collect garbage containing circular references. See the documentation of the gc module for information on controlling the collection of cyclic garbage. Other implementations act differently and CPython may change. Do not depend on immediate finalization of objects when they become unreachable (so you should always close files explicitly).

Note that the use of the implementation's tracing or debugging facilities may keep objects alive that would normally be collectable. Also note that catching an exception with a 'try...except' statement may keep objects alive.

Some objects contain references to "external" resources such as open files or windows. It is understood that these resources are freed when the object is garbage-collected, but since garbage collection is not guaranteed to happen, such objects also provide an explicit way to release the external resource, usually a close() method. Programs are strongly recommended to explicitly close such objects. The ' try finally ' statement and the ' with ' statement provide convenient ways to do this.

Some objects contain references to other objects; these are called containers . Examples of containers are tuples, lists and dictionaries. The references are part of a container's value. In most cases, when we talk about the value of a container, we imply the values, not the identities of the contained objects; however, when we talk about the mutability of a container, only the identities of the immediately contained objects are implied. So, if an immutable container (like a tuple) contains a reference to a mutable object, its value changes if that mutable object is changed.

Types affect almost all aspects of object behavior. Even the importance of object identity is affected in some sense: for immutable types, operations that compute new values may actually return a reference to any existing object with the same type and value, while for mutable objects this is not allowed. E.g., after a = 1; b = 1, a and b may or may not refer to the same object with the value one, depending on the implementation, but after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists. (Note that c = d = [] assigns the same object to both c and d.)

3.2. The standard type hierarchy

Below is a list of the types that are built into Python. Extension modules (written in C, Java, or other languages, depending on the implementation) can define additional types. Future versions of Python may add types to the type hierarchy (e.g., rational numbers, efficiently stored arrays of integers, etc.), although such additions will often be provided via the standard library instead.

Some of the type descriptions below contain a paragraph listing 'special attributes.' These are attributes that provide access to the implementation and are not intended for general use. Their definition may change in the future.


This type has a single value. There is a single object with this value. This object is accessed through the built-in name None . It is used to signify the absence of a value in many situations, e.g., it is returned from functions that don't explicitly return anything. Its truth value is false.


This type has a single value. There is a single object with this value. This object is accessed through the built-in name NotImplemented . Numeric methods and rich comparison methods should return this value if they do not implement the operation for the operands provided. (The interpreter will then try the reflected operation, or some other fallback, depending on the operator.) Its truth value is true.

See Implementing the arithmetic operations for more details.


This type has a single value. There is a single object with this value. This object is accessed through the literal ... or the built-in name Ellipsis . Its truth value is true.


These are created by numeric literals and returned as results by arithmetic operators and arithmetic built-in functions. Numeric objects are immutable; once created their value never changes. Python numbers are of course strongly related to mathematical numbers, but subject to the limitations of numerical representation in computers.

Python distinguishes between integers, floating point numbers, and complex numbers:


These represent elements from the mathematical set of integers (positive and negative).

There are two types of integers:

Integers ( int )

These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2's complement which gives the illusion of an infinite string of sign bits extending to the left.
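A quick demonstration of both properties described above, the unlimited range and the "infinite sign bits" model for negative numbers:

```python
# Python ints have unlimited range; shifts never overflow.
big = 1 << 100           # 2**100, far beyond any machine word
assert big == 2 ** 100

# Negative numbers behave as if they had an infinite string of
# sign bits, so masking yields the two's-complement low bits.
assert -1 & 0xFF == 255  # all low 8 bits of -1 are set
assert (-2) >> 1 == -1   # shifts are arithmetic (sign-extending)
```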
Booleans ( bool )

These represent the truth values False and True. The two objects representing the values False and True are the only Boolean objects. The Boolean type is a subtype of the integer type, and Boolean values behave like the values 0 and 1, respectively, in almost all contexts, the exception being that when converted to a string, the strings "False" or "True" are returned, respectively.
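A few lines at the interpreter confirm that bool is a subtype of int, with the string conversion as the notable exception:

```python
# bool is a subtype of int: True and False behave as 1 and 0,
# except when converted to strings.
assert isinstance(True, int)
assert True + True == 2
assert sum([True, False, True]) == 2   # a handy idiom for counting matches
assert str(True) == "True"             # the string form differs from "1"
```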

The rules for integer representation are intended to give the most meaningful interpretation of shift and mask operations involving negative integers.
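The integer and Boolean behavior described above can be checked directly in the interpreter:

```python
# bool is a subtype of int: True and False behave like 1 and 0
assert isinstance(True, int)
assert True + True == 2          # arithmetic treats booleans as 1 and 0
assert [10, 20, 30][True] == 20  # so does indexing

# the exception: string conversion yields "True"/"False", not "1"/"0"
assert str(True) == "True"

# for mask operations a negative integer acts as if it had an infinite
# string of sign bits extending to the left (2's complement variant)
assert -1 & 0xFF == 0xFF
print("all integer/boolean checks passed")
```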

numbers.Real ( float )

These represent machine-level double precision floating point numbers. You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating point numbers.

numbers.Complex ( complex )

These represent complex numbers as a pair of machine-level double precision floating point numbers. The same caveats apply as for floating point numbers. The real and imaginary parts of a complex number can be retrieved through the read-only attributes z.real and z.imag .


Sequences

These represent finite ordered sets indexed by non-negative numbers. The built-in function len() returns the number of items of a sequence. When the length of a sequence is n , the index set contains the numbers 0, 1, ..., n-1. Item i of sequence a is selected by a[i] .

Sequences also support slicing: a[i:j] selects all items with index k such that i <= k < j . When used as an expression, a slice is a sequence of the same type. This implies that the index set is renumbered so that it starts at 0.

Some sequences also support "extended slicing" with a third "step" parameter: a[i:j:k] selects all items of a with index x where x = i + n*k , n >= 0 and i <= x < j .
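The indexing, slicing, and extended-slicing rules above can be sketched with a short list:

```python
a = ['p', 'y', 't', 'h', 'o', 'n']

assert len(a) == 6          # len() returns the number of items
assert a[2] == 't'          # item i is selected by a[i]

# a[i:j] selects items with index k such that i <= k < j;
# the resulting slice is renumbered starting at 0
s = a[2:5]
assert s == ['t', 'h', 'o']
assert s[0] == 't'

# extended slicing: a[i:j:k] selects x = i + n*k for n >= 0, i <= x < j
assert a[0:6:2] == ['p', 't', 'o']
```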

Sequences are distinguished according to their mutability:

Immutable sequences

An object of an immutable sequence type cannot change once it is created. (If the object contains references to other objects, these other objects may be mutable and may be changed; however, the collection of objects directly referenced by an immutable object cannot change.)

The following types are immutable sequences:


Strings

A string is a sequence of values that represent Unicode code points. All the code points in the range U+0000 - U+10FFFF can be represented in a string. Python doesn't have a char type; instead, every code point in the string is represented as a string object with length 1. The built-in function ord() converts a code point from its string form to an integer in the range 0 - 10FFFF ; chr() converts an integer in the range 0 - 10FFFF to the corresponding length 1 string object. str.encode() can be used to convert a str to bytes using the given text encoding, and bytes.decode() can be used to achieve the opposite.
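A short round trip through ord(), chr(), encode(), and decode() illustrates the point:

```python
s = "\u0394x"                 # a 2-character string: GREEK CAPITAL DELTA, 'x'
assert len(s) == 2

assert ord(s[0]) == 0x0394    # length-1 string -> integer code point
assert chr(0x0394) == s[0]    # integer code point -> length-1 string

b = s.encode("utf-8")         # str -> bytes, using a text encoding
assert b == b"\xce\x94x"
assert b.decode("utf-8") == s # and back again
```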


Tuples

The items of a tuple are arbitrary Python objects. Tuples of two or more items are formed by comma-separated lists of expressions. A tuple of one item (a 'singleton') can be formed by affixing a comma to an expression (an expression by itself does not create a tuple, since parentheses must be usable for grouping of expressions). An empty tuple can be formed by an empty pair of parentheses.
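The singleton-comma rule is a common stumbling block, so a quick check:

```python
t = 1, 2, 3           # comma-separated expressions form a tuple
single = (42,)        # the trailing comma makes the singleton tuple
not_a_tuple = (42)    # parentheses alone only group: this is an int
empty = ()            # empty pair of parentheses: empty tuple

assert t == (1, 2, 3)
assert len(single) == 1 and single[0] == 42
assert isinstance(not_a_tuple, int)
assert len(empty) == 0
```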


Bytes

A bytes object is an immutable array. The items are 8-bit bytes, represented by integers in the range 0 <= x < 256. Bytes literals (like b'abc' ) and the built-in bytes() constructor can be used to create bytes objects. Also, bytes objects can be decoded to strings via the decode() method.
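Note that indexing a bytes object yields small integers, not characters:

```python
b = b"abc"                         # a bytes literal
assert b[0] == 97                  # items are integers in range(256)
assert bytes([104, 105]) == b"hi"  # the bytes() constructor
assert b.decode("ascii") == "abc"  # decoding yields a str

try:
    b[0] = 98                      # immutable: item assignment fails
except TypeError:
    pass
```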

Mutable sequences

Mutable sequences can be changed after they are created. The subscription and slicing notations can be used as the target of assignment and del (delete) statements.

There are currently two intrinsic mutable sequence types:


Lists

The items of a list are arbitrary Python objects. Lists are formed by placing a comma-separated list of expressions in square brackets. (Note that there are no special cases needed to form lists of length 0 or 1.)

Byte Arrays

A bytearray object is a mutable array. They are created by the built-in bytearray() constructor. Aside from being mutable (and hence unhashable), byte arrays otherwise provide the same interface and functionality as immutable bytes objects.
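A brief contrast with the immutable bytes type:

```python
ba = bytearray(b"abc")
ba[0] = ord("x")        # mutable in place, unlike bytes
ba.append(0x21)         # ASCII '!'
assert ba == bytearray(b"xbc!")

try:
    hash(ba)            # mutable, hence unhashable
except TypeError:
    pass
```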

The extension module array provides an additional example of a mutable sequence type, as does the collections module.

Set types

These represent unordered, finite sets of unique, immutable objects. As such, they cannot be indexed by any subscript. However, they can be iterated over, and the built-in function len() returns the number of items in a set. Common uses for sets are fast membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference.

For set elements, the same immutability rules apply as for dictionary keys. Note that numeric types obey the normal rules for numeric comparison: if two numbers compare equal (e.g., 1 and 1.0 ), only one of them can be contained in a set.
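The numeric-comparison rule and the common uses mentioned above are easy to demonstrate:

```python
s = {1, 1.0, 2, True}
# 1, 1.0 and True all compare equal, so only one of them survives
assert s == {1, 2}
assert len(s) == 2
assert 1.0 in s              # fast membership testing

# removing duplicates from a sequence
assert set("mississippi") == {'m', 'i', 's', 'p'}

# mathematical operations
assert {1, 2} | {2, 3} == {1, 2, 3}   # union
assert {1, 2} & {2, 3} == {2}         # intersection
```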

There are currently two intrinsic set types:


These represent a mutable set. They are created by the built-in set() constructor and can be modified afterwards by several methods, such as add() .

Frozen sets

These represent an immutable set. They are created by the built-in frozenset() constructor. As a frozenset is immutable and hashable , it can be used again as an element of another set, or as a dictionary key.
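Because a frozenset is hashable, it can appear where a plain set cannot:

```python
fs = frozenset({1, 2})

nested = {fs, frozenset({3})}          # usable as a set element
table = {fs: "low", frozenset({3}): "high"}  # and as a dictionary key

assert fs in nested
assert table[frozenset({1, 2})] == "low"

try:
    fs.add(9)                          # immutable: no mutating methods
except AttributeError:
    pass
```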


Mappings

These represent finite sets of objects indexed by arbitrary index sets. The subscript notation a[k] selects the item indexed by k from the mapping a ; this can be used in expressions and as the target of assignments or del statements. The built-in function len() returns the number of items in a mapping.

There is currently a single intrinsic mapping type:


Dictionaries

These represent finite sets of objects indexed by nearly arbitrary values. The only types of values not acceptable as keys are values containing lists or dictionaries or other mutable types that are compared by value rather than by object identity, the reason being that the efficient implementation of dictionaries requires a key's hash value to remain constant. Numeric types used for keys obey the normal rules for numeric comparison: if two numbers compare equal (e.g., 1 and 1.0 ) then they can be used interchangeably to index the same dictionary entry.

Dictionaries are mutable; they can be created by the {...} notation (see section Dictionary displays ).
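The key rules above in miniature:

```python
d = {"a": 1, 1: "int key", (0, 1): "tuple key"}

assert d[1] == "int key"
d[1.0] = "float key"        # 1 == 1.0, so this overwrites the same entry
assert d[1] == "float key"
assert len(d) == 3

try:
    d[[0, 1]] = "oops"      # lists are mutable, hence unhashable
except TypeError:
    pass
```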

The extension modules dbm.ndbm and dbm.gnu provide additional examples of mapping types, as does the collections module.

Callable types

These are the types to which the function call operation (see section Calls ) can be applied:

User-defined functions

A user-defined function object is created by a function definition (see section Function definitions ). It should be called with an argument list containing the same number of items as the function's formal parameter list.

Special attributes:

Attribute Meaning
__doc__ The function's documentation string, or None if unavailable; not inherited by subclasses. Writable
__name__ The function's name. Writable
__qualname__ The function's qualified name. New in version 3.3. Writable
__module__ The name of the module the function was defined in, or None if unavailable. Writable
__defaults__ A tuple containing default argument values for those arguments that have defaults, or None if no arguments have a default value. Writable
__code__ The code object representing the compiled function body. Writable
__globals__ A reference to the dictionary that holds the function's global variables -- the global namespace of the module in which the function was defined. Read-only
__dict__ The namespace supporting arbitrary function attributes. Writable
__closure__ None or a tuple of cells that contain bindings for the function's free variables. Read-only
__annotations__ A dict containing annotations of parameters. The keys of the dict are the parameter names, and 'return' for the return annotation, if provided. Writable
__kwdefaults__ A dict containing defaults for keyword-only parameters. Writable

Most of the attributes labelled "Writable" check the type of the assigned value.

Function objects also support getting and setting arbitrary attributes, which can be used, for example, to attach metadata to functions. Regular attribute dot-notation is used to get and set such attributes. Note that the current implementation only supports function attributes on user-defined functions. Function attributes on built-in functions may be supported in the future.

Additional information about a function's definition can be retrieved from its code object; see the description of internal types below.
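A small sketch poking at these attributes on an ordinary function (the function itself is invented for the example):

```python
def greet(name, punct="!"):
    """Return a greeting."""
    return "Hello, " + name + punct

assert greet.__name__ == "greet"
assert greet.__doc__ == "Return a greeting."
assert greet.__defaults__ == ("!",)       # defaults for trailing arguments
assert greet.__code__.co_argcount == 2    # from the attached code object

# arbitrary attributes can be attached and live in __dict__
greet.registered = True
assert greet.__dict__["registered"] is True
```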

Instance methods

An instance method object combines a class, a class instance and any callable object (normally a user-defined function).

Special read-only attributes: __self__ is the class instance object, __func__ is the function object; __doc__ is the method's documentation (same as __func__.__doc__ ); __name__ is the method name (same as __func__.__name__ ); __module__ is the name of the module the method was defined in, or None if unavailable.

Methods also support accessing (but not setting) the arbitrary function attributes on the underlying function object.

User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a user-defined function object or a class method object.

When an instance method object is created by retrieving a user-defined function object from a class via one of its instances, its __self__ attribute is the instance, and the method object is said to be bound. The new method's __func__ attribute is the original function object.

When a user-defined method object is created by retrieving another method object from a class or instance, the behaviour is the same as for a function object, except that the __func__ attribute of the new instance is not the original method object but its __func__ attribute.

When an instance method object is created by retrieving a class method object from a class or instance, its __self__ attribute is the class itself, and its __func__ attribute is the function object underlying the class method.

When an instance method object is called, the underlying function ( __func__ ) is called, inserting the class instance ( __self__ ) in front of the argument list. For instance, when C is a class which contains a definition for a function f() , and x is an instance of C , calling x.f(1) is equivalent to calling C.f(x, 1) .

When an instance method object is derived from a class method object, the "class instance" stored in __self__ will actually be the class itself, so that calling either x.f(1) or C.f(1) is equivalent to calling f(C, 1) where f is the underlying function.

Note that the transformation from function object to instance method object happens each time the attribute is retrieved from the instance. In some cases, a fruitful optimization is to assign the attribute to a local variable and call that local variable. Also notice that this transformation only happens for user-defined functions; other callable objects (and all non-callable objects) are retrieved without transformation. It is also important to note that user-defined functions which are attributes of a class instance are not converted to bound methods; this only happens when the function is an attribute of the class.
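The binding rules above can be sketched with a toy class (C, f, D, and g are names invented for the example):

```python
class C:
    def f(self, n):
        return n + 1

x = C()
m = x.f                     # attribute retrieval creates the bound method
assert m.__self__ is x      # __self__ is the instance
assert m.__func__ is C.f    # __func__ is the original function
assert x.f(1) == C.f(x, 1)  # equivalent calls

class D:
    @classmethod
    def g(cls, n):
        return (cls.__name__, n)

# for a class method, __self__ is the class itself
assert D.g.__self__ is D
assert D().g(1) == D.g(1) == ("D", 1)
```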

Generator functions

A function or method which uses the yield statement (see section The yield statement ) is called a generator function . Such a function, when called, always returns an iterator object which can be used to execute the body of the function: calling the iterator's iterator.__next__() method will cause the function to execute until it provides a value using the yield statement. When the function executes a return statement or falls off the end, a StopIteration exception is raised and the iterator will have reached the end of the set of values to be returned.
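A minimal generator function showing the call/resume cycle:

```python
def countdown(n):
    while n > 0:
        yield n             # suspends here, handing back a value
        n -= 1

it = countdown(3)           # calling it builds an iterator; no body runs yet
assert next(it) == 3        # __next__() executes up to the next yield
assert next(it) == 2
assert list(it) == [1]      # falling off the end raises StopIteration,
                            # which list() absorbs as end-of-iteration
```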

Coroutine functions

A function or method which is defined using async def is called a coroutine function . Such a function, when called, returns a coroutine object. It may contain await expressions, as well as async with and async for statements. See also the Coroutine Objects section.

Asynchronous generator functions

A function or method which is defined using async def and which uses the yield statement is called an asynchronous generator function . Such a function, when called, returns an asynchronous iterator object which can be used in an async for statement to execute the body of the function.

Calling the asynchronous iterator's aiterator.__anext__() method will return an awaitable which when awaited will execute until it provides a value using the yield expression. When the function executes an empty return statement or falls off the end, a StopAsyncIteration exception is raised and the asynchronous iterator will have reached the end of the set of values to be yielded.
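A minimal sketch (requires Python 3.7+ for asyncio.run()):

```python
import asyncio

async def countdown(n):
    # async def + yield makes this an asynchronous generator function
    while n > 0:
        yield n
        n -= 1

async def collect():
    # the asynchronous iterator is consumed with async for
    return [i async for i in countdown(3)]

result = asyncio.run(collect())
assert result == [3, 2, 1]
```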

Built-in functions

A built-in function object is a wrapper around a C function. Examples of built-in functions are len() and math.sin() ( math is a standard built-in module). The number and type of the arguments are determined by the C function. Special read-only attributes: __doc__ is the function's documentation string, or None if unavailable; __name__ is the function's name; __self__ is set to None (but see the next item); __module__ is the name of the module the function was defined in or None if unavailable.

Built-in methods

This is really a different disguise of a built-in function, this time containing an object passed to the C function as an implicit extra argument. An example of a built-in method is alist.append() , assuming alist is a list object. In this case, the special read-only attribute __self__ is set to the object denoted by alist .

Classes
Classes are callable. These objects normally act as factories for new instances of themselves, but variations are possible for class types that override __new__() . The arguments of the call are passed to __new__() and, in the typical case, to __init__() to initialize the new instance.
Class Instances
Instances of arbitrary classes can be made callable by defining a __call__() method in their class.
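A sketch of a callable instance (the Adder class is invented for the example):

```python
class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):      # makes instances of Adder callable
        return x + self.n

add5 = Adder(5)
assert callable(add5)
assert add5(10) == 15           # add5(10) invokes Adder.__call__(add5, 10)
```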

Modules

Modules are a basic organizational unit of Python code, and are created by the import system as invoked either by the import statement (see import ), or by calling functions such as importlib.import_module() and built-in __import__() . A module object has a namespace implemented by a dictionary object (this is the dictionary referenced by the __globals__ attribute of functions defined in the module). Attribute references are translated to lookups in this dictionary, e.g., m.x is equivalent to m.__dict__["x"] . A module object does not contain the code object used to initialize the module (since it isn't needed once the initialization is done).

Attribute assignment updates the module's namespace dictionary, e.g., m.x is equivalent to m.__dict__["x"] .

Predefined (writable) attributes: __name__ is the module's name; __doc__ is the module's documentation string, or None if unavailable; __annotations__ (optional) is a dictionary containing variable annotations collected during module body execution; __file__ is the pathname of the file from which the module was loaded, if it was loaded from a file. The __file__ attribute may be missing for certain types of modules, such as C modules that are statically linked into the interpreter; for extension modules loaded dynamically from a shared library, it is the pathname of the shared library file.

Special read-only attribute: __dict__ is the module's namespace as a dictionary object.

CPython implementation detail: Because of the way CPython clears module dictionaries, the module dictionary will be cleared when the module falls out of scope even if the dictionary still has live references. To avoid this, copy the dictionary or keep the module around while using its dictionary directly.
Custom classes

Custom class types are typically created by class definitions (see section Class definitions ). A class has a namespace implemented by a dictionary object. Class attribute references are translated to lookups in this dictionary, e.g., C.x is translated to C.__dict__["x"] (although there are a number of hooks which allow for other means of locating attributes). When the attribute name is not found there, the attribute search continues in the base classes. This search of the base classes uses the C3 method resolution order which behaves correctly even in the presence of 'diamond' inheritance structures where there are multiple inheritance paths leading back to a common ancestor. Additional details on the C3 MRO used by Python can be found in the documentation accompanying the 2.3 release.

When a class attribute reference (for class C , say) would yield a class method object, it is transformed into an instance method object whose __self__ attribute is C . When it would yield a static method object, it is transformed into the object wrapped by the static method object. See section Implementing Descriptors for another way in which attributes retrieved from a class may differ from those actually contained in its __dict__ .

Class attribute assignments update the class's dictionary, never the dictionary of a base class.

A class object can be called (see above) to yield a class instance (see below).

Special attributes: __name__ is the class name; __module__ is the module name in which the class was defined; __dict__ is the dictionary containing the class's namespace; __bases__ is a tuple containing the base classes, in the order of their occurrence in the base class list; __doc__ is the class's documentation string, or None if undefined; __annotations__ (optional) is a dictionary containing variable annotations collected during class body execution.

Class instances

A class instance is created by calling a class object (see above). A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance's class has an attribute by that name, the search continues with the class attributes. If a class attribute is found that is a user-defined function object, it is transformed into an instance method object whose __self__ attribute is the instance. Static method and class method objects are also transformed; see above under "Classes". See section Implementing Descriptors for another way in which attributes of a class retrieved via its instances may differ from the objects actually stored in the class's __dict__ . If no class attribute is found, and the object's class has a __getattr__() method, that is called to satisfy the lookup.

Attribute assignments and deletions update the instance's dictionary, never a class's dictionary. If the class has a __setattr__() or __delattr__() method, this is called instead of updating the instance dictionary directly.

Class instances can pretend to be numbers, sequences, or mappings if they have methods with certain special names. See section Special method names .

Special attributes: __dict__ is the attribute dictionary; __class__ is the instance's class.

I/O objects (also known as file objects)

A file object represents an open file. Various shortcuts are available to create file objects: the open() built-in function, and also os.popen() , os.fdopen() , and the makefile() method of socket objects (and perhaps by other functions or methods provided by extension modules).

The objects sys.stdin , sys.stdout and sys.stderr are initialized to file objects corresponding to the interpreter's standard input, output and error streams; they are all open in text mode and therefore follow the interface defined by the io.TextIOBase abstract class.

Internal types

A few types used internally by the interpreter are exposed to the user. Their definitions may change with future versions of the interpreter, but they are mentioned here for completeness.

Code objects

Code objects represent byte-compiled executable Python code, or bytecode . The difference between a code object and a function object is that the function object contains an explicit reference to the function's globals (the module in which it was defined), while a code object contains no context; also the default argument values are stored in the function object, not in the code object (because they represent values calculated at run-time). Unlike function objects, code objects are immutable and contain no references (directly or indirectly) to mutable objects.

Special read-only attributes: co_name gives the function name; co_argcount is the number of positional arguments (including arguments with default values); co_nlocals is the number of local variables used by the function (including arguments); co_varnames is a tuple containing the names of the local variables (starting with the argument names); co_cellvars is a tuple containing the names of local variables that are referenced by nested functions; co_freevars is a tuple containing the names of free variables; co_code is a string representing the sequence of bytecode instructions; co_consts is a tuple containing the literals used by the bytecode; co_names is a tuple containing the names used by the bytecode; co_filename is the filename from which the code was compiled; co_firstlineno is the first line number of the function; co_lnotab is a string encoding the mapping from bytecode offsets to line numbers (for details see the source code of the interpreter); co_stacksize is the required stack size (including local variables); co_flags is an integer encoding a number of flags for the interpreter.

The following flag bits are defined for co_flags : bit 0x04 is set if the function uses the *arguments syntax to accept an arbitrary number of positional arguments; bit 0x08 is set if the function uses the **keywords syntax to accept arbitrary keyword arguments; bit 0x20 is set if the function is a generator.

Future feature declarations ( from __future__ import division ) also use bits in co_flags to indicate whether a code object was compiled with a particular feature enabled: bit 0x2000 is set if the function was compiled with future division enabled; bits 0x10 and 0x1000 were used in earlier versions of Python.

Other bits in co_flags are reserved for internal use.

If a code object represents a function, the first item in co_consts is the documentation string of the function, or None if undefined.
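Some of these attributes and flag bits can be inspected on ordinary functions (f and g are invented for the example):

```python
def f(a, b=2):
    c = a + b
    return c

code = f.__code__
assert code.co_name == "f"
assert code.co_argcount == 2            # includes arguments with defaults
assert code.co_varnames == ("a", "b", "c")  # arguments first, then locals
assert f.__defaults__ == (2,)           # defaults live on the function
                                        # object, not the code object

def g(*args, **kw):
    yield args

# flag bits: 0x04 = *arguments, 0x08 = **keywords, 0x20 = generator
assert g.__code__.co_flags & 0x04
assert g.__code__.co_flags & 0x08
assert g.__code__.co_flags & 0x20
```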

Frame objects

Frame objects represent execution frames. They may occur in traceback objects (see below).

Special read-only attributes: f_back is to the previous stack frame (towards the caller), or None if this is the bottom stack frame; f_code is the code object being executed in this frame; f_locals is the dictionary used to look up local variables; f_globals is used for global variables; f_builtins is used for built-in (intrinsic) names; f_lasti gives the precise instruction (this is an index into the bytecode string of the code object).

Special writable attributes: f_trace , if not None , is a function called at the start of each source code line (this is used by the debugger); f_lineno is the current line number of the frame -- writing to this from within a trace function jumps to the given line (only for the bottom-most frame). A debugger can implement a Jump command (aka Set Next Statement) by writing to f_lineno.

Frame objects support one method:

frame.clear()
This method clears all references to local variables held by the frame. Also, if the frame belonged to a generator, the generator is finalized. This helps break reference cycles involving frame objects (for example when catching an exception and storing its traceback for later use).

RuntimeError is raised if the frame is currently executing.

New in version 3.4.
Traceback objects

Traceback objects represent a stack trace of an exception. A traceback object is created when an exception occurs. When the search for an exception handler unwinds the execution stack, at each unwound level a traceback object is inserted in front of the current traceback. When an exception handler is entered, the stack trace is made available to the program. (See section The try statement .) It is accessible as the third item of the tuple returned by sys.exc_info() . When the program contains no suitable handler, the stack trace is written (nicely formatted) to the standard error stream; if the interpreter is interactive, it is also made available to the user as sys.last_traceback .

Special read-only attributes: tb_next is the next level in the stack trace (towards the frame where the exception occurred), or None if there is no next level; tb_frame points to the execution frame of the current level; tb_lineno gives the line number where the exception occurred; tb_lasti indicates the precise instruction. The line number and last instruction in the traceback may differ from the line number of its frame object if the exception occurred in a try statement with no matching except clause or with a finally clause.

Slice objects

Slice objects are used to represent slices for __getitem__() methods. They are also created by the built-in slice() function.

Special read-only attributes: start is the lower bound; stop is the upper bound; step is the step value; each is None if omitted. These attributes can have any type.

Slice objects support one method:

slice.indices(self, length)
This method takes a single integer argument length and computes information about the slice that the slice object would describe if applied to a sequence of length items. It returns a tuple of three integers; respectively these are the start and stop indices and the step or stride length of the slice. Missing or out-of-bounds indices are handled in a manner consistent with regular slices.
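For example, clipping and normalization behave exactly as for regular slicing:

```python
s = slice(1, 10, 2)
# against a sequence of length 5 the out-of-bounds stop is clipped
assert s.indices(5) == (1, 5, 2)        # (start, stop, step)

# negative and missing indices are normalized the usual way
assert slice(-3, None).indices(5) == (2, 5, 1)
assert slice(None, None, -1).indices(5) == (4, -1, -1)
```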
Static method objects
Static method objects provide a way of defeating the transformation of function objects to method objects described above. A static method object is a wrapper around any other object, usually a user-defined method object. When a static method object is retrieved from a class or a class instance, the object actually returned is the wrapped object, which is not subject to any further transformation. Static method objects are not themselves callable, although the objects they wrap usually are. Static method objects are created by the built-in staticmethod() constructor.
Class method objects
A class method object, like a static method object, is a wrapper around another object that alters the way in which that object is retrieved from classes and class instances. The behaviour of class method objects upon such retrieval is described above, under "User-defined methods". Class method objects are created by the built-in classmethod() constructor.
3.3. Special method names

A class can implement certain operations that are invoked by special syntax (such as arithmetic operations or subscripting and slicing) by defining methods with special names. This is Python's approach to operator overloading , allowing classes to define their own behavior with respect to language operators. For instance, if a class defines a method named __getitem__() , and x is an instance of this class, then x[i] is roughly equivalent to type(x).__getitem__(x, i) . Except where mentioned, attempts to execute an operation raise an exception when no appropriate method is defined (typically AttributeError or TypeError ).

Setting a special method to None indicates that the corresponding operation is not available. For example, if a class sets __iter__() to None , the class is not iterable, so calling iter() on its instances will raise a TypeError (without falling back to __getitem__() ). [2]

When implementing a class that emulates any built-in type, it is important that the emulation only be implemented to the degree that it makes sense for the object being modelled. For example, some sequences may work well with retrieval of individual elements, but extracting a slice may not make sense. (One example of this is the NodeList interface in the W3C's Document Object Model.)

3.3.1. Basic customization
object.__new__(cls[, ...])

Called to create a new instance of class cls . __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. The remaining arguments are those passed to the object constructor expression (the call to the class). The return value of __new__() should be the new object instance (usually an instance of cls ).

Typical implementations create a new instance of the class by invoking the superclass's __new__() method using super().__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it.

If __new__() returns an instance of cls , then the new instance's __init__() method will be invoked like __init__(self[, ...]) , where self is the new instance and the remaining arguments are the same as were passed to __new__() .

If __new__() does not return an instance of cls , then the new instance's __init__() method will not be invoked.

__new__() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.

object.__init__(self[, ...])

Called after the instance has been created (by __new__() ), but before it is returned to the caller. The arguments are those passed to the class constructor expression. If a base class has an __init__() method, the derived class's __init__() method, if any, must explicitly call it to ensure proper initialization of the base class part of the instance; for example: super().__init__([args...]) .

Because __new__() and __init__() work together in constructing objects ( __new__() to create it, and __init__() to customize it), no non- None value may be returned by __init__() ; doing so will cause a TypeError to be raised at runtime.
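The classic use case is subclassing an immutable type, where the value must be fixed in __new__() because the object can no longer change by the time __init__() runs (the Interval class is invented for the example):

```python
class Interval(tuple):
    def __new__(cls, lo, hi):
        # the tuple's contents must be chosen here, at creation time
        return super().__new__(cls, (lo, hi))

    def __init__(self, lo, hi):
        # called after __new__ with the same arguments; only
        # customization is possible now, not re-creation
        self.width = hi - lo

iv = Interval(2, 5)
assert iv == (2, 5)
assert iv.width == 3
assert isinstance(iv, tuple)
```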

object.__del__(self)

Called when the instance is about to be destroyed. This is also called a destructor. If a base class has a __del__() method, the derived class's __del__() method, if any, must explicitly call it to ensure proper deletion of the base class part of the instance. Note that it is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. It may then be called at a later time when this new reference is deleted. It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.


del x doesn't directly call x.__del__() -- the former decrements the reference count for x by one, and the latter is only called when x's reference count reaches zero. Some common situations that may prevent the reference count of an object from going to zero include: circular references between objects (e.g., a doubly-linked list or a tree data structure with parent and child pointers); a reference to the object on the stack frame of a function that caught an exception (the traceback stored in sys.exc_info()[2] keeps the stack frame alive); or a reference to the object on the stack frame that raised an unhandled exception in interactive mode (the traceback stored in sys.last_traceback keeps the stack frame alive). The first situation can only be remedied by explicitly breaking the cycles; the second can be resolved by freeing the reference to the traceback object when it is no longer useful, and the third can be resolved by storing None in sys.last_traceback . Circular references which are garbage are detected and cleaned up when the cyclic garbage collector is enabled (it's on by default). Refer to the documentation for the gc module for more information about this topic.


Due to the precarious circumstances under which __del__() methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to sys.stderr instead. Also, when __del__() is invoked in response to a module being deleted (e.g., when execution of the program is done), other globals referenced by the __del__() method may already have been deleted or in the process of being torn down (e.g. the import machinery shutting down). For this reason, __del__() methods should do the absolute minimum needed to maintain external invariants. Starting with version 1.5, Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the __del__() method is called.

object.__repr__(self)
Called by the repr() built-in function to compute the "official" string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form <...some useful description...> should be returned. The return value must be a string object. If a class defines __repr__() but not __str__() , then __repr__() is also used when an "informal" string representation of instances of that class is required.

This is typically used for debugging, so it is important that the representation is information-rich and unambiguous.

object.__str__(self)
Called by str(object) and the built-in functions format() and print() to compute the "informal" or nicely printable string representation of an object. The return value must be a string object.

This method differs from object.__repr__() in that there is no expectation that __str__() return a valid Python expression: a more convenient or concise representation can be used.

The default implementation defined by the built-in type object calls object.__repr__() .
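A minimal sketch of the distinction (the Point class is hypothetical): __repr__() aims at an unambiguous, ideally eval()-able form, while __str__() aims at readability.

```python
class Point:
    """Hypothetical class illustrating __repr__ vs __str__."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # Unambiguous; looks like the expression that recreates the object
        return f"Point({self.x!r}, {self.y!r})"

    def __str__(self):
        # Friendly form used by print() and str()
        return f"({self.x}, {self.y})"

p = Point(1, 2)
print(repr(p))  # Point(1, 2)
print(str(p))   # (1, 2)
```

Note that if Point defined only __repr__(), str(p) would fall back to it, per the rule above.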

object.__bytes__(self)

Called by bytes to compute a byte-string representation of an object. This should return a bytes object.

object.__format__(self, format_spec)
Called by the format() built-in function, and by extension, evaluation of formatted string literals and the str.format() method, to produce a "formatted" string representation of an object. The format_spec argument is a string that contains a description of the formatting options desired. The interpretation of the format_spec argument is up to the type implementing __format__() , however most classes will either delegate formatting to one of the built-in types, or use a similar formatting option syntax.

See Format Specification Mini-Language for a description of the standard formatting syntax.

The return value must be a string object.

Changed in version 3.4: The __format__ method of object itself raises a TypeError if passed any non-empty string.
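A common pattern is to delegate most of the spec to a built-in type and layer a custom convention on top. The Celsius class and its "u" spec suffix are invented for illustration:

```python
class Celsius:
    """Hypothetical wrapper showing how __format__ consumes a format spec."""
    def __init__(self, degrees):
        self.degrees = degrees

    def __format__(self, format_spec):
        # Custom 'u' suffix appends a unit; everything else is
        # delegated to float's own formatting.
        if format_spec.endswith("u"):
            return format(self.degrees, format_spec[:-1] + "f") + "°C"
        return format(self.degrees, format_spec)

t = Celsius(21.456)
print(f"{t:.1f}")   # 21.5
print(f"{t:.1u}")   # 21.5°C
```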
object.__lt__(self, other)
object.__le__(self, other)
object.__eq__(self, other)
object.__ne__(self, other)
object.__gt__(self, other)
object.__ge__(self, other)

These are the so-called "rich comparison" methods. The correspondence between operator symbols and method names is as follows: x<y calls x.__lt__(y) , x<=y calls x.__le__(y) , x==y calls x.__eq__(y) , x!=y calls x.__ne__(y) , x>y calls x.__gt__(y) , and x>=y calls x.__ge__(y) .

A rich comparison method may return the singleton NotImplemented if it does not implement the operation for a given pair of arguments. By convention, False and True are returned for a successful comparison. However, these methods can return any value, so if the comparison operator is used in a Boolean context (e.g., in the condition of an if statement), Python will call bool() on the value to determine if the result is true or false.

By default, __ne__() delegates to __eq__() and inverts the result unless it is NotImplemented . There are no other implied relationships among the comparison operators, for example, the truth of (x<y or x==y) does not imply x<=y . To automatically generate ordering operations from a single root operation, see functools.total_ordering() .

See the paragraph on __hash__() for some important notes on creating hashable objects which support custom comparison operations and are usable as dictionary keys.

There are no swapped-argument versions of these methods (to be used when the left argument does not support the operation but the right argument does); rather, __lt__() and __gt__() are each other's reflection, __le__() and __ge__() are each other's reflection, and __eq__() and __ne__() are their own reflection. If the operands are of different types, and right operand's type is a direct or indirect subclass of the left operand's type, the reflected method of the right operand has priority, otherwise the left operand's method has priority. Virtual subclassing is not considered.
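The functools.total_ordering() shortcut mentioned above lets a class define only __eq__() and one ordering method; the decorator supplies the rest. A sketch with a hypothetical Version class:

```python
import functools

@functools.total_ordering
class Version:
    """Define __eq__ and __lt__; total_ordering derives <=, >, >=."""
    def __init__(self, major, minor):
        self.key = (major, minor)

    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented   # let the other operand try
        return self.key == other.key

    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.key < other.key

assert Version(1, 2) < Version(1, 10)
assert Version(2, 0) >= Version(1, 9)   # supplied by total_ordering
```

Returning NotImplemented for foreign types (rather than raising) keeps the reflected-method machinery described above working.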

object.__hash__(self)

Called by built-in function hash() and for operations on members of hashed collections including set , frozenset , and dict . __hash__() should return an integer. The only required property is that objects which compare equal have the same hash value; it is advised to mix together the hash values of the components of the object that also play a part in comparison of objects by packing them into a tuple and hashing the tuple. Example:

def __hash__(self):
    return hash((self.name, self.nick, self.color))


hash() truncates the value returned from an object's custom __hash__() method to the size of a Py_ssize_t . This is typically 8 bytes on 64-bit builds and 4 bytes on 32-bit builds. If an object's __hash__() must interoperate on builds of different bit sizes, be sure to check the width on all supported builds. An easy way to do this is with python -c "import sys; print(sys.hash_info.width)" .

If a class does not define an __eq__() method it should not define a __hash__() operation either; if it defines __eq__() but not __hash__() , its instances will not be usable as items in hashable collections. If a class defines mutable objects and implements an __eq__() method, it should not implement __hash__() , since the implementation of hashable collections requires that a key's hash value is immutable (if the object's hash value changes, it will be in the wrong hash bucket).

User-defined classes have __eq__() and __hash__() methods by default; with them, all objects compare unequal (except with themselves) and x.__hash__() returns an appropriate value such that x == y implies both that x is y and hash(x) == hash(y) .

A class that overrides __eq__() and does not define __hash__() will have its __hash__() implicitly set to None . When the __hash__() method of a class is None , instances of the class will raise an appropriate TypeError when a program attempts to retrieve their hash value, and will also be correctly identified as unhashable when checking isinstance(obj, collections.Hashable) .
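This implicit __hash__ = None behavior is easy to observe (the Anonymous class is hypothetical):

```python
class Anonymous:
    """Defining __eq__ without __hash__ makes instances unhashable."""
    def __eq__(self, other):
        return isinstance(other, Anonymous)

assert Anonymous.__hash__ is None   # set implicitly by the class machinery

try:
    hash(Anonymous())
except TypeError as exc:
    print("unhashable:", exc)
```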

If a class that overrides __eq__() needs to retain the implementation of __hash__() from a parent class, the interpreter must be told this explicitly by setting __hash__ = <ParentClass>.__hash__ .

If a class that does not override __eq__() wishes to suppress hash support, it should include __hash__ = None in the class definition. A class which defines its own __hash__() that explicitly raises a TypeError would be incorrectly identified as hashable by an isinstance(obj, collections.Hashable) call.


By default, the __hash__() values of str, bytes and datetime objects are "salted" with an unpredictable random value. Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.

This is intended to provide protection against a denial-of-service caused by carefully-chosen inputs that exploit the worst case performance of dict insertion, O(n^2) complexity. See oCERT advisory 2011-003 for details.

Changing hash values affects the iteration order of dicts, sets and other mappings. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).

See also PYTHONHASHSEED . Changed in version 3.3: Hash randomization is enabled by default.

object.__bool__(self)

Called to implement truth value testing and the built-in operation bool() ; should return False or True . When this method is not defined, __len__() is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither __len__() nor __bool__() , all its instances are considered true.
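The __len__() fallback described above can be sketched with a hypothetical Basket class that defines no __bool__() at all:

```python
class Basket:
    """Truth testing falls back to __len__ when __bool__ is absent."""
    def __init__(self, items):
        self.items = list(items)

    def __len__(self):
        return len(self.items)

assert not Basket([])       # len() == 0 -> considered false
assert Basket(["apple"])    # len() > 0  -> considered true
```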

3.3.2. Customizing attribute access

The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of x.name ) for class instances.

object.__getattr__(self, name)
Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self ). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception.

Note that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__() .) This is done both for efficiency reasons and because otherwise __getattr__() would have no way to access other attributes of the instance. Note that at least for instance variables, you can fake total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the __getattribute__() method below for a way to actually get total control over attribute access.
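Because __getattr__() fires only on failed lookups, it is a natural hook for delegation. A minimal sketch with a hypothetical Proxy class:

```python
class Proxy:
    """Unknown attributes are fetched from a wrapped object."""
    def __init__(self, target):
        self._target = target   # found normally, so __getattr__ never fires for it

    def __getattr__(self, name):
        # Called only when normal lookup fails on the Proxy itself
        return getattr(self._target, name)

p = Proxy([1, 2, 3])
print(p.count(2))   # 1 -> delegated to the wrapped list
```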

object.__getattribute__(self, name)
Called unconditionally to implement attribute accesses for instances of the class. If the class also defines __getattr__() , the latter will not be called unless __getattribute__() either calls it explicitly or raises an AttributeError . This method should return the (computed) attribute value or raise an AttributeError exception. In order to avoid infinite recursion in this method, its implementation should always call the base class method with the same name to access any attributes it needs, for example, object.__getattribute__(self, name) .


This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup .

object.__setattr__(self, name, value)
Called when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it.

If __setattr__() wants to assign to an instance attribute, it should call the base class method with the same name, for example, object.__setattr__(self, name, value) .

object.__delattr__(self, name)
Like __setattr__() but for attribute deletion instead of assignment. This should only be implemented if del obj.name is meaningful for the object.
object.__dir__(self)
Called when dir() is called on the object. A sequence must be returned. dir() converts the returned sequence to a list and sorts it.

Implementing Descriptors

The following methods only apply when an instance of the class containing the method (a so-called descriptor class) appears in an owner class (the descriptor must be in either the owner's class dictionary or in the class dictionary for one of its parents). In the examples below, "the attribute" refers to the attribute whose name is the key of the property in the owner class' __dict__ .

object.__get__(self, instance, owner)
Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). owner is always the owner class, while instance is the instance that the attribute was accessed through, or None when the attribute is accessed through the owner . This method should return the (computed) attribute value or raise an AttributeError exception.
object.__set__(self, instance, value)
Called to set the attribute on an instance instance of the owner class to a new value, value .
object.__delete__(self, instance)
Called to delete the attribute on an instance instance of the owner class.
object.__set_name__(self, owner, name)
Called at the time the owning class owner is created. The descriptor has been assigned to name . New in version 3.6.
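The four methods above can be seen together in a small data descriptor. The Positive and Order classes are hypothetical; the descriptor validates assignments and uses __set_name__() to learn its own attribute name:

```python
class Positive:
    """Hypothetical data descriptor that validates assignments."""
    def __set_name__(self, owner, name):
        self.name = name            # attribute name in the owner class

    def __get__(self, instance, owner):
        if instance is None:
            return self             # class-level access returns the descriptor
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError(f"{self.name} must be positive")
        instance.__dict__[self.name] = value

class Order:
    quantity = Positive()           # descriptor lives in the class dictionary

o = Order()
o.quantity = 5
print(o.quantity)   # 5
```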

The attribute __objclass__ is interpreted by the inspect module as specifying the class where this object was defined (setting this appropriately can assist in runtime introspection of dynamic class attributes). For callables, it may indicate that an instance of the given type (or a subclass) is expected or required as the first positional argument (for example, CPython sets this attribute for unbound methods that are implemented in C).

Invoking Descriptors

In general, a descriptor is an object attribute with "binding behavior", one whose attribute access has been overridden by methods in the descriptor protocol: __get__() , __set__() , and __delete__() . If any of those methods are defined for an object, it is said to be a descriptor.

The default behavior for attribute access is to get, set, or delete the attribute from an object's dictionary. For instance, a.x has a lookup chain starting with a.__dict__['x'] , then type(a).__dict__['x'] , and continuing through the base classes of type(a) excluding metaclasses.

However, if the looked-up value is an object defining one of the descriptor methods, then Python may override the default behavior and invoke the descriptor method instead. Where this occurs in the precedence chain depends on which descriptor methods were defined and how they were called.

The starting point for descriptor invocation is a binding, a.x . How the arguments are assembled depends on a :

Direct Call
The simplest and least common call is when user code directly invokes a descriptor method: x.__get__(a) .
Instance Binding
If binding to an object instance, a.x is transformed into the call: type(a).__dict__['x'].__get__(a, type(a)) .
Class Binding
If binding to a class, A.x is transformed into the call: A.__dict__['x'].__get__(None, A) .
Super Binding
If a is an instance of super , then the binding super(B, obj).m() searches obj.__class__.__mro__ for the base class A immediately preceding B and then invokes the descriptor with the call: A.__dict__['m'].__get__(obj, obj.__class__) .

For instance bindings, the precedence of descriptor invocation depends on which descriptor methods are defined. A descriptor can define any combination of __get__() , __set__() and __delete__() . If it does not define __get__() , then accessing the attribute will return the descriptor object itself unless there is a value in the object's instance dictionary. If the descriptor defines __set__() and/or __delete__() , it is a data descriptor; if it defines neither, it is a non-data descriptor. Normally, data descriptors define both __get__() and __set__() , while non-data descriptors have just the __get__() method. Data descriptors with __set__() and __get__() defined always override a redefinition in an instance dictionary. In contrast, non-data descriptors can be overridden by instances.

Python methods (including staticmethod() and classmethod() ) are implemented as non-data descriptors. Accordingly, instances can redefine and override methods. This allows individual instances to acquire behaviors that differ from other instances of the same class.
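This is easy to demonstrate: since plain functions are non-data descriptors, an entry in the instance dictionary shadows a method defined on the class. The Greeter class is hypothetical:

```python
class Greeter:
    def hello(self):
        return "hello from the class"

g = Greeter()
print(g.hello())    # method found via the class (a non-data descriptor)

# An instance attribute of the same name wins over a non-data descriptor
g.hello = lambda: "hello from the instance"
print(g.hello())    # instance dict shadows the class method
```

By contrast, assigning through a property() (a data descriptor) would raise an AttributeError instead of shadowing it, unless the property defines a setter.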

The property() function is implemented as a data descriptor. Accordingly, instances cannot override the behavior of a property.

__slots__

By default, instances of classes have a dictionary for attribute storage. This wastes space for objects having very few instance variables. The space consumption can become acute when creating large numbers of instances.

The default can be overridden by defining __slots__ in a class definition. The __slots__ declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because __dict__ is not created for each instance.

object.__slots__
This class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. __slots__ reserves space for the declared variables and prevents the automatic creation of __dict__ and __weakref__ for each instance.

Notes on using __slots__
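A minimal sketch of the effect (the PointS class is hypothetical): with __slots__ there is no per-instance __dict__, and attributes outside the declared set are rejected.

```python
class PointS:
    """__slots__ suppresses the per-instance __dict__."""
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x, self.y = x, y

p = PointS(1, 2)
assert not hasattr(p, "__dict__")   # no instance dictionary is created

try:
    p.z = 3                          # 'z' is not in __slots__
except AttributeError as exc:
    print("rejected:", exc)
```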
3.3.3. Customizing class creation

Whenever a class inherits from another class, __init_subclass__ is called on that class. This way, it is possible to write classes which change the behavior of subclasses. This is closely related to class decorators, but where class decorators only affect the specific class they're applied to, __init_subclass__ solely applies to future subclasses of the class defining the method.

classmethod object.__init_subclass__(cls)
This method is called whenever the containing class is subclassed. cls is then the new subclass. If defined as a normal instance method, this method is implicitly converted to a class method.

Keyword arguments which are given to a new class are passed to the parent class's __init_subclass__ . For compatibility with other classes using __init_subclass__ , one should take out the needed keyword arguments and pass the others over to the base class, as in:

class Philosopher:
    def __init_subclass__(cls, default_name, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.default_name = default_name

class AustralianPhilosopher(Philosopher, default_name="Bruce"):
    pass

The default implementation object.__init_subclass__ does nothing, but raises an error if it is called with any arguments.


The metaclass hint metaclass is consumed by the rest of the type machinery, and is never passed to __init_subclass__ implementations. The actual metaclass (rather than the explicit hint) can be accessed as type(cls) . New in version 3.6.

Metaclasses

By default, classes are constructed using type() . The class body is executed in a new namespace and the class name is bound locally to the result of type(name, bases, namespace) .

The class creation process can be customized by passing the metaclass keyword argument in the class definition line, or by inheriting from an existing class that included such an argument. In the following example, both MyClass and MySubclass are instances of Meta :

class Meta(type):
    pass

class MyClass(metaclass=Meta):
    pass

class MySubclass(MyClass):
    pass

Any other keyword arguments that are specified in the class definition are passed through to all metaclass operations described below.

When a class definition is executed, the following steps occur: the appropriate metaclass is determined; the class namespace is prepared; the class body is executed; and the class object is created.

Determining the appropriate metaclass

The appropriate metaclass for a class definition is determined as follows:

The most derived metaclass is selected from the explicitly specified metaclass (if any) and the metaclasses (i.e. type(cls) ) of all specified base classes. The most derived metaclass is one which is a subtype of all of these candidate metaclasses. If none of the candidate metaclasses meets that criterion, then the class definition will fail with TypeError .

Preparing the class namespace

Once the appropriate metaclass has been identified, then the class namespace is prepared. If the metaclass has a __prepare__ attribute, it is called as namespace = metaclass.__prepare__(name, bases, **kwds) (where the additional keyword arguments, if any, come from the class definition).

If the metaclass has no __prepare__ attribute, then the class namespace is initialised as an empty ordered mapping.

See also

PEP 3115 - Metaclasses in Python 3000
Introduced the __prepare__ namespace hook

Executing the class body

The class body is executed (approximately) as exec(body, globals(), namespace) . The key difference from a normal call to exec() is that lexical scoping allows the class body (including any methods) to reference names from the current and outer scopes when the class definition occurs inside a function.

However, even when the class definition occurs inside the function, methods defined inside the class still cannot see names defined at the class scope. Class variables must be accessed through the first parameter of instance or class methods, or through the implicit lexically scoped __class__ reference described in the next section.

Creating the class object

Once the class namespace has been populated by executing the class body, the class object is created by calling metaclass(name, bases, namespace, **kwds) (the additional keywords passed here are the same as those passed to __prepare__ ).

This class object is the one that will be referenced by the zero-argument form of super() . __class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super . This allows the zero argument form of super() to correctly identify the class being defined based on lexical scoping, while the class or instance that was used to make the current call is identified based on the first argument passed to the method.

CPython implementation detail: In CPython 3.6 and later, the __class__ cell is passed to the metaclass as a __classcell__ entry in the class namespace. If present, this must be propagated up to the type.__new__ call in order for the class to be initialised correctly. Failing to do so will result in a DeprecationWarning in Python 3.6, and a RuntimeWarning in the future.

When using the default metaclass type , or any metaclass that ultimately calls type.__new__ , the following additional customisation steps are invoked after creating the class object:

After the class object is created, it is passed to the class decorators included in the class definition (if any) and the resulting object is bound in the local namespace as the defined class.

When a new class is created by type.__new__ , the object provided as the namespace parameter is copied to a new ordered mapping and the original object is discarded. The new copy is wrapped in a read-only proxy, which becomes the __dict__ attribute of the class object.

See also

PEP 3135 - New super
Describes the implicit __class__ closure reference

Metaclass example

The potential uses for metaclasses are boundless. Some ideas that have been explored include enum, logging, interface checking, automatic delegation, automatic property creation, proxies, frameworks, and automatic resource locking/synchronization.

Here is an example of a metaclass that uses a collections.OrderedDict to remember the order that class variables are defined:

import collections

class OrderedClass(type):

    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(cls, name, bases, namespace, **kwds):
        result = type.__new__(cls, name, bases, dict(namespace))
        result.members = tuple(namespace)
        return result

class A(metaclass=OrderedClass):
    def one(self): pass
    def two(self): pass
    def three(self): pass
    def four(self): pass

>>> A.members
('__module__', 'one', 'two', 'three', 'four')

When the class definition for A gets executed, the process begins with calling the metaclass's __prepare__() method which returns an empty collections.OrderedDict . That mapping records the methods and attributes of A as they are defined within the body of the class statement. Once those definitions are executed, the ordered dictionary is fully populated and the metaclass's __new__() method gets invoked. That method builds the new type and it saves the ordered dictionary keys in an attribute called members .

3.3.4. Customizing instance and subclass checks

The following methods are used to override the default behavior of the isinstance() and issubclass() built-in functions.

In particular, the metaclass abc.ABCMeta implements these methods in order to allow the addition of Abstract Base Classes (ABCs) as "virtual base classes" to any class or type (including built-in types), including other ABCs.

class.__instancecheck__(self, instance)
Return true if instance should be considered a (direct or indirect) instance of class . If defined, called to implement isinstance(instance, class) .
class.__subclasscheck__(self, subclass)
Return true if subclass should be considered a (direct or indirect) subclass of class . If defined, called to implement issubclass(subclass, class) .

Note that these methods are looked up on the type (metaclass) of a class. They cannot be defined as class methods in the actual class. This is consistent with the lookup of special methods that are called on instances, only in this case the instance is itself a class.
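Because these methods live on the metaclass, a sketch needs a type subclass. The SizedMeta and Sized names are hypothetical; here isinstance() becomes a duck-typed check:

```python
class SizedMeta(type):
    """Hypothetical metaclass: duck-typed isinstance() check."""
    def __instancecheck__(cls, instance):
        # Anything with __len__ counts as an instance of Sized
        return hasattr(instance, "__len__")

class Sized(metaclass=SizedMeta):
    pass

assert isinstance([1, 2], Sized)    # lists define __len__
assert not isinstance(42, Sized)    # ints do not
```

The abc.ABCMeta machinery mentioned above uses this same hook, with registration instead of attribute probing.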

See also

PEP 3119 - Introducing Abstract Base Classes
Includes the specification for customizing isinstance() and issubclass() behavior through __instancecheck__() and __subclasscheck__() , with motivation for this functionality in the context of adding Abstract Base Classes (see the abc module) to the language.
3.3.5. Emulating callable objects
object.__call__(self[, args...])

Called when the instance is "called" as a function; if this method is defined, x(arg1, arg2, ...) is a shorthand for x.__call__(arg1, arg2, ...) .
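A minimal sketch (the Adder class is hypothetical): defining __call__ makes instances usable wherever a function is expected, while still carrying state.

```python
class Adder:
    """Instances behave like functions via __call__."""
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        return x + self.n

add5 = Adder(5)
assert add5(10) == 15       # same as add5.__call__(10)
assert callable(add5)
```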

3.3.6. Emulating container types

The following methods can be defined to implement container objects. Containers usually are sequences (such as lists or tuples) or mappings (like dictionaries), but can represent other containers as well. The first set of methods is used either to emulate a sequence or to emulate a mapping; the difference is that for a sequence, the allowable keys should be the integers k for which 0 <= k < N where N is the length of the sequence, or slice objects, which define a range of items.

It is also recommended that mappings provide the methods keys() , values() , items() , get() , clear() , setdefault() , pop() , popitem() , copy() , and update() behaving similar to those for Python's standard dictionary objects. The collections module provides a MutableMapping abstract base class to help create those methods from a base set of __getitem__() , __setitem__() , __delitem__() , and keys() .

Mutable sequences should provide methods append() , count() , index() , extend() , insert() , pop() , remove() , reverse() and sort() , like Python standard list objects. Finally, sequence types should implement addition (meaning concatenation) and multiplication (meaning repetition) by defining the methods __add__() , __radd__() , __iadd__() , __mul__() , __rmul__() and __imul__() described below; they should not define other numerical operators.

It is recommended that both mappings and sequences implement the __contains__() method to allow efficient use of the in operator; for mappings, in should search the mapping's keys; for sequences, it should search through the values. It is further recommended that both mappings and sequences implement the __iter__() method to allow efficient iteration through the container; for mappings, __iter__() should be the same as keys() ; for sequences, it should iterate through the values.

object.__len__(self)

Called to implement the built-in function len() . Should return the length of the object, an integer >= 0. Also, an object that doesn't define a __bool__() method and whose __len__() method returns zero is considered to be false in a Boolean context.

CPython implementation detail: In CPython, the length is required to be at most sys.maxsize . If the length is larger than sys.maxsize some features (such as len() ) may raise OverflowError . To prevent raising OverflowError by truth value testing, an object must define a __bool__() method.
object.__length_hint__(self)
Called to implement operator.length_hint() . Should return an estimated length for the object (which may be greater or less than the actual length). The length must be an integer >= 0. This method is purely an optimization and is never required for correctness. New in version 3.4.


Slicing is done exclusively with the following three methods. A call like

a[1:2] = b

is translated to

a[slice(1, 2, None)] = b

and so forth. Missing slice items are always filled in with None .
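This translation is easy to observe with a hypothetical Echo class whose __getitem__ simply returns the key it receives:

```python
class Echo:
    """Shows the key that __getitem__ actually receives."""
    def __getitem__(self, key):
        return key

e = Echo()
assert e[3] == 3
assert e[1:2] == slice(1, 2, None)          # a[1:2] passes a slice object
assert e[1:10:2].indices(6) == (1, 6, 2)    # clamp a slice to length 6
```

The slice.indices() method shown at the end is the usual way a sequence of known length normalizes a slice before use.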

object.__getitem__(self, key)

Called to implement evaluation of self[key] . For sequence types, the accepted keys should be integers and slice objects. Note that the special interpretation of negative indexes (if the class wishes to emulate a sequence type) is up to the __getitem__() method. If key is of an inappropriate type, TypeError may be raised; if of a value outside the set of indexes for the sequence (after any special interpretation of negative values), IndexError should be raised. For mapping types, if key is missing (not in the container), KeyError should be raised.


for loops expect that an IndexError will be raised for illegal indexes to allow proper detection of the end of the sequence.

object.__missing__(self, key)
Called by dict.__getitem__() to implement self[key] for dict subclasses when key is not in the dictionary.
object.__setitem__(self, key, value)
Called to implement assignment to self[key] . Same note as for __getitem__() . This should only be implemented for mappings if the objects support changes to the values for keys, or if new keys can be added, or for sequences if elements can be replaced. The same exceptions should be raised for improper key values as for the __getitem__() method.
object.__delitem__(self, key)
Called to implement deletion of self[key] . Same note as for __getitem__() . This should only be implemented for mappings if the objects support removal of keys, or for sequences if elements can be removed from the sequence. The same exceptions should be raised for improper key values as for the __getitem__() method.
object.__iter__(self)
This method is called when an iterator is required for a container. This method should return a new iterator object that can iterate over all the objects in the container. For mappings, it should iterate over the keys of the container.

Iterator objects also need to implement this method; they are required to return themselves. For more information on iterator objects, see Iterator Types .

object.__reversed__(self)
Called (if present) by the reversed() built-in to implement reverse iteration. It should return a new iterator object that iterates over all the objects in the container in reverse order.

If the __reversed__() method is not provided, the reversed() built-in will fall back to using the sequence protocol ( __len__() and __getitem__() ). Objects that support the sequence protocol should only provide __reversed__() if they can provide an implementation that is more efficient than the one provided by reversed() .

The membership test operators ( in and not in ) are normally implemented as an iteration through a sequence. However, container objects can supply the following special method with a more efficient implementation, which also does not require the object be a sequence.

object.__contains__(self, item)
Called to implement membership test operators. Should return true if item is in self , false otherwise. For mapping objects, this should consider the keys of the mapping rather than the values or the key-item pairs.

For objects that don't define __contains__() , the membership test first tries iteration via __iter__() , then the old sequence iteration protocol via __getitem__() , see this section in the language reference .
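The fallbacks described in this section mean a minimal sequence needs surprisingly little. A sketch with a hypothetical Deck class: __len__() and __getitem__() alone are enough for iteration, in tests, and reversed().

```python
class Deck:
    """Minimal sequence: __len__ and __getitem__ give iteration,
    'in' membership tests and reversed() for free."""
    def __init__(self, cards):
        self._cards = list(cards)

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, index):
        return self._cards[index]   # handles ints and slices alike

d = Deck(["A", "K", "Q"])
assert len(d) == 3
assert "K" in d                         # falls back to __getitem__ iteration
assert list(reversed(d)) == ["Q", "K", "A"]
```

Defining __iter__() and __contains__() explicitly would still be worthwhile for efficiency, per the recommendations above.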

3.3.7. Emulating numeric types

The following methods can be defined to emulate numeric objects. Methods corresponding to operations that are not supported by the particular kind of number implemented (e.g., bitwise operations for non-integral numbers) should be left undefined.

object.__add__(self, other)
object.__sub__(self, other)
object.__mul__(self, other)
object.__matmul__(self, other)
object.__truediv__(self, other)
object.__floordiv__(self, other)
object.__mod__(self, other)
object.__divmod__(self, other)
object.__pow__(self, other[, modulo])
object.__lshift__(self, other)
object.__rshift__(self, other)
object.__and__(self, other)
object.__xor__(self, other)
object.__or__(self, other)

These methods are called to implement the binary arithmetic operations ( + , - , * , @ , / , // , % , divmod() , pow() , ** , << , >> , & , ^ , | ). For instance, to evaluate the expression x + y , where x is an instance of a class that has an __add__() method, x.__add__(y) is called. The __divmod__() method should be the equivalent to using __floordiv__() and __mod__() ; it should not be related to __truediv__() . Note that __pow__() should be defined to accept an optional third argument if the ternary version of the built-in pow() function is to be supported.

If one of those methods does not support the operation with the supplied arguments, it should return NotImplemented .

object.__radd__(self, other)
object.__rsub__(self, other)
object.__rmul__(self, other)
object.__rmatmul__(self, other)
object.__rtruediv__(self, other)
object.__rfloordiv__(self, other)
object.__rmod__(self, other)
object.__rdivmod__(self, other)
object.__rpow__(self, other)
object.__rlshift__(self, other)
object.__rrshift__(self, other)
object.__rand__(self, other)
object.__rxor__(self, other)
object.__ror__(self, other)

These methods are called to implement the binary arithmetic operations ( + , - , * , @ , / , // , % , divmod() , pow() , ** , << , >> , & , ^ , | ) with reflected (swapped) operands. These functions are only called if the left operand does not support the corresponding operation [3] and the operands are of different types. [4] For instance, to evaluate the expression x - y , where y is an instance of a class that has an __rsub__() method, y.__rsub__(x) is called if x.__sub__(y) returns NotImplemented .

Note that ternary pow() will not try calling __rpow__() (the coercion rules would become too complicated).


If the right operand's type is a subclass of the left operand's type and that subclass provides the reflected method for the operation, this method will be called before the left operand's non-reflected method. This behavior allows subclasses to override their ancestors' operations.
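A sketch of how a reflected method catches the swapped-operand case, using an invented Celsius class:

```python
# Celsius is an invented example class. For `5 + Celsius(20)`,
# int.__add__ returns NotImplemented for a Celsius operand, so Python
# falls back to Celsius.__radd__.
class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

    def __add__(self, other):
        if isinstance(other, (int, float)):
            return Celsius(self.degrees + other)
        return NotImplemented

    def __radd__(self, other):
        # Addition of a number is commutative here, so delegate.
        return self.__add__(other)

print((Celsius(20) + 5).degrees)  # 25
print((5 + Celsius(20)).degrees)  # 25
```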

object.__iadd__(self, other)
object.__isub__(self, other)
object.__imul__(self, other)
object.__imatmul__(self, other)
object.__itruediv__(self, other)
object.__ifloordiv__(self, other)
object.__imod__(self, other)
object.__ipow__(self, other[, modulo])
object.__ilshift__(self, other)
object.__irshift__(self, other)
object.__iand__(self, other)
object.__ixor__(self, other)
object.__ior__(self, other)
These methods are called to implement the augmented arithmetic assignments ( += , -= , *= , @= , /= , //= , %= , **= , <<= , >>= , &= , ^= , |= ). These methods should attempt to do the operation in-place (modifying self ) and return the result (which could be, but does not have to be, self ). If a specific method is not defined, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an __iadd__() method, x += y is equivalent to x = x.__iadd__(y) . Otherwise, x.__add__(y) and y.__radd__(x) are considered, as with the evaluation of x + y . In certain situations, augmented assignment can result in unexpected errors (see Why does a_tuple[i] += ['item'] raise an exception when the addition works? ), but this behavior is in fact part of the data model.
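As a sketch with an invented Counter class: an in-place method mutates self and returns it, so augmented assignment rebinds the name to the very same object:

```python
# Counter is an invented example class whose __iadd__ modifies the
# object in place and returns self, as recommended above.
class Counter:
    def __init__(self):
        self.total = 0

    def __iadd__(self, n):
        self.total += n  # modify in place
        return self      # return the result (here, self)

c = Counter()
before = id(c)
c += 10
c += 5
print(c.total)          # 15
print(id(c) == before)  # True -- the same object was updated in place
```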
object.__neg__(self)
object.__pos__(self)
object.__abs__(self)
object.__invert__(self)

Called to implement the unary arithmetic operations ( - , + , abs() and ~ ).

object.__complex__(self)
object.__int__(self)
object.__float__(self)
object.__round__(self[, n])

Called to implement the built-in functions complex() , int() , float() and round() . Should return a value of the appropriate type.

object.__index__(self)
Called to implement operator.index() , and whenever Python needs to losslessly convert the numeric object to an integer object (such as in slicing, or in the built-in bin() , hex() and oct() functions). Presence of this method indicates that the numeric object is an integer type. Must return an integer.


In order to have a coherent integer type class, when __index__() is defined __int__() should also be defined, and both should return the same value.
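A sketch of __index__ with an invented Nibble class: defining it lets the object be used anywhere Python needs a genuine integer, such as slicing or bin():

```python
# Nibble is an invented example class holding a four-bit value.
class Nibble:
    def __init__(self, value):
        self.value = value & 0xF  # keep only the low four bits

    def __index__(self):
        return self.value

    # Per the note above, __int__ should agree with __index__:
    __int__ = __index__

n = Nibble(10)
print(bin(n))                # 0b1010 -- bin() uses __index__
print("abcdef"[:Nibble(3)])  # abc    -- so does slicing
```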

3.3.8. With Statement Context Managers

A context manager is an object that defines the runtime context to be established when executing a with statement. The context manager handles the entry into, and the exit from, the desired runtime context for the execution of the block of code. Context managers are normally invoked using the with statement (described in section The with statement ), but can also be used by directly invoking their methods.

Typical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc.

For more information on context managers, see Context Manager Types .

object.__enter__(self)
Enter the runtime context related to this object. The with statement will bind this method's return value to the target(s) specified in the as clause of the statement, if any.
object.__exit__(self, exc_type, exc_value, traceback)
Exit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be None .

If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent it from being propagated), it should return a true value. Otherwise, the exception will be processed normally upon exit from this method.

Note that __exit__() methods should not reraise the passed-in exception; this is the caller's responsibility.
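Putting __enter__ and __exit__ together, here is a minimal sketch of a hand-written context manager (a timer; the class name is invented for this example):

```python
import time

# Timer is an invented example: it records elapsed wall-clock time for
# the body of the with block.
class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self  # bound to the `as` target, if any

    def __exit__(self, exc_type, exc_value, traceback):
        self.elapsed = time.perf_counter() - self.start
        return False  # falsy: do not suppress any exception

with Timer() as t:
    sum(range(100_000))

print("took %.6f seconds" % t.elapsed)
```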

See also

PEP 343 - The "with" statement
The specification, background, and examples for the Python with statement.
3.3.9. Special method lookup

For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object's type, not in the object's instance dictionary. That behaviour is the reason why the following code raises an exception:

>>> class C:
...     pass
>>> c = C()
>>> c.__len__ = lambda: 5
>>> len(c)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'C' has no len()

The rationale behind this behaviour lies with a number of special methods such as __hash__() and __repr__() that are implemented by all objects, including type objects. If the implicit lookup of these methods used the conventional lookup process, they would fail when invoked on the type object itself:

>>> 1 .__hash__() == hash(1)
True
>>> int.__hash__() == hash(int)
>>> int.__hash__() == hash(int)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor '__hash__' of 'int' object needs an argument

Incorrectly attempting to invoke an unbound method of a class in this way is sometimes referred to as 'metaclass confusion', and is avoided by bypassing the instance when looking up special methods:

>>> type(1).__hash__(1) == hash(1)
True
>>> type(int).__hash__(int) == hash(int)
True

In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object's metaclass:

>>> class Meta(type):
...     def __getattribute__(*args):
...         print("Metaclass getattribute invoked")
...         return type.__getattribute__(*args)
>>> class C(object, metaclass=Meta):
...     def __len__(self):
...         return 10
...     def __getattribute__(*args):
...         print("Class getattribute invoked")
...         return object.__getattribute__(*args)
>>> c = C()
>>> c.__len__()                 # Explicit lookup via instance
Class getattribute invoked
>>> type(c).__len__(c)          # Explicit lookup via type
Metaclass getattribute invoked
>>> len(c)                      # Implicit lookup
10

Bypassing the __getattribute__() machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods (the special method must be set on the class object itself in order to be consistently invoked by the interpreter).

3.4. Coroutines

3.4.1. Awaitable Objects

An awaitable object generally implements an __await__() method. Coroutine objects returned from async def functions are awaitable.


The generator iterator objects returned from generators decorated with types.coroutine() or asyncio.coroutine() are also awaitable, but they do not implement __await__() .

object.__await__(self)
Must return an iterator . Should be used to implement awaitable objects. For instance, asyncio.Future implements this method to be compatible with the await expression.
New in version 3.5.
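A minimal sketch of a hand-rolled awaitable (the Ready class is invented for this example): __await__ returns a generator-iterator, and the generator's return value becomes the result of the await expression:

```python
import asyncio

# Ready is an invented awaitable that completes immediately with a value.
class Ready:
    def __init__(self, value):
        self.value = value

    def __await__(self):
        # A generator that finishes at once; the StopIteration value it
        # raises becomes the result of `await Ready(...)`.
        return self.value
        yield  # never reached -- only makes this function a generator

async def main():
    return await Ready(5)

print(asyncio.run(main()))  # 5
```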

See also

PEP 492 for additional information about awaitable objects.

3.4.2. Coroutine Objects

Coroutine objects are awaitable objects. A coroutine's execution can be controlled by calling __await__() and iterating over the result. When the coroutine has finished executing and returns, the iterator raises StopIteration , and the exception's value attribute holds the return value. If the coroutine raises an exception, it is propagated by the iterator. Coroutines should not directly raise unhandled StopIteration exceptions.

Coroutines also have the methods listed below, which are analogous to those of generators (see Generator-iterator methods ). However, unlike generators, coroutines do not directly support iteration.

Changed in version 3.5.2: It is a RuntimeError to await on a coroutine more than once.
coroutine.send(value)
Starts or resumes execution of the coroutine. If value is None , this is equivalent to advancing the iterator returned by __await__() . If value is not None , this method delegates to the send() method of the iterator that caused the coroutine to suspend. The result (return value, StopIteration , or other exception) is the same as when iterating over the __await__() return value, described above.
coroutine.throw(type[, value[, traceback]])
Raises the specified exception in the coroutine. This method delegates to the throw() method of the iterator that caused the coroutine to suspend, if it has such a method. Otherwise, the exception is raised at the suspension point. The result (return value, StopIteration , or other exception) is the same as when iterating over the __await__() return value, described above. If the exception is not caught in the coroutine, it propagates back to the caller.
coroutine. close ()
Causes the coroutine to clean itself up and exit. If the coroutine is suspended, this method first delegates to the close() method of the iterator that caused the coroutine to suspend, if it has such a method. Then it raises GeneratorExit at the suspension point, causing the coroutine to immediately clean itself up. Finally, the coroutine is marked as having finished executing, even if it was never started.

Coroutine objects are automatically closed using the above process when they are about to be destroyed.
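These methods can be exercised by driving a coroutine by hand, the way an event loop would; in this sketch, Suspend is an invented awaitable that yields control once:

```python
# Suspend is an invented awaitable: its __await__ yields once, so the
# coroutine suspends and the next send() supplies the resume value.
class Suspend:
    def __await__(self):
        value = yield "suspended"
        return value

async def worker():
    got = await Suspend()
    return got * 2

coro = worker()
print(coro.send(None))   # starts execution; prints: suspended
try:
    coro.send(21)        # resumes the coroutine with the value 21
except StopIteration as exc:
    print(exc.value)     # the coroutine's return value: 42
```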

3.4.3. Asynchronous Iterators

An asynchronous iterable is able to call asynchronous code in its __aiter__ implementation, and an asynchronous iterator can call asynchronous code in its __anext__ method.

Asynchronous iterators can be used in an async for statement.

object.__aiter__(self)
Must return an asynchronous iterator object.
object.__anext__(self)
Must return an awaitable resulting in a next value of the iterator. Should raise a StopAsyncIteration error when the iteration is over.

An example of an asynchronous iterable object:

class Reader:
    async def readline(self):
        ...

    def __aiter__(self):
        return self

    async def __anext__(self):
        val = await self.readline()
        if val == b'':
            raise StopAsyncIteration
        return val
New in version 3.5.


Changed in version 3.5.2: Starting with CPython 3.5.2, __aiter__ can directly return asynchronous iterators . Returning an awaitable object will result in a PendingDeprecationWarning .

The recommended way of writing backwards compatible code in CPython 3.5.x is to continue returning awaitables from __aiter__ . If you want to avoid the PendingDeprecationWarning and keep the code backwards compatible, the following decorator can be used:

import functools
import sys

if sys.version_info < (3, 5, 2):
    def aiter_compat(func):
        @functools.wraps(func)
        async def wrapper(self):
            return func(self)
        return wrapper
else:
    def aiter_compat(func):
        return func


class AsyncIterator:

    @aiter_compat
    def __aiter__(self):
        return self

    async def __anext__(self):
        ...
Starting with CPython 3.6, the PendingDeprecationWarning will be replaced with the DeprecationWarning . In CPython 3.7, returning an awaitable from __aiter__ will result in a RuntimeError .

3.4.4. Asynchronous Context Managers

An asynchronous context manager is a context manager that is able to suspend execution in its __aenter__ and __aexit__ methods.

Asynchronous context managers can be used in an async with statement.

object.__aenter__(self)
This method is semantically similar to __enter__() , with the only difference being that it must return an awaitable .
object.__aexit__(self, exc_type, exc_value, traceback)
This method is semantically similar to __exit__() , with the only difference being that it must return an awaitable .

An example of an asynchronous context manager class:

class AsyncContextManager:
    async def __aenter__(self):
        await log('entering context')

    async def __aexit__(self, exc_type, exc, tb):
        await log('exiting context')
New in version 3.5.
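A class like the one above can be exercised with async with; in this runnable sketch the undefined log() call from the example is replaced by appending to a list:

```python
import asyncio

log = []  # stands in for the log() coroutine used in the example above

class AsyncContextManager:
    async def __aenter__(self):
        log.append('entering context')
        return self

    async def __aexit__(self, exc_type, exc, tb):
        log.append('exiting context')
        return False  # do not suppress exceptions

async def main():
    async with AsyncContextManager():
        log.append('inside the block')

asyncio.run(main())
print(log)  # ['entering context', 'inside the block', 'exiting context']
```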


[1] It is possible in some cases to change an object's type, under certain controlled conditions. It generally isn't a good idea though, since it can lead to some very strange behaviour if it is handled incorrectly.
[2] The __hash__() , __iter__() , __reversed__() , and __contains__() methods have special handling for this; others will still raise a TypeError , but may do so by relying on the behavior that None is not callable.
[3] "Does not support" here means that the class has no such method, or the method returns NotImplemented . Do not set the method to None if you want to force fallback to the right operand's reflected method -- that will instead have the opposite effect of explicitly blocking such fallback.
[4] For operands of the same type, it is assumed that if the non-reflected method (such as __add__() ) fails the operation is not supported, which is why the reflected method is not called.

[Dec 07, 2017] Variable's memory size in Python - Stack Overflow


casevh ,Jan 17, 2013 at 5:03

Regarding the internal structure of a Python long, check sys.int_info (or sys.long_info for Python 2.7).
>>> import sys
>>> sys.int_info
sys.int_info(bits_per_digit=30, sizeof_digit=4)

Python either stores 30 bits into 4 bytes (most 64-bit systems) or 15 bits into 2 bytes (most 32-bit systems). Comparing the actual memory usage with calculated values, I get

>>> import math, sys
>>> a=0
>>> sys.getsizeof(a)
24
>>> a=2**100
>>> sys.getsizeof(a)
40
>>> a=2**1000
>>> sys.getsizeof(a)
160
>>> 24+4*math.ceil(100/30)
40
>>> 24+4*math.ceil(1000/30)
160

There are 24 bytes of overhead for 0 since no bits are stored. The memory requirements for larger values match the calculated values.

If your numbers are so large that you are concerned about the 6.25% unused bits, you should probably look at the gmpy2 library. The internal representation uses all available bits and computations are significantly faster for large values (say, greater than 100 digits).

[Dec 07, 2017] Variables and scope -- Object-Oriented Programming in Python 1 documentation

Notable quotes:
"... class attributes ..."
"... instance attributes ..."
"... alter the existing value ..."
"... implicit conversion ..."

Variables and scope

Variables

Recall that a variable is a label for a location in memory. It can be used to hold a value. In statically typed languages, variables have predetermined types, and a variable can only be used to hold values of that type. In Python, we may reuse the same variable to store values of any type.

A variable is similar to the memory functionality found in most calculators, in that it holds one value which can be retrieved many times, and that storing a new value erases the old. A variable differs from a calculator's memory in that one can have many variables storing different values, and that each variable is referred to by name.

Defining variables

To define a new variable in Python, we simply assign a value to a label. For example, this is how we create a variable called count , which contains an integer value of zero:

count = 0

This is exactly the same syntax as assigning a new value to an existing variable called count . Later in this chapter we will discuss under what circumstances this statement will cause a new variable to be created.

If we try to access the value of a variable which hasn't been defined anywhere yet, the interpreter will exit with a name error.

We can define several variables in one line, but this is usually considered bad style:

# Define three variables at once:
count, result, total = 0, 0, 0

# This is equivalent to:
count = 0
result = 0
total = 0

In keeping with good programming style, we should make use of meaningful names for variables.

Variable scope and lifetime

Not all variables are accessible from all parts of our program, and not all variables exist for the same amount of time. Where a variable is accessible and how long it exists depend on how it is defined. We call the part of a program where a variable is accessible its scope , and the duration for which the variable exists its lifetime .

A variable which is defined in the main body of a file is called a global variable. It will be visible throughout the file, and also inside any file which imports that file. Global variables can have unintended consequences because of their wide-ranging effects – that is why we should almost never use them. Only objects which are intended to be used globally, like functions and classes, should be put in the global namespace.

A variable which is defined inside a function is local to that function. It is accessible from the point at which it is defined until the end of the function, and exists for as long as the function is executing. The parameter names in the function definition behave like local variables, but they contain the values that we pass into the function when we call it. When we use the assignment operator ( = ) inside a function, its default behaviour is to create a new local variable – unless a variable with the same name is already defined in the local scope.

Here is an example of variables in different scopes:

# This is a global variable
a = 0

if a == 0:
    # This is still a global variable
    b = 1

def my_function(c):
    # this is a local variable
    d = 3

# Now we call the function, passing the value 7 as the first and only parameter
my_function(7)

# a and b still exist
print(a)
print(b)

# c and d don't exist anymore -- these statements will give us name errors!
print(c)
print(d)


The inside of a class body is also a new local variable scope. Variables which are defined in the class body (but outside any class method) are called class attributes . They can be referenced by their bare names within the same scope, but they can also be accessed from outside this scope if we use the attribute access operator ( . ) on a class or an instance (an object which uses that class as its type). An attribute can also be set explicitly on an instance or class from inside a method. Attributes set on instances are called instance attributes . Class attributes are shared between all instances of a class, but each instance has its own separate instance attributes. We will look at this in greater detail in the chapter about classes.

The assignment operator

As we saw in the previous sections, the assignment operator in Python is a single equals sign ( = ). This operator assigns the value on the right hand side to the variable on the left hand side, sometimes creating the variable first. If the right hand side is an expression (such as an arithmetic expression), it will be evaluated before the assignment occurs. Here are a few examples:

a_number = 5              # a_number becomes 5
a_number = total          # a_number becomes the value of total
a_number = total + 5      # a_number becomes the value of total + 5
a_number = a_number + 1   # a_number becomes the value of a_number + 1

The last statement might look a bit strange if we were to interpret = as a mathematical equals sign – clearly a number cannot be equal to the same number plus one! Remember that = is an assignment operator – this statement is assigning a new value to the variable a_number which is equal to the old value of a_number plus one.

Assigning an initial value to variable is called initialising the variable. In some languages defining a variable can be done in a separate step before the first value assignment. It is thus possible in those languages for a variable to be defined but not have a value – which could lead to errors or unexpected behaviour if we try to use the value before it has been assigned. In Python a variable is defined and assigned a value in a single step, so we will almost never encounter situations like this.

The left hand side of the assignment statement must be a valid target:

# this is fine:
a = 3

# these are all illegal:
3 = 4
3 = a
a + b = 3

An assignment statement may have multiple targets separated by equals signs. The expression on the right hand side of the last equals sign will be assigned to all the targets. All the targets must be valid:

# both a and b will be set to zero:
a = b = 0

# this is illegal, because we can't set 0 to b:
a = 0 = b
Compound assignment operators

We have already seen that we can assign the result of an arithmetic expression to a variable:

total = a + b + c + 50

Counting is something that is done often in a program. For example, we might want to keep count of how many times a certain event occurs by using a variable called count . We would initialise this variable to zero and add one to it every time the event occurs. We would perform the addition with this statement:

count = count + 1

This is in fact a very common operation. Python has a shorthand operator, += , which lets us express it more cleanly, without having to write the name of the variable twice:

# These statements mean exactly the same thing:
count = count + 1
count += 1

# We can increment a variable by any number we like.
count += 2
count += 7
count += a + b

There is a similar operator, -= , which lets us decrement numbers:

# These statements mean exactly the same thing:
count = count - 3
count -= 3

Other common compound assignment operators are given in the table below:

Operator    Example     Equivalent to
+=          a += 5      a = a + 5
-=          a -= 5      a = a - 5
*=          a *= 5      a = a * 5
/=          a /= 5      a = a / 5
%=          a %= 5      a = a % 5
More about scope: crossing boundaries

What if we want to access a global variable from inside a function? It is possible, but doing so comes with a few caveats:

a = 0

def my_function():
    print(a)

my_function()

The print statement will output 0 , the value of the global variable a , as you probably expected. But what about this program?

a = 0

def my_function():
    a = 3
    print(a)

my_function()
print(a)

When we call the function, the print statement inside outputs 3 – but why does the print statement at the end of the program output 0 ?

By default, the assignment statement creates variables in the local scope. So the assignment inside the function does not modify the global variable a – it creates a new local variable called a , and assigns the value 3 to that variable. The first print statement outputs the value of the new local variable – because if a local variable has the same name as a global variable the local variable will always take precedence. The last print statement prints out the global variable, which has remained unchanged.

What if we really want to modify a global variable from inside a function? We can use the global keyword:

a = 0

def my_function():
    global a
    a = 3
    print(a)

my_function()
print(a)

We may not refer to both a global variable and a local variable by the same name inside the same function. This program will give us an error:

a = 0

def my_function():
    print(a)
    a = 3
    print(a)

my_function()
print(a)

Because we haven't declared a to be global, the assignment in the second line of the function will create a local variable a . This means that we can't refer to the global variable a elsewhere in the function, even before this line! The first print statement now refers to the local variable a – but this variable doesn't have a value in the first line, because we haven't assigned it yet!

Note that it is usually very bad practice to access global variables from inside functions, and even worse practice to modify them. This makes it difficult to arrange our program into logically encapsulated parts which do not affect each other in unexpected ways. If a function needs to access some external value, we should pass the value into the function as a parameter. If the function is a method of an object, it is sometimes appropriate to make the value an attribute of the same object – we will discuss this in the chapter about object orientation.


There is also a nonlocal keyword in Python – when we nest a function inside another function, it allows us to modify a variable in the outer function from inside the inner function (or, if the function is nested multiple times, a variable in one of the outer functions). If we use the global keyword, the assignment statement will create the variable in the global scope if it does not exist already. If we use the nonlocal keyword, however, the variable must be defined, because it is impossible for Python to determine in which scope it should be created.

Exercise 1

  1. Describe the scope of the variables a , b , c and d in this example:

    def my_function(a):
        b = a - 2
        return b
    c = 3
    if c > 2:
        d = my_function(5)
  2. What is the lifetime of these variables? When will they be created and destroyed?

  3. Can you guess what would happen if we were to assign c a value of 1 instead?

  4. Why would this be a problem? Can you think of a way to avoid it?

Modifying values

Constants

In some languages, it is possible to define special variables which can be assigned a value only once – once their values have been set, they cannot be changed. We call these kinds of variables constants . Python does not allow us to set such a restriction on variables, but there is a widely used convention for marking certain variables to indicate that their values are not meant to change: we write their names in all caps, with underscores separating words:

# These variables are "constants" by convention:
NUMBER_OF_DAYS_IN_A_WEEK = 7
NUMBER_OF_MONTHS_IN_A_YEAR = 12

# Nothing is actually stopping us from redefining them...
NUMBER_OF_DAYS_IN_A_WEEK = 8

# ...but it's probably not a good idea.

Why do we bother defining variables that we don't intend to change? Consider this example:


MAXIMUM_MARK = 80

tom_mark = 58
print("Tom's mark is %.2f%%" % (tom_mark / MAXIMUM_MARK * 100))
# %% is how we escape a literal % inside a string

There are several good reasons to define MAXIMUM_MARK instead of just writing 80 inside the print statement. First, this gives the number a descriptive label which explains what it is – this makes the code more understandable. Second, we may eventually need to refer to this number in our program more than once. If we ever need to update our code with a new value for the maximum mark, we will only have to change it in one place, instead of finding every place where it is used – such replacements are often error-prone.

Literal numbers scattered throughout a program are known as "magic numbers" – using them is considered poor coding style. This does not apply to small numbers which are considered self-explanatory – it's easy to understand why a total is initialised to zero or incremented by one.

Sometimes we want to use a variable to distinguish between several discrete options. It is useful to refer to the option values using constants instead of using them directly if the values themselves have no intrinsic meaning:

# We define some options
LOWER, UPPER, CAPITAL = 1, 2, 3

name = "jane"
# We use our constants when assigning these values...
print_style = UPPER

# ...and when checking them:
if print_style == LOWER:
    print(name.lower())
elif print_style == UPPER:
    print(name.upper())
elif print_style == CAPITAL:
    print(name.capitalize())
else:
    # Nothing prevents us from accidentally setting print_style to 4, 90 or
    # "spoon", so we put in this fallback just in case:
    print("Unknown style option!")

In the above example, the values 1 , 2 and 3 are not important – they are completely meaningless. We could equally well use 4 , 5 and 6 or the strings 'lower' , 'upper' and 'capital' . The only important thing is that the three values must be different. If we used the numbers directly instead of the constants the program would be much more confusing to read. Using meaningful strings would make the code more readable, but we could accidentally make a spelling mistake while setting one of the values and not notice – if we mistype the name of one of the constants we are more likely to get an error straight away.

Some Python libraries define common constants for our convenience, for example:

# we need to import these libraries before we use them
import string
import math
import re

# All the lowercase ASCII letters: 'abcdefghijklmnopqrstuvwxyz'
print(string.ascii_lowercase)

# The mathematical constants pi and e, both floating-point numbers
print(math.pi) # ratio of circumference of a circle to its diameter
print(math.e) # natural base of logarithms

# This integer is an option which we can pass to functions in the re
# (regular expression) library.
print(re.IGNORECASE)

Note that many built-in constants don't follow the all-caps naming convention.

Mutable and immutable types

Some values in Python can be modified, and some cannot. This does not mean that we can't change the value of a variable – it means that if a variable contains a value of an immutable type , we can only assign it a new value . We cannot alter the existing value in any way.

Integers, floating-point numbers and strings are all immutable types – in all the previous examples, when we changed the values of existing variables we used the assignment operator to assign them new values:

a = 3
a = 2

b = "jane"
b = "bob"

Even this operator doesn't modify the value of total in-place – it also assigns a new value:

total += 4

We haven't encountered any mutable types yet, but we will use them extensively in later chapters. Lists and dictionaries are mutable, and so are most objects that we are likely to write ourselves:

# this is a list of numbers
my_list = [1, 2, 3]
my_list[0] = 5 # we can change just the first element of the list

class MyClass(object):
    pass # this is a very silly class

# Now we make a very simple object using our class as a type
my_object = MyClass()

# We can change the values of attributes on the object
my_object.some_property = 42
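The difference can be observed with id(), which returns an object's identity: += rebinds an immutable int to a new object, but mutates a list in place. A small sketch:

```python
# An immutable int: += produces a brand-new object.
n = 10
n_before = id(n)
n += 1
print(id(n) == n_before)  # False -- n now refers to a different object

# A mutable list: += calls list.__iadd__, which extends it in place.
items = [1, 2]
items_before = id(items)
items += [3]
print(id(items) == items_before)  # True -- still the same list object
print(items)                      # [1, 2, 3]
```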
More about input

In the earlier sections of this unit we learned how to make a program display a message using the print function or read a string value from the user using the input function. What if we want the user to input numbers or other types of variables? We still use the input function, but we must convert the string values returned by input to the types that we want. Here is a simple example:

height = int(input("Enter height of rectangle: "))
width = int(input("Enter width of rectangle: "))

print("The area of the rectangle is %d" % (width * height))

int is a function which converts values of various types to ints. We will discuss type conversion in greater detail in the next section, but for now it is important to know that int will not be able to convert a string to an integer if it contains anything except digits. The program above will exit with an error if the user enters "aaa" , "zzz10" or even "7.5" . When we write a program which relies on user input, which can be incorrect, we need to add some safeguards so that we can recover if the user makes a mistake. For example, we can detect if the user entered bad input and exit with a nicer error message:

try:
    height = int(input("Enter height of rectangle: "))
    width = int(input("Enter width of rectangle: "))
except ValueError as e: # if a value error occurs, we will skip to this point
    print("Error reading height and width: %s" % e)

This program will still only attempt to read in the input once, and exit if it is incorrect. If we want to keep asking the user for input until it is correct, we can do something like this:

correct_input = False # this is a boolean value -- it can be either true or false.

while not correct_input: # this is a while loop
    try:
        height = int(input("Enter height of rectangle: "))
        width = int(input("Enter width of rectangle: "))
    except ValueError:
        print("Please enter valid integers for the height and width.")
    else: # this will be executed if there is no value error
        correct_input = True

We will learn more about boolean values, loops and exceptions later.

Example: calculating petrol consumption of a car

In this example, we will write a simple program which asks the user for the distance travelled by a car, and the monetary value of the petrol that was used to cover that distance. From this information, together with the price per litre of petrol, the program will calculate the efficiency of the car, both in litres per 100 kilometres and kilometres per litre.

First we will define the petrol price as a constant at the top. This will make it easy for us to update the price when it changes on the first Wednesday of every month:

PETROL_PRICE_PER_LITRE = 21.55 # assumed example price in rand; update when the price changes
When the program starts, we want to print out a welcome message:

print("*** Welcome to the fuel efficiency calculator! ***\n")
# we add an extra blank line after the message with \n

Ask the user for his or her name:

name = input("Enter your name: ")

Ask the user for the distance travelled:

# float is a function which converts values to floating-point numbers.
distance_travelled = float(input("Enter distance travelled in km: "))

Then ask the user for the amount paid:

amount_paid = float(input("Enter monetary value of fuel bought for the trip: R"))

Now we will do the calculations:

fuel_consumed = amount_paid / PETROL_PRICE_PER_LITRE

efficiency_l_per_100_km = fuel_consumed / distance_travelled * 100
efficiency_km_per_l = distance_travelled / fuel_consumed

Finally, we output the results:

print("Hi, %s!" % name)
print("Your car's efficiency is %.2f litres per 100 km." % efficiency_l_per_100_km)
print("This means that you can travel %.2f km on a litre of petrol." % efficiency_km_per_l)

# we add an extra blank line before the message with \n
print("\nThanks for using the program.")
Exercise 2
  1. Write a Python program to convert a temperature given in degrees Fahrenheit to its equivalent in degrees Celsius. You can assume that T_c = (5/9) x (T_f - 32) , where T_c is the temperature in °C and T_f is the temperature in °F. Your program should ask the user for an input value, and print the output. The input and output values should be floating-point numbers.
  2. What could make this program crash? What would we need to do to handle this situation more gracefully?
Type conversion

As we write more programs, we will often find that we need to convert data from one type to another, for example from a string to an integer or from an integer to a floating-point number. There are two kinds of type conversions in Python: implicit and explicit conversions.

Implicit conversion

Recall from the section about floating-point operators that we can arbitrarily combine integers and floating-point numbers in an arithmetic expression – and that the result of any such expression will always be a floating-point number. This is because Python will convert the integers to floating-point numbers before evaluating the expression. This is an implicit conversion – we don't have to convert anything ourselves. There is usually no loss of precision when an integer is converted to a floating-point number.

For example, the integer will automatically be converted to a floating-point number in the following example:

result = 8.5 * 2

8.5 is a float while 2 is an int . Python will automatically convert operands so that they are of the same type. In this case this is achieved by converting the integer 2 to the floating-point equivalent 2.0 . Then the two floating-point numbers can be multiplied.

Let's have a look at a more complex example:

result = 8.5 + 7 // 3 - 2.5
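We can check each step of this evaluation in code (the explanation below walks through the same steps):

```python
# Step-by-step check of how Python evaluates the mixed expression.
result = 8.5 + 7 // 3 - 2.5

assert 7 // 3 == 2        # integer division runs first (highest precedence)
assert 8.5 + 2 == 10.5    # the 2 is implicitly converted to 2.0 before the addition
print(result)             # 8.0 (10.5 - 2.5)
```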

Python performs operations according to the order of precedence, and decides whether a conversion is needed on a per-operation basis. In our example // has the highest precedence, so it will be processed first. 7 and 3 are both integers and // is the integer division operator – the result of this operation is the integer 2 . Now we are left with 8.5 + 2 - 2.5 . The addition and subtraction are at the same level of precedence, so they are evaluated left-to-right, starting with the addition. First 2 is converted to the floating-point number 2.0 , and the two floating-point numbers are added, which leaves us with 10.5 - 2.5 . The result of this floating-point subtraction is 8.0 , which is assigned to result .

Explicit conversion

Converting numbers from float to int will result in a loss of precision. For example, try to convert 5.834 to an int – it is not possible to do this without losing precision. In order for this to happen, we must explicitly tell Python that we are aware that precision will be lost. For example, we need to tell the compiler to convert a float to an int like this:

i = int(5.834)
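A quick check of this truncation behaviour; note that for negative numbers discarding the fractional part rounds towards zero, not down:

```python
print(int(5.834))   # 5
print(int(-5.834))  # -5, not -6: int() truncates toward zero
```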

The int function converts a float to an int by discarding the fractional part – it truncates toward zero, which rounds down for positive numbers but up for negative ones. If we want more control over the way in which the number is rounded, we will need to use a different function:

# the floor and ceil functions are in the math module
import math

# ceil returns the closest integer greater than or equal to the number
# (so it always rounds up)
i = math.ceil(5.834)

# floor returns the closest integer less than or equal to the number
# (so it always rounds down)
i = math.floor(5.834)

# round returns the closest integer to the number
# (so it rounds up or down)
# Note that this is a built-in function -- we don't need to import math to use it.
i = round(5.834)
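One detail worth knowing: in Python 3, round uses "banker's rounding" for exact halves, picking the nearest even integer rather than always rounding .5 up:

```python
print(round(5.834))  # 6
print(round(2.5))    # 2, not 3: exact halves go to the nearest even integer
print(round(3.5))    # 4
```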

Explicit conversion is sometimes also called casting – we may read about a float being cast to int or vice-versa.

Converting to and from strings

As we saw in the earlier sections, Python seldom performs implicit conversions to and from str – we usually have to convert values explicitly. If we pass a single number (or any other value) to the print function, it will be converted to a string automatically – but if we try to add a number and a string, we will get an error:

# This is OK
print(3)
# This is not OK
print("3" + 4)

# Do you mean this...
print("3%d" % 4) # concatenate "3" and "4" to get "34"

# Or this?
print(int("3") + 4) # add 3 and 4 to get 7

To convert numbers to strings, we can use string formatting – this is usually the cleanest and most readable way to insert multiple values into a message. If we want to convert a single number to a string, we can also use the str function explicitly:

# These lines will do the same thing
print("3%d" % 4)
print("3" + str(4))
More about conversions

In Python, functions like str , int and float will try to convert anything to their respective types – for example, we can use the int function to convert strings to integers or to convert floating-point numbers to integers. Note that although int can convert a float to an integer it can't convert a string containing a float to an integer directly!

# This is OK
int("3")

# This is OK
int(7.5)

# This is not OK
int("3.7") # This is a string representation of a float, not an integer!

# We have to convert the string to a float first
int(float("3.7"))
Values of type bool can contain the value True or False . These values are used extensively in conditional statements, which execute or do not execute parts of our program depending on some binary condition:

my_flag = True

if my_flag:
    print("The flag is set!")

The condition is often an expression which evaluates to a boolean value:

if 3 > 4:
    print("This will not be printed.")

However, almost any value can implicitly be converted to a boolean if it is used in a statement like this:

my_number = 3

if my_number:
    print("My number is non-zero!")

This usually behaves in the way that you would expect: non-zero numbers are True values and zero is False . However, we need to be careful when using strings – the empty string is treated as False , but any other string is True – even "0" and "False" !

# bool is a function which converts values to booleans
bool(34) # True
bool(0) # False
bool(1) # True

bool("") # False
bool("Jane") # True
bool("0") # True!
bool("False") # Also True!
Exercise 3
  1. Convert "8.8" to a float.
  2. Convert 8.8 to an integer (with rounding).
  3. Convert "8.8" to an integer (with rounding).
  4. Convert 8.8 to a string.
  5. Convert 8 to a string.
  6. Convert 8 to a float.
  7. Convert 8 to a boolean.
Answers to exercises

Answer to exercise 1
  1. is a local variable in the scope of my_function because it is an argument name. is also a local variable inside my_function , because it is assigned a value inside my_function . and are both global variables. It doesn't matter that is created inside an if block, because the inside of an if block is not a new scope – everything inside the block is part of the same scope as the outside (in this case the global scope). Only function definitions (which start with def ) and class definitions (which start with class ) indicate the start of a new level of scope.
  2. Both and will be created every time my_function is called and destroyed when my_function has finished executing. is created when it is assigned the value , and exists for the remainder of the program's execution. is created inside the if block (when it is assigned the value which is returned from the function), and also exists for the remainder of the program's execution.
  3. As we will learn in the next chapter, if blocks are executed conditionally . If were not greater than in this program, the if block would not be executed, and if that were to happen the variable would never be created.
  4. We may use the variable later in the code, assuming that it always exists, and have our program crash unexpectedly if it doesn't. It is considered poor coding practice to allow a variable to be defined or undefined depending on the outcome of a conditional statement. It is better to ensure that is always defined, no matter what – for example, by assigning it some default value at the start. It is much easier and cleaner to check if a variable has the default value than to check whether it exists at all.
Answer to exercise 2
  1. Here is an example program:

    T_f = float(input("Please enter a temperature in °F: "))
    T_c = (5/9) * (T_f - 32)
    print("%g°F = %g°C" % (T_f, T_c))


    The formatting symbol %g is used with floats, and instructs Python to pick a sensible human-readable way to display the float.

  2. The program could crash if the user enters a value which cannot be converted to a floating-point number. We would need to add some kind of error checking to make sure that this doesn't happen – for example, by storing the string value and checking its contents. If we find that the entered value is invalid, we can either print an error message and exit or keep prompting the user for input until valid input is entered.
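One way to sketch that error checking is to separate parsing from prompting, so the validation logic is easy to test (parse_temperature is a hypothetical helper name, not part of the exercise):

```python
def parse_temperature(text):
    """Return (value, None) on success, or (None, error message) on bad input."""
    try:
        return float(text), None
    except ValueError:
        return None, "%r is not a valid number" % text

value, error = parse_temperature("71.6")
print(value)   # 71.6
value, error = parse_temperature("abc")
print(error)   # 'abc' is not a valid number
```

A real program would call this in a loop, re-prompting with input() until error is None.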

Answer to exercise 3

Here are example answers:

a_1 = float("8.8")
a_2 = round(8.8)
a_3 = round(float("8.8"))
a_4 = "%g" % 8.8
a_5 = "%d" % 8
a_6 = float(8)
a_7 = bool(8)
© Copyright 2013, 2014, University of Cape Town and individual contributors. This work is released under the CC BY-SA 4.0 licence. Revision 8e685e710775 . Built with Sphinx using a theme provided by Read the Docs .

[Dec 07, 2017] BitManipulation - Python Wiki


Here is some information and goals related to Python bit manipulation, binary manipulation.

Some tasks include:

Relevant libraries include:

Some simple code is at ASPN: bit-field manipulation.

Here are some other examples.


To integer.

>>> print int('00100001', 2)
33

To hex string. Note that the number of bits does not need to be a multiple of 8.

>>> print "0x%x" % int('11111111', 2)
0xff
>>> print "0x%x" % int('0110110110', 2)
0x1b6
>>> print "0x%x" % int('0010101110101100111010101101010111110101010101', 2)
0xaeb3ab57d55

To character. 8 bits max.

>>> chr(int('111011', 2))
';'
>>> chr(int('1110110', 2))
'v'
>>> chr(int('11101101', 2))
'\xed'

Characters to integers, but not to strings of 1's and 0's.

>>> int('01110101', 2)
117
>>> chr(int('01110101', 2))
'u'
>>> ord('u')
117

Individual bits.

>>> 1 << 0
1
>>> 1 << 1
2
>>> 1 << 2
4
>>> 1 << 3
8
>>> 1 << 4
16
>>> 1 << 5
32
>>> 1 << 6
64
>>> 1 << 7
128
Transformations Summary

Strings to Integers:

Integers to Strings:

We are still left without a technique for producing binary strings, and deciphering hex strings.

Hex String to Integer

Use the int type with the base argument:

>>> int('0xff',16)
255
>>> int('d484fa894e',16)
912764078414

Do not use alternatives that utilize eval. eval will execute code passed to it and can thus compromise the security of your program.
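Going the other way, from integer back to hex string, is covered by the built-in formatting tools, so the round trip needs no eval at all:

```python
n = int('d484fa894e', 16)
print(n)               # 912764078414
print("%x" % n)        # d484fa894e
print(format(n, 'x'))  # d484fa894e, same result without the % operator
print(hex(n))          # 0xd484fa894e, with the 0x prefix
```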

Integer to Bin String

Python 3 supports binary literals (e.g. 0b10011000) and has a bin() function. For older versions:

>>> def bin(a):
        s=''
        t={'0':'000','1':'001','2':'010','3':'011',
           '4':'100','5':'101','6':'110','7':'111'}
        for c in oct(a)[1:]:
                s+=t[c]
        return s

or better:

def bin(s):
    return str(s) if s<=1 else bin(s>>1) + str(s&1)
Python Integers

From "The Python Language Reference" page on the Data Model:

"Integers (int) These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2's complement which gives the illusion of an infinite string of sign bits extending to the left."

Prior to Python 3.1, there was no easy way to determine how Python represented a specific integer internally, i.e. how many bits were used. Python 3.1 adds a bit_length() method to the int type that does exactly that.

Unless you know you are working with numbers that are less than a certain length, for instance numbers from arrays of integers, shifts, rotations, etc. may give unexpected results.

The number of the highest bit set is the exponent of the highest power of 2 less than or equal to the input integer. This is the same as the exponent of the floating point representation of the integer, and is also called its "integer log base 2".(ref.1)

In versions before 3.1, the easiest way to determine the highest bit set is*:

* There is a long discussion on this topic, and why this method is not good, in "Issue 3439" at This discussion led up to the addition of bit_length() in Python 3.1.

import math

hiBit = math.floor(math.log(int_type, 2))

An input less than or equal to 0 results in a " ValueError : math domain error"

The section "Finding integer log base 2 of an integer" on the "Bit Twiddling Hacks"(ref.1) web page includes a number of methods for determining this value for integers of known magnitude, presumably when no math coprocessor is available. The only method generally applicable to Python integers of unknown magnitude is the "obvious way" of counting the number of bitwise shift operations needed to reduce the input to 0.
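With the bit_length() method mentioned above (Python 3.1+), the integer log base 2 can be computed exactly, with no floating point involved (hi_bit is a hypothetical helper name):

```python
def hi_bit(n):
    """Position of the highest set bit (integer log base 2); n must be positive."""
    if n <= 0:
        raise ValueError("n must be positive")
    return n.bit_length() - 1

print(hi_bit(1))    # 0
print(hi_bit(255))  # 7
print(hi_bit(256))  # 8
```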

Bit Length Of a Python Integer

bitLen() counts the actual bit length of a Python integer, that is, the position of the highest non-zero bit plus 1 . Zero, with no non-zero bit, returns 0. As should be expected from the quote above about "the illusion of an infinite string of sign bits extending to the left," a negative number sends the function into an infinite loop.

The function can return any result up to the length of the largest integer your computer's memory can hold.

def bitLen(int_type):
    length = 0
    while (int_type):
        int_type >>= 1
        length += 1
    return(length)

for i in range(17):
    print(bitLen(i))

# results: 0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5

The method using the math module is much faster, especially on huge numbers with hundreds of decimal digits.


In common usage, the "bit count" of an integer is the number of set (1) bits, not the bit length of the integer described above. bitLen() can be modified to also provide the count of the number of set bits in the integer. There are faster methods to get the count below.

def bitLenCount(int_type):
    length = 0
    count = 0
    while (int_type):
        count += (int_type & 1)
        length += 1
        int_type >>= 1
    return(length, count)
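A shorter pure-Python way to get just the set-bit count is to count the '1' characters in the binary representation; Python 3.10+ also provides a built-in method for this:

```python
n = 0b101101
print(bin(n).count("1"))  # 4 set bits
# On Python 3.10 and later the built-in equivalent is: n.bit_count()
```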
Operations on Integers of Unknown Magnitude

Some procedures don't need to know the magnitude of an integer to give meaningful results.


The procedure and the information below were found in "Bit Twiddling Hacks"(ref.1)

Counting bits set, Brian Kernighan's way*

unsigned int v;          // count the number of bits set in v
unsigned int c;          // c accumulates the total bits set in v
for (c = 0; v; c++)
{   v &= v - 1;  }       // clear the least significant bit set

This method goes through as many iterations as there are set bits. So if we have a 32-bit word with only the high bit set, then it will only go once through the loop.

* The C Programming Language 2nd Ed., Kernighan & Ritchie, 1988.

Don Knuth pointed out that this method was published by Peter Wegner in CACM 3 (1960), 322. Also discovered independently by Derrick Lehmer and published in 1964 in a book edited by Beckenbach.

Kernighan and Knuth, potent endorsements!

This works because each subtraction "borrows" from the lowest 1-bit. For example:

#       loop pass 1                 loop pass 2
#      101000     101000           100000     100000
#    -      1   & 100111         -      1   & 011111
#    = 100111   = 100000         = 011111   =      0

It is an excellent technique for Python, since the size of the integer need not be determined beforehand.

def bitCount(int_type):
    count = 0
    while(int_type):
        int_type &= int_type - 1
        count += 1
    return(count)

From "Bit Twiddling Hacks"

Code almost identical to bitCount(), above, calculates the parity of an integer, returning 0 if there are an even number of set bits, and -1 if there are an odd number. In fact, counting the bits and checking whether the result is odd with bitcount & 1 is about the same speed as the parity function.

def parityOf(int_type):
    parity = 0
    while (int_type):
        parity = ~parity
        int_type = int_type & (int_type - 1)
    return(parity)

To determine the bit number of the lowest bit set in an integer, in twos-complement notation i & -i zeroes all but the lowest set bit. The bitLen() procedure then determines its position. Obviously, negative numbers return the same result as their opposite. In this version, an input of 0 returns -1, in effect an error condition.

For example:

#    00111000     # 56
#    11001000     # twos complement, -56
# &= 00001000
def lowestSet(int_type):
    low = (int_type & -int_type)
    lowBit = -1
    while (low):
        low >>= 1
        lowBit += 1
    return(lowBit)
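Combined with bit_length() (Python 3.1+), the counting loop can be avoided entirely (lowest_set is a hypothetical helper name for this variant):

```python
def lowest_set(n):
    """Bit number of the lowest set bit, or -1 for an input of 0."""
    if n == 0:
        return -1
    return (n & -n).bit_length() - 1

print(lowest_set(56))   # 3  (56 = 0b111000)
print(lowest_set(-56))  # 3  (same as its opposite)
print(lowest_set(0))    # -1
```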
Single bits

The usual single-bit operations will work on any Python integer. It is up to the programmer to be sure that the value of 'offset' makes sense in the context of the program.

# testBit() returns a nonzero result, 2**offset, if the bit at 'offset' is one.
def testBit(int_type, offset):
    mask = 1 << offset
    return(int_type & mask)

# setBit() returns an integer with the bit at 'offset' set to 1.
def setBit(int_type, offset):
    mask = 1 << offset
    return(int_type | mask)

# clearBit() returns an integer with the bit at 'offset' cleared.
def clearBit(int_type, offset):
    mask = ~(1 << offset)
    return(int_type & mask)

# toggleBit() returns an integer with the bit at 'offset' inverted, 0 -> 1 and 1 -> 0.
def toggleBit(int_type, offset):
    mask = 1 << offset
    return(int_type ^ mask)
Bit fields, e.g. for communication protocols

If you need to interpret individual bits in some data, e.g. a byte stream in a communications protocol, you can use the ctypes module.

import ctypes
c_uint8 = ctypes.c_uint8

class Flags_bits( ctypes.LittleEndianStructure ):
    _fields_ = [
                ("logout",     c_uint8, 1 ),  # asByte & 1
                ("userswitch", c_uint8, 1 ),  # asByte & 2
                ("suspend",    c_uint8, 1 ),  # asByte & 4
                ("idle",       c_uint8, 1 ),  # asByte & 8
               ]

class Flags( ctypes.Union ):
    _anonymous_ = ("bit",)
    _fields_ = [
                ("bit",    Flags_bits ),
                ("asByte", c_uint8    )
               ]

flags = Flags()
flags.asByte = 0x2  # ->0010

print( "logout: %i"      % flags.bit.logout   )
# `bit` is defined as anonymous field, so its fields can also be accessed directly:
print( "logout: %i"      % flags.logout     )
print( "userswitch:  %i" % flags.userswitch )
print( "suspend   :  %i" % flags.suspend    )
print( "idle  : %i"      % flags.idle       )
logout: 0
logout: 0
userswitch:  1
suspend   :  0
idle  : 0
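For simple cases the same flags byte can be unpacked without ctypes at all, using plain shifts and masks; this sketch reads the same four one-bit fields as the structure above:

```python
as_byte = 0x2  # ->0010, the same test value as above

logout     = (as_byte >> 0) & 1
userswitch = (as_byte >> 1) & 1
suspend    = (as_byte >> 2) & 1
idle       = (as_byte >> 3) & 1

print(logout, userswitch, suspend, idle)  # 0 1 0 0
```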

ref.1. "Bit Twiddling Hacks" By Sean Eron Anderson

ref.2. "The Art of Assembly Language" by Randall Hyde

ref.3. Hacker's Delight

[Dec 07, 2017] - Understanding Python variables and Memory Management


Understanding Python variables and Memory Management Jul 08, 2012

Have you ever noticed any difference between variables in Python and C? For example, when you do an assignment like the following in C, it actually creates a block of memory space so that it can hold the value for that variable.

int a = 1;

You can think of it as putting the value assigned in a box with the variable name as shown below.


And for all the variables you create a new box is created with the variable name to hold the value. If you change the value of the variable the box will be updated with the new value. That means doing

a = 2;

will result in


Assigning one variable to another makes a copy of the value and puts that value in the new box.

int b = a;


But in Python variables work more like tags unlike the boxes you have seen before. When you do an assignment in Python, it tags the value with the variable name.

a = 1


and if you change the value of the variable, it just moves the tag to the new value in memory. You don't need to do the housekeeping job of freeing the memory here. Python's automatic garbage collection does it for you. When a value is without names/tags it is automatically removed from memory.

a = 2


Assigning one variable to another makes a new tag bound to the same value, as shown below.

b = a


Other languages have 'variables'. Python has 'names'.

A bit about Python's memory management

As you have seen before, a value has only one copy in memory, and all the variables having this value refer to this memory location. For example, when you have the variables a , b , c holding the value 10, it doesn't mean that there will be 3 copies of 10 in memory. There will be only one 10, and all the variables a , b , c will point to this value. Once a variable is updated, say you are doing a += 1 , a new value 11 will be allocated in memory and a will be pointing to this.

Let's check this behaviour with Python Interpreter. Start the Python Shell and try the following for yourselves.

>>> a = 10
>>> b = 10
>>> c = 10
>>> id(a), id(b), id(c)
(140621897573616, 140621897573616, 140621897573616)
>>> a += 1
>>> id(a)

id() returns an object's memory address (the object's identity) in CPython. As you have noticed, when you assign the same integer value to the variables, we see the same ids. But this assumption does not hold true all the time. See the following for example

>>> x = 500
>>> y = 500
>>> id(x)
>>> id(y)

What happened here? Even after assigning the same integer values to different variable names, we are getting two different ids here. This is an effect of a CPython optimization: the CPython implementation keeps an array of integer objects for all integers between -5 and 256, so when we create an integer in that range, the name simply references the existing object. You may refer to the following links for more information.

Stack Overflow: "is" operator behaves unexpectedly with integers
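The cache boundary can be demonstrated directly. This is CPython-specific behaviour (other implementations may differ), and int() is used here to force a fresh parse so the comparison is not folded away at compile time:

```python
# CPython caches ints in -5..256 as singletons.
small_a = 7
small_b = int("7")          # parsing returns the cached object
print(small_a is small_b)   # True

big_a = 1024
big_b = int("1024")         # a fresh object, outside the cached range
print(big_a is big_b)       # False in CPython
```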

Let's take a look at strings now.

>>> s1 = 'hello'
>>> s2 = 'hello'
>>> id(s1), id(s2)
(4454725888, 4454725888)
>>> s1 == s2
True
>>> s1 is s2
True
>>> s3 = 'hello, world!'
>>> s4 = 'hello, world!'
>>> id(s3), id(s4)
(4454721608, 4454721664)
>>> s3 == s4
True
>>> s3 is s4
False

Looks interesting, doesn't it? When the string was a simple and shorter one, the variable names were referring to the same object in memory. But when they became bigger, this was not the case. This is called interning, and Python does interning (to some extent) of shorter string literals (as in s1 and s2 ) which are created at compile time. But in general, Python string literals create a new string object each time (as in s3 and s4 ). Interning is runtime dependent and is always a trade-off between memory use and the cost of checking if you are creating the same string. There's a built-in intern() function to forcefully apply interning. Read more about interning from the following links.

Stack Overflow: Does Python intern Strings?
Stack Overflow: Python String Interning
Internals of Python String Interning
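In Python 3 the intern() function mentioned above moved into the sys module; forcing interning makes even the longer literals share one object:

```python
import sys

s3 = sys.intern('hello, world!')
s4 = sys.intern('hello, world!')
print(s3 is s4)  # True: both names now refer to the single interned copy
```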

Now we will try to create custom objects and try to find their identities.

>>> class Foo:
...     pass
>>> bar = Foo()
>>> baz = Foo()
>>> id(bar)
>>> id(baz)

As you can see, the two instances have different identities. That means they are two distinct objects in memory. When you are creating custom objects, they will have unique identities unless you are using the Singleton Pattern, which overrides this behaviour (in __new__() ) by giving out the same instance upon instance creation.
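A minimal sketch of that Singleton Pattern via __new__(), showing that both "instances" share one identity:

```python
class Singleton:
    _instance = None

    def __new__(cls):
        # Create the single instance on first call, then keep handing it out.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

bar = Singleton()
baz = Singleton()
print(bar is baz)          # True
print(id(bar) == id(baz))  # True: one object, one identity
```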

Thanks to Jay Pablo (see comments) for correcting the mistakes and making this post a better one.

[Dec 07, 2017] In what structure is a Python object stored in memory - Stack Overflow


In what structure is a Python object stored in memory? [duplicate]

Nov 1, 2010

Possible Duplicate:
How do I determine the size of an object in Python?

Say I have a class A:

class A(object):
    def __init__(self, x):
        self.x = x

    def __str__(self):
        return self.x

And I use sys.getsizeof to see how many bytes instance of A takes:

>>> sys.getsizeof(A(1))
>>> sys.getsizeof(A('a'))
>>> sys.getsizeof(A('aaa'))

As illustrated in the experiment above, the size of an A object is the same no matter what self.x is.

So I wonder how Python stores an object internally?

Björn Pollex ,Oct 31, 2010 at 10:38

This is certain to differ over python implementations. Which one are you talking about? – Björn Pollex Oct 31 '10 at 10:38

Thomas Wouters ,Oct 31, 2010 at 11:26

It depends on what kind of object, and also which Python implementation :-)

In CPython, which is what most people use when they use python , all Python objects are represented by a C struct, PyObject . Everything that 'stores an object' really stores a PyObject * . The PyObject struct holds the bare minimum information: the object's type (a pointer to another PyObject ) and its reference count (an ssize_t -sized integer.) Types defined in C extend this struct with extra information they need to store in the object itself, and sometimes allocate extra data separately.

For example, tuples (implemented as a PyTupleObject "extending" a PyObject struct) store their length and the PyObject pointers they contain inside the struct itself (the struct contains a 1-length array in the definition, but the implementation allocates a block of memory of the right size to hold the PyTupleObject struct plus exactly as many items as the tuple should hold.) The same way, strings ( PyStringObject ) store their length, their cached hashvalue, some string-caching ("interning") bookkeeping, and the actual char* of their data. Tuples and strings are thus single blocks of memory.

On the other hand, lists ( PyListObject ) store their length, a PyObject ** for their data and another ssize_t to keep track of how much room they allocated for the data. Because Python stores PyObject pointers everywhere, you can't grow a PyObject struct once it's allocated -- doing so may require the struct to move, which would mean finding all pointers and updating them. Because a list may need to grow, it has to allocate the data separately from the PyObject struct. Tuples and strings cannot grow, and so they don't need this. Dicts ( PyDictObject ) work the same way, although they store the key, the value and the cached hashvalue of the key, instead of just the items. Dict also have some extra overhead to accommodate small dicts and specialized lookup functions.

But these are all types in C, and you can usually see how much memory they would use just by looking at the C source. Instances of classes defined in Python rather than C are not so easy. The simplest case, instances of classic classes, is not so difficult: it's a PyObject that stores a PyObject * to its class (which is not the same thing as the type stored in the PyObject struct already), a PyObject * to its __dict__ attribute (which holds all other instance attributes) and a PyObject * to its weakreflist (which is used by the weakref module, and only initialized if necessary.) The instance's __dict__ is usually unique to the instance, so when calculating the "memory size" of such an instance you usually want to count the size of the attribute dict as well. But it doesn't have to be specific to the instance! __dict__ can be assigned to just fine.

New-style classes complicate manners. Unlike with classic classes, instances of new-style classes are not separate C types, so they do not need to store the object's class separately. They do have room for the __dict__ and weakreflist reference, but unlike classic instances they don't require the __dict__ attribute for arbitrary attributes. if the class (and all its baseclasses) use __slots__ to define a strict set of attributes, and none of those attributes is named __dict__ , the instance does not allow arbitrary attributes and no dict is allocated. On the other hand, attributes defined by __slots__ have to be stored somewhere . This is done by storing the PyObject pointers for the values of those attributes directly in the PyObject struct, much like is done with types written in C. Each entry in __slots__ will thus take up a PyObject * , regardless of whether the attribute is set or not.
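The __dict__ versus __slots__ distinction described above is easy to observe from Python itself; a slotted instance has no per-instance dict and rejects attributes outside its declared set:

```python
class Plain:
    pass

class Slotted:
    __slots__ = ('x', 'y')

p, s = Plain(), Slotted()
p.anything = 1        # fine: stored in p.__dict__
s.x = 1               # fine: stored in the reserved slot
try:
    s.anything = 1    # no __dict__ and no such slot
except AttributeError as e:
    print("AttributeError:", e)
print(hasattr(p, '__dict__'))  # True
print(hasattr(s, '__dict__'))  # False
```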

All that said, the problem remains that since everything in Python is an object and everything that holds an object just holds a reference, it's sometimes very difficult to draw the line between objects. Two objects can refer to the same bit of data. They may hold the only two references to that data. Getting rid of both objects also gets rid of the data. Do they both own the data? Does only one of them, but if so, which one? Or would you say they own half the data, even though getting rid of one object doesn't release half the data? Weakrefs can make this even more complicated: two objects can refer to the same data, but deleting one of the objects may cause the other object to also get rid of its reference to that data, causing the data to be cleaned up after all.

Fortunately the common case is fairly easy to figure out. There are memory debuggers for Python that do a reasonable job at keeping track of these things, like heapy . And as long as your class (and its baseclasses) is reasonably simple, you can make an educated guess at how much memory it would take up -- especially in large numbers. If you really want to know the exact sizes of your datastructures, consult the CPython source; most builtin types are simple structs described in Include/<type>object.h and implemented in Objects/<type>object.c . The PyObject struct itself is described in Include/object.h . Just keep in mind: it's pointers all the way down; those take up room too.

satoru ,Oct 31, 2010 at 12:43

Thanks very much. In fact, I'm asking this question because I want to know what's stored in memcached when I invoke cache.set(key, obj) ; is it something like a pickled object? – satoru Oct 31 '10 at 12:43

Thomas Wouters ,Oct 31, 2010 at 16:00

Oh, well! That's a completely different question. As I recall (and a quick glance at the source confirms), the memcache module stores pickled versions of the object, yes. It also creates a new pickler for each store, so storing two objects that refer to the same third object means the third object is pickled twice (unless your objects don't pickle that way, of course; you can define pickling exactly how you want.) In other words, the answer to your question is 'len(pickle.dumps(obj))' . – Thomas Wouters Oct 31 '10 at 16:00
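So a rough way to estimate what memcached will store for a value is to measure the pickle yourself (this ignores the key, flags, and any compression the client may apply -- the dict below is just sample data):

```python
import pickle

obj = {"user": "satoru", "tags": ["a", "b", "c"]}

data = pickle.dumps(obj)      # roughly the bytes a memcache client stores
print(len(data), "bytes")

# sanity check: the pickle round-trips back to an equal object
assert pickle.loads(data) == obj
```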

tmthydvnprt ,Mar 13, 2016 at 13:31

For the graphically curious, I once tested and plotted this for multiple builtin types. – tmthydvnprt Mar 13 '16 at 13:31

In the case of a new class instance, getsizeof() returns the size of a reference to PyObject , which is returned by the C function PyInstance_New() .

If you want a list of all the object sizes, check this .

[Dec 05, 2017] python - Problems installing python3 on RHEL - Stack Overflow

Dec 05, 2017 |

gecco ,Nov 13, 2011 at 13:53

It is easy to install it manually:
  1. Download (there may be newer releases on ):
    $ wget
  2. Unzip
    $ tar xf Python-3.* 
    $ cd Python-3.*
  3. Prepare compilation
    $ ./configure
  4. Build
    $ make
  5. Install
    $ make install

    OR if you don't want to overwrite the python executable (safer, at least on some distros yum needs python to be 2.x, such as for RHEL6) - you can install python3.* as a concurrent instance to the system default with an altinstall :

    $ make altinstall

Now if you want an alternative installation directory, you can pass --prefix to the configure command.

Example: for 'installing' Python in /opt/local, just add --prefix=/opt/local .

After the make install step: in order to use your new Python installation, you may still need to add [prefix]/bin to $PATH and [prefix]/lib to $LD_LIBRARY_PATH (depending on the --prefix you passed).
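For example, assuming the build was configured with --prefix=/opt/local (the path here is illustrative), the environment updates would look like this:

```shell
# assuming Python was built with ./configure --prefix=/opt/local
export PATH="/opt/local/bin:$PATH"
export LD_LIBRARY_PATH="/opt/local/lib:${LD_LIBRARY_PATH:-}"
echo "$PATH"
```

To make the change permanent, add the two export lines to your ~/.bashrc or ~/.bash_profile.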

rajadhiraja ,Jul 9, 2012 at 17:58

You used: bzip2 -cd Python-3.2.2.tar.bz2 | tar xvf - This is also a simpler possibility: tar jxvf Python-3.2.2.tar.bz2 – rajadhiraja Jul 9 '12 at 17:58

dannysauer ,Oct 29, 2014 at 21:38

The bzip2 option to tar was -y on some early systems, before bzip2 was "officially" supported, and some systems that don't use GNU tar don't even have bzip2 support built-in (but may have bzip2 binaries). So depending on how portable things need to be, the bunzip2 -c command (or bzip2 -cd ) may be more portable. RHEL6, as in the question, supports -j , so this is moot for the actual question. But for posterity... – dannysauer Oct 29 '14 at 21:38

Caleb ,Jan 8, 2015 at 20:39

I got a 301 (moved) into a 404 when using the bz2 tar. I changed it to .tgz and it downloaded fine. – Caleb Jan 8 '15 at 20:39

bnu ,Jun 3, 2016 at 13:10

If you get no acceptable C compiler found in $PATH when installing python, refer to the question "no acceptable C compiler found in $PATH when installing python". – bnu Jun 3 '16 at 13:10

Searene ,Nov 20, 2016 at 3:44

./configure --with-ensurepip=install to enable pip3 , or you won't have pip3 installed after compilation. – Searene Nov 20 '16 at 3:44

Samuel Phan ,Apr 26, 2014 at 23:30

Installing from RPM is generally better.

Solution 1: Red Hat & EPEL repositories

Red Hat has added Python 3.4 for CentOS 6 and 7 through the EPEL repository.


[EPEL] How to install Python 3.4 on CentOS 6 & 7
sudo yum install -y epel-release
sudo yum install -y python34

# Install pip3
sudo yum install -y python34-setuptools  # install easy_install-3.4
sudo easy_install-3.4 pip

# I guess you would like to install virtualenv or virtualenvwrapper
sudo pip3 install virtualenv
sudo pip3 install virtualenvwrapper

If you want to use pyvenv , you can do the following to install pip3 in your virtualenv:

pyvenv --without-pip my_env
curl | my_env/bin/python

But if you want to have it out-of-the-box, you can add this bash function (alias) in your .bashrc :

pyvenv() { /usr/bin/pyvenv --without-pip $@; for env in $@; do curl | "$env/bin/python"; done; }
Solution 2: IUS Community repositories

The IUS Community provides some up-to-date packages for RHEL & CentOS . The guys behind are from Rackspace, so I think that they are quite trustworthy...

Check the right repo for you here:

[IUS] How to install Python 3.5 on CentOS 6
sudo yum install -y
sudo yum install -y python35u python35u-pip

# I guess you would like to install virtualenv or virtualenvwrapper
sudo pip3.5 install virtualenv
sudo pip3.5 install virtualenvwrapper

Note: you have pyvenv-3.5 available out-of-the-box if you don't want to use virtualenv .

[IUS] How to install Python 3.5 on CentOS 7
sudo yum install -y
sudo yum install -y python35u python35u-pip

# I guess you would like to install virtualenv or virtualenvwrapper
sudo pip3.5 install virtualenv
sudo pip3.5 install virtualenvwrapper

Note: you have pyvenv-3.5 available out-of-the-box if you don't want to use virtualenv .

Samuel Phan ,Jul 3, 2015 at 14:54

Fixed the IUS release package URL. they have updated the version, that's all. If they update the package again, you can check the link to their RPM from the webpage. – Samuel Phan Jul 3 '15 at 14:54

Samuel Phan ,Sep 7, 2015 at 9:01

As I said, the link in your answer contains non-printable unicode characters (U+200C and U+200B), which show up when I copy/paste it into VIM. The URL in my original answer works, I've just tested it. – Samuel Phan Sep 7 '15 at 9:01

Loïc ,Sep 30, 2015 at 13:48

Using this solution, how would you then install pip for python34 ? – Loïc Sep 30 '15 at 13:48

Samuel Phan ,Oct 1, 2015 at 21:11

Very good question, I added a comment for that. It's the best I found. If you want to stick to RPM-based installation, you should use IUS repositories for CentOS 7. They provide a python34u-pip . – Samuel Phan Oct 1 '15 at 21:11

ILMostro_7 ,May 5 at 2:27

easy_install pip3 should work--or a variation of it--to get pip3 installed without needing to curl a specific URL that may or may not be there (anymore). – ILMostro_7 May 5 at 2:27

rsc ,Jul 29, 2012 at 11:15

In addition to gecco's answer I would change step 3 to:

./configure --prefix=/opt/python3

Then after installation you could also:

# ln -s /opt/python3/bin/python3 /usr/bin/python3

This is to ensure that the installation will not conflict with the python installed with yum.

See the explanation I have found on the Internet:

cababunga ,Feb 12, 2013 at 19:45

Why /opt ? /usr/local specifically exists for this purpose and that's where ./configure with no explicit --prefix will place it. – cababunga Feb 12 '13 at 19:45

rsc ,Feb 13, 2013 at 11:27

@cababunga As I wrote, I was influenced by reading the tutorial from the specified site. Nevertheless, installing python the above way may be useful: it is a lot easier to uninstall (it looks like an uninstall target for make is not provided). Also, you could easily install various versions of python3 in separate directories under /opt and manually choose which one to use or test. – rsc Feb 13 '13 at 11:27

Caleb ,Jan 8, 2015 at 21:24

You may also want to set up your PATH to contain the binaries folder. For me it was export PATH=$PATH:/opt/python3/binCaleb Jan 8 '15 at 21:24

Paul Draper ,Jan 30, 2014 at 7:52

Use the SCL repos.
sudo sh -c 'wget -qO- >> /etc/yum.repos.d/scl.repo'
sudo yum install python33
scl enable python27

(This last command will have to be run each time you want to use python27 rather than the system default.)

snacks ,Sep 24, 2014 at 13:23

After reading the redhat docs what I needed to do was either; scl enable python33 bash to launch a new shell which will be enabled for python 3 or scl enable python33 'python' which will run your python file using python 3 in the current shell – snacks Sep 24 '14 at 13:23

Nathan Basanese ,Aug 24, 2015 at 21:46

// , What more generic instructions would also allow the installation of Python 3.4? – Nathan Basanese Aug 24 '15 at 21:46

Florian La Roche ,Feb 3, 2013 at 8:53

You can download source RPMs and binary RPMs for RHEL6 / CentOS6 from here.

This is a backport from the newest Fedora development source rpm to RHEL6 / CentOS6

cababunga ,Feb 12, 2013 at 19:40

That's great. Thanks for your effort, Florian. Maybe running createrepo on those directories would make them even more useful for some people. – cababunga Feb 12 '13 at 19:40

lyomi ,Mar 21, 2014 at 15:18

What a relief. the rpm installed perfectly. – lyomi Mar 21 '14 at 15:18

Nathan Basanese ,Sep 3, 2015 at 20:45

// , How do we make a repository from that link? – Nathan Basanese Sep 3 '15 at 20:45

Nathan Basanese ,Sep 3, 2015 at 21:07

// , I can confirm that this works. Hold on, I just whipped up something quick that used that URL as the baseurl . – Nathan Basanese Sep 3 '15 at 21:07

rkuska ,Jul 16, 2015 at 7:58

Python3 was recently added to EPEL7 as Python34.

There is an ongoing effort to establish packaging guidelines for how to package things for Python3 in EPEL7.


Nathan Basanese ,Aug 24, 2015 at 21:57

// , What's the hold-up? Pip seems like the simple way to go. – Nathan Basanese Aug 24 '15 at 21:57

Mike Guerette ,Aug 27, 2015 at 13:33

Along with Python 2.7 and 3.3, Red Hat Software Collections now includes Python 3.4 - all work on both RHEL 6 and 7.

RHSCL 2.0 docs are at

Plus lot of articles at


Follow these instructions to install Python 3.4 on RHEL 6/7 or CentOS 6/7:
# 1. Install the Software Collections tools:
yum install scl-utils

# 2. Download a package with repository for your system.
#  (See the Yum Repositories on external link. For RHEL/CentOS 6:)
#  or for RHEL/CentOS 7

# 3. Install the repo package (on RHEL you will need to enable optional channel first):
yum install rhscl-rh-python34-*.noarch.rpm

# 4. Install the collection:
yum install rh-python34

# 5. Start using software collections:
scl enable rh-python34 bash

Nathan Basanese ,Dec 10, 2015 at 23:53

// , Doesn't this require us to enable a special shell? Combined with virtualenvs, I can see that becoming a pain in the ass. – Nathan Basanese Dec 10 '15 at 23:53

Nathan Basanese ,Dec 10, 2015 at 23:55

// , Why does this require scl enable rh-python34 bash ? What are the implications for using this later on? – Nathan Basanese Dec 10 '15 at 23:55

Searene ,Nov 20, 2016 at 2:53

Is there a way to install python3.5 on RedHat 6? I tried wget 5/epel-6-x86_64/download/rhscl-rh-python35-epel-6-x86_64.noarch.rpm , but it was not found. – Searene Nov 20 '16 at 2:53

daneel ,Apr 2, 2015 at 14:12

If you want official RHEL packages you can use RHSCL (Red Hat Software Collections)

More details:

You have to have access to Red Hat Customer Portal to read full articles.

Nathan Basanese ,Aug 24, 2015 at 21:55

// , Just upvoted. Would you be willing to make a summary of what one does to use the RHSCL for this? This is a question and answer site, after all. – Nathan Basanese Aug 24 '15 at 21:55

amphibient ,Feb 8 at 17:12

yum install python34.x86_64 works if you have epel-release installed, which this answer explains how to do, and I confirmed it works on RHEL 7.3:
$ cat /etc/*-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"

$ type python3
python3 is hashed (/usr/bin/python3)

Aty ,Feb 11 at 20:47

Here are the steps I followed to install Python3:

yum install wget


sudo tar xvf Python-3.*

cd Python-3.*

sudo ./configure --prefix=/opt/python3

sudo make

sudo make install

sudo ln -s /opt/python3/bin/python3 /usr/bin/python3

$ /usr/bin/python3

Python 3.6.0

Nagev ,Mar 6 at 18:21

Three steps using Python 3.5 by Software Collections :
sudo yum install centos-release-scl
sudo yum install rh-python35
scl enable rh-python35 bash

Note that sudo is not needed for the last command. Now we can see that python 3 is the default for the current shell:

python --version
Python 3.5.1

Simply skip the last command if you'd rather have Python 2 as the default for the current shell.

Maxime Martineau ,May 10 at 18:02

For RHEL on Amazon Linux, using python3 I had to do :

sudo yum install python34-devel

[Dec 05, 2017] How to Install Latest Python 3.6 Version in Linux

Dec 05, 2017 |

Although we can install the core packages and their dependencies using yum and aptitude (or apt-get ), we will explain how to perform the installation from source instead.

Why? The reason is simple: this allows us to have the latest stable release of the language ( 3.6 ) and to provide a distribution-agnostic installation method.

Prior to installing Python in CentOS 7, let's make sure our system has all the necessary development dependencies:

# yum -y groupinstall development
# yum -y install zlib-devel

In Debian we will need to install gcc, make, and the zlib compression / decompression library:

# aptitude -y install gcc make zlib1g-dev

To install Python 3.6 , run the following commands:

# wget
# tar xJf Python-3.6.3.tar.xz
# cd Python-3.6.3
# ./configure
# make
# make install

[Dec 03, 2017] Perl index function equivalent in Python

Notable quotes:
"... string.find(s, sub[, start[, end]]) Return the lowest index in s where the substring sub is found such that sub is wholly contained in s[start:end]. Return -1 on failure. Defaults for start and end and interpretation of negative values is the same as for slices. ..."
Dec 03, 2017 |

Syed Mustafa Zinoor ,Mar 4, 2015 at 15:51

The index() function in Perl returns the location of a text between a start point and an endpoint. Is there something similar in Python? If not, how can it be implemented?

Example: in Perl, I would use the index function to return the index of a string as follows:

start = index(input_text,text_to_search,starting_point_of_search)+off_set_length

What should be the equivalent in Python?

Kasramvd ,Mar 4, 2015 at 15:54

In python you can use str.find() to find the index of a sub-string inside a string :
>>> s
'123string 1abcabcstring 2123string 3abc123stringnabc'

>>> s.find('3a')
35

string.find(s, sub[, start[, end]]) Return the lowest index in s where the substring sub is found such that sub is wholly contained in s[start:end]. Return -1 on failure. Defaults for start and end and interpretation of negative values is the same as for slices.
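Putting this together with the Perl snippet from the question (the variable names are carried over from the question, not from any library API), an equivalent Python expression would be:

```python
# Perl: start = index(input_text, text_to_search, starting_point_of_search)
#               + off_set_length
input_text = "the quick brown fox, the lazy dog"
text_to_search = "the"
starting_point_of_search = 4            # skip past the first "the"
off_set_length = len(text_to_search)

start = input_text.find(text_to_search, starting_point_of_search) + off_set_length
print(start)  # find() returns 21 here, so start is 24

# str.index() behaves the same but raises ValueError instead of returning -1
```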

[Nov 23, 2017] Learning Python, 5th Edition.pdf - Google Drive

Nov 23, 2017 |

Learning Python, Fifth Edition

by Mark Lutz

Copyright © 2013 Mark Lutz. All rights reserved.

Printed in the United States of America.

Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions
are also available for most titles ( ). For more information, contact our
corporate/institutional sales department: 800-998-9938 or

Editor: Rachel Roumeliotis Indexer: Lucie Haskins

Production Editor: Christopher Hearse Cover Designer: Randy Comer

Copyeditor: Rachel Monaghan Interior Designer: David Futato

Proofreader: Julie Van Keuren Illustrator: Rebecca Demarest

June 2013: Fifth Edition.

Revision History for the Fifth Edition:

2013-06-07 First release

See for release details.

[Nov 19, 2017] Think Python by Allen B. Downey

From Amazon review: " This is a wonderfully written book. Having programmed for several decades, I was surprised by how much I enjoyed a introductory programming book. This book blends in concepts of how to solve problems while introducing python. The progression of python was done excellently with non-trivial insightful examples."
Nov 19, 2017 |
  1. Preface
  2. The way of the program
  3. Variables, expressions and statements
  4. Functions
  5. Case study: interface design
  6. Conditionals and recursion
  7. Fruitful functions
  8. Iteration
  9. Strings
  10. Case study: word play
  11. Lists
  12. Dictionaries
  13. Tuples
  14. Case study: data structure selection
  15. Files
  16. Classes and objects
  17. Classes and functions
  18. Classes and methods
  19. Inheritance
  20. The Goodies
  21. Debugging
  22. Analysis of Algorithms
Amazon review:

DrewOJensen, November 29, 2013

Not just for Python beginners

Where was this book when I was taking college programming classes! I have to start off by saying that if you're a beginner in programming, this book is phenomenal. Allen explains the basics very clearly and thoroughly. I'd have to say this book is half about beginner programming and half on Python. As an FYI, this book is good for many basic principles of Python but if you're looking for anything more than just that, I'd recommend Learning Python, 5th Edition by Mark Lutz.

I bought this book for a new job that I took. I minored in CS and wish I would have had this book as my first programming book. I was attracted to it because I needed to learn Python (for work) and all of the guys use the Learning Python for reference. I figured why not start from the beginning and work my way there.

As far as the progression of the book, it moves pretty quickly. You have to stay on your toes with the examples. Having been exposed to a bit of Python before reading, I was able to keep up with the examples just in my head for a little while but as the book moved on, I was doing them in a console. I also think the flow of the book and how Allen moves from topic to topic keeps things cohesive quite well.

Overall, very well executed book and Allen assumes the reader has no experience in programming. Great book!

Update 1/20/14:

After finishing the book I wanted to write a follow up. I have to say that I stand by my initial review and rating! It has been a huge help in getting me up to speed. There are a few specific things that I would like to address.

In regards to the basic principles of Python, this book had done a very good job at balancing what you need to know vs what you can know. It was good to be reminded that this book is a beginner book. I ended up looking up more details and specifics of certain functions and methods mostly because I had specific requirements that I needed to perform with them. This can't be faulted on the author.

As I had mentioned in my first review, if you're looking for more specifics, Learning Python, 5th Edition by Mark Lutz is a great tool. I've borrowed a coworkers copy and will be getting one of my own soon.

I cannot speak on behalf of the database content since I skipped over that section and have no experience doing database/structure.

Otherwise still very good book. I enjoyed being challenged as I read the examples and I like how it wasn't just a "finish what I've shown you" type of examples, but the author said, "Ok, I showed you mostly how to do it this way, and you finished it in another example, now do it a completely different way with what we just discussed."

[Nov 16, 2017] Python coroutines

Notable quotes:
"... coroutine function ..."
"... coroutine object ..."
"... *coros_or_futures ..."
"... return_exceptions=False ..."
"... return_exceptions ..."
"... return_when=ALL_COMPLETED ..."
Nov 16, 2017 |

Coroutines used with asyncio may be implemented using the async def statement, or by using generators . The async def type of coroutine was added in Python 3.5, and is recommended if there is no need to support older Python versions.

Generator-based coroutines should be decorated with @asyncio.coroutine , although this is not strictly enforced. The decorator enables compatibility with async def coroutines, and also serves as documentation. Generator-based coroutines use the yield from syntax introduced in PEP 380 , instead of the original yield syntax.

The word "coroutine", like the word "generator", is used for two different (though related) concepts:

Things a coroutine can do:

  - result = await future or result = yield from future – suspend the coroutine until the future is done, then return the future's result (or raise its exception).
  - result = await coroutine or result = yield from coroutine – wait for another coroutine to produce a result (or raise an exception).
  - return expression – produce a result to the coroutine that is waiting for this one using await or yield from .
  - raise exception – raise an exception in the coroutine that is waiting for this one using await or yield from .

Calling a coroutine does not start its code running – the coroutine object returned by the call doesn't do anything until you schedule its execution. There are two basic ways to start it running: call await coroutine or yield from coroutine from another coroutine (assuming the other coroutine is already running!), or schedule its execution using the ensure_future() function or the AbstractEventLoop.create_task() method.

Coroutines (and tasks) can only run when the event loop is running.

@asyncio.coroutine
Decorator to mark generator-based coroutines. This enables the generator to use yield from to call async def coroutines, and also enables the generator to be called by async def coroutines, for instance using an await expression.

There is no need to decorate async def coroutines themselves.

If the generator is not yielded from before it is destroyed, an error message is logged. See Detect coroutines never scheduled .


In this documentation, some methods are documented as coroutines, even if they are plain Python functions returning a Future . This is intentional, to leave freedom to tweak the implementation of these functions in the future. If such a function needs to be used in callback-style code, wrap its result with ensure_future() .

Example: Hello World coroutine

Example of coroutine displaying "Hello World" :

import asyncio

async def hello_world():
    print("Hello World!")

loop = asyncio.get_event_loop()
# Blocking call which returns when the hello_world() coroutine is done
loop.run_until_complete(hello_world())
loop.close()

See also

The Hello World with call_soon() example uses the AbstractEventLoop.call_soon() method to schedule a callback.

Example: Coroutine displaying the current date

Example of coroutine displaying the current date every second during 5 seconds using the sleep() function:

import asyncio
import datetime

async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
# Blocking call which returns when the display_date() coroutine is done
loop.run_until_complete(display_date(loop))
loop.close()

See also

The display the current date with call_later() example uses a callback with the AbstractEventLoop.call_later() method.

Example: Chain coroutines

Example chaining coroutines:

import asyncio

async def compute(x, y):
    print("Compute %s + %s ..." % (x, y))
    await asyncio.sleep(1.0)
    return x + y

async def print_sum(x, y):
    result = await compute(x, y)
    print("%s + %s = %s" % (x, y, result))

loop = asyncio.get_event_loop()
loop.run_until_complete(print_sum(1, 2))
loop.close()

compute() is chained to print_sum() : print_sum() coroutine waits until compute() is completed before returning its result.

Sequence diagram of the example:


The "Task" is created by the AbstractEventLoop.run_until_complete() method when it gets a coroutine object instead of a task.

The diagram shows the control flow; it does not describe exactly how things work internally. For example, the sleep coroutine creates an internal future which uses AbstractEventLoop.call_later() to wake up the task in 1 second.

InvalidStateError

exception asyncio.InvalidStateError
The operation is not allowed in this state.

TimeoutError

exception asyncio.TimeoutError
The operation exceeded the given deadline.

This exception is different from the builtin TimeoutError exception!

Future

class asyncio.Future(*, loop=None)
This class is almost compatible with concurrent.futures.Future .


This class is not thread safe .

cancel ()
Cancel the future and schedule callbacks.

If the future is already done or cancelled, return False . Otherwise, change the future's state to cancelled, schedule the callbacks and return True .

cancelled ()
Return True if the future was cancelled.
done ()
Return True if the future is done.

Done means either that a result / exception are available, or that the future was cancelled.

result ()
Return the result this future represents.

If the future has been cancelled, raises CancelledError . If the future's result isn't yet available, raises InvalidStateError . If the future is done and has an exception set, this exception is raised.

exception ()
Return the exception that was set on this future.

The exception (or None if no exception was set) is returned only if the future is done. If the future has been cancelled, raises CancelledError . If the future isn't done yet, raises InvalidStateError .

add_done_callback(fn)
Add a callback to be run when the future becomes done.

The callback is called with a single argument - the future object. If the future is already done when this is called, the callback is scheduled with call_soon() .

Use functools.partial to pass parameters to the callback . For example, fut.add_done_callback(functools.partial(print, "Future:", flush=True)) will call print("Future:", fut, flush=True) .

remove_done_callback(fn)
Remove all instances of a callback from the "call when done" list.

Returns the number of callbacks removed.

set_result(result)
Mark the future done and set its result.

If the future is already done when this method is called, raises InvalidStateError .

set_exception(exception)
Mark the future done and set an exception.

If the future is already done when this method is called, raises InvalidStateError .

Example: Future with run_until_complete()

Example combining a Future and a coroutine function :

import asyncio

async def slow_operation(future):
    await asyncio.sleep(1)
    future.set_result('Future is done!')

loop = asyncio.get_event_loop()
future = asyncio.Future()
asyncio.ensure_future(slow_operation(future))
loop.run_until_complete(future)
print(future.result())
loop.close()

The coroutine function is responsible for the computation (which takes 1 second) and it stores the result into the future. The run_until_complete() method waits for the completion of the future.


The run_until_complete() method uses internally the add_done_callback() method to be notified when the future is done.

Example: Future with run_forever()

The previous example can be written differently using the Future.add_done_callback() method to describe explicitly the control flow:

import asyncio

async def slow_operation(future):
    await asyncio.sleep(1)
    future.set_result('Future is done!')

def got_result(future):
    print(future.result())
    loop.stop()

loop = asyncio.get_event_loop()
future = asyncio.Future()
asyncio.ensure_future(slow_operation(future))
future.add_done_callback(got_result)
try:
    loop.run_forever()
finally:
    loop.close()

In this example, the future is used to link slow_operation() to got_result() : when slow_operation() is done, got_result() is called with the result.

Task

class asyncio.Task(coro, *, loop=None)
Schedule the execution of a coroutine : wrap it in a future. A task is a subclass of Future .

A task is responsible for executing a coroutine object in an event loop. If the wrapped coroutine yields from a future, the task suspends the execution of the wrapped coroutine and waits for the completion of the future. When the future is done, the execution of the wrapped coroutine restarts with the result or the exception of the future.

Event loops use cooperative scheduling: an event loop only runs one task at a time. Other tasks may run in parallel if other event loops are running in different threads. While a task waits for the completion of a future, the event loop executes a new task.

The cancellation of a task is different from the cancellation of a future. Calling cancel() will throw a CancelledError to the wrapped coroutine. cancelled() only returns True if the wrapped coroutine did not catch the CancelledError exception, or raised a CancelledError exception.
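The difference is easy to demonstrate. In this sketch (using asyncio.run from Python 3.7+ for brevity; the worker coroutine is made up for illustration), cancel() only requests cancellation -- the task is marked cancelled once the wrapped coroutine actually terminates with CancelledError:

```python
import asyncio

async def worker():
    try:
        await asyncio.sleep(10)        # stands in for a long-running operation
    except asyncio.CancelledError:
        print("worker: cleaning up")
        raise                          # re-raise so the task ends up cancelled

async def main():
    task = asyncio.ensure_future(worker())
    await asyncio.sleep(0)             # give the worker a chance to start
    task.cancel()                      # request cancellation
    print(task.cancelled())            # False: the coroutine hasn't terminated yet
    try:
        await task                     # CancelledError propagates here
    except asyncio.CancelledError:
        pass
    print(task.cancelled())            # True: terminated with CancelledError

asyncio.run(main())
```

Had worker() caught CancelledError without re-raising, the task would instead finish normally and cancelled() would stay False.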

If a pending task is destroyed, the execution of its wrapped coroutine did not complete. It is probably a bug and a warning is logged: see Pending task destroyed .

Don't directly create Task instances: use the ensure_future() function or the AbstractEventLoop.create_task() method.

This class is not thread safe .

classmethod all_tasks(loop=None)
Return a set of all tasks for an event loop.

By default all tasks for the current event loop are returned.

classmethod current_task(loop=None)
Return the currently running task in an event loop or None .

By default the current task for the current event loop is returned.

None is returned when called not in the context of a Task .

cancel ()
Request that this task cancel itself.

This arranges for a CancelledError to be thrown into the wrapped coroutine on the next cycle through the event loop. The coroutine then has a chance to clean up or even deny the request using try/except/finally.

Unlike Future.cancel() , this does not guarantee that the task will be cancelled: the exception might be caught and acted upon, delaying cancellation of the task or preventing cancellation completely. The task may also return a value or raise a different exception.

Immediately after this method is called, cancelled() will not return True (unless the task was already cancelled). A task will be marked as cancelled when the wrapped coroutine terminates with a CancelledError exception (even if cancel() was not called).

get_stack(*, limit=None)
Return the list of stack frames for this task's coroutine.

If the coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames.

The frames are always ordered from oldest to newest.

The optional limit gives the maximum number of frames to return; by default all available frames are returned. Its meaning differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. (This matches the behavior of the traceback module.)

For reasons beyond our control, only one stack frame is returned for a suspended coroutine.

print_stack(*, limit=None, file=None)
Print the stack or traceback for this task's coroutine.

This produces output similar to that of the traceback module, for the frames retrieved by get_stack(). The limit argument is passed to get_stack(). The file argument is an I/O stream to which the output is written; by default output is written to sys.stderr.

Example: Parallel execution of tasks

Example executing 3 tasks (A, B, C) in parallel:

import asyncio

async def factorial(name, number):
    f = 1
    for i in range(2, number+1):
        print("Task %s: Compute factorial(%s)..." % (name, i))
        await asyncio.sleep(1)
        f *= i
    print("Task %s: factorial(%s) = %s" % (name, number, f))

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(
    factorial("A", 2),
    factorial("B", 3),
    factorial("C", 4),
))
loop.close()


Task A: Compute factorial(2)...
Task B: Compute factorial(2)...
Task C: Compute factorial(2)...
Task A: factorial(2) = 2
Task B: Compute factorial(3)...
Task C: Compute factorial(3)...
Task B: factorial(3) = 6
Task C: Compute factorial(4)...
Task C: factorial(4) = 24

A task is automatically scheduled for execution when it is created. The event loop stops when all tasks are done.

Task functions


In the functions below, the optional loop argument allows explicitly setting the event loop object used by the underlying task or coroutine. If it's not provided, the default event loop is used.

asyncio.as_completed(fs, *, loop=None, timeout=None)
Return an iterator whose values, when waited for, are Future instances.

Raises asyncio.TimeoutError if the timeout occurs before all Futures are done.


for f in as_completed(fs):
    result = yield from f  # The 'yield from' may raise
    # Use result


The futures are not necessarily members of fs.

asyncio.ensure_future(coro_or_future, *, loop=None)
Schedule the execution of a coroutine object : wrap it in a future. Return a Task object.

If the argument is a Future , it is returned directly.

New in version 3.4.4. Changed in version 3.5.1: The function accepts any awaitable object.

See also

The AbstractEventLoop.create_task() method.

asyncio.async(coro_or_future, *, loop=None)
A deprecated alias to ensure_future() . Deprecated since version 3.4.4.
asyncio.wrap_future(future, *, loop=None)
Wrap a concurrent.futures.Future object in a Future object.
asyncio. gather *coros_or_futures , loop=None , return_exceptions=False
Return a future aggregating results from the given coroutine objects or futures.

All futures must share the same event loop. If all the tasks are done successfully, the returned future's result is the list of results (in the order of the original sequence, not necessarily the order of results arrival). If return_exceptions is true, exceptions in the tasks are treated the same as successful results, and gathered in the result list; otherwise, the first raised exception will be immediately propagated to the returned future.

Cancellation: if the outer Future is cancelled, all children (that have not completed yet) are also cancelled. If any child is cancelled, this is treated as if it raised CancelledError; the outer Future is not cancelled in this case. (This is to prevent the cancellation of one child from causing other children to be cancelled.)
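A small sketch of the return_exceptions behaviour (the coroutine names are invented; it uses asyncio.run(), added in Python 3.7):

```python
import asyncio

async def ok(n):
    await asyncio.sleep(0)
    return n

async def boom():
    raise ValueError('failed')

async def main():
    # with return_exceptions=True the ValueError is delivered in the
    # result list, in order, instead of propagating immediately
    return await asyncio.gather(ok(1), boom(), ok(3), return_exceptions=True)

results = asyncio.run(main())
print(results[0], results[2])          # 1 3
print(isinstance(results[1], ValueError))  # True
```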

asyncio.iscoroutine(obj)
Return True if obj is a coroutine object, which may be based on a generator or an async def coroutine.
asyncio.iscoroutinefunction(func)
Return True if func is determined to be a coroutine function, which may be a decorated generator function or an async def function.
asyncio.run_coroutine_threadsafe(coro, loop)
Submit a coroutine object to a given event loop.

Return a concurrent.futures.Future to access the result.

This function is meant to be called from a different thread than the one where the event loop is running. Usage:

# Create a coroutine
coro = asyncio.sleep(1, result=3)
# Submit the coroutine to a given loop
future = asyncio.run_coroutine_threadsafe(coro, loop)
# Wait for the result with an optional timeout argument
assert future.result(timeout) == 3

If an exception is raised in the coroutine, the returned future will be notified. It can also be used to cancel the task in the event loop:

try:
    result = future.result(timeout)
except asyncio.TimeoutError:
    print('The coroutine took too long, cancelling the task...')
    future.cancel()
except Exception as exc:
    print('The coroutine raised an exception: {!r}'.format(exc))
else:
    print('The coroutine returned: {!r}'.format(result))

See the concurrency and multithreading section of the documentation.


Unlike other functions from the module, run_coroutine_threadsafe() requires the loop argument to be passed explicitly.

New in version 3.5.1.

coroutine asyncio.sleep(delay, result=None, *, loop=None)
Create a coroutine that completes after a given time (in seconds). If result is provided, it is produced to the caller when the coroutine completes.

The resolution of the sleep depends on the granularity of the event loop .

This function is a coroutine .

asyncio.shield(arg, *, loop=None)
Wait for a future, shielding it from cancellation.

The statement:

res = yield from shield(something())

is exactly equivalent to the statement:

res = yield from something()

except that if the coroutine containing it is cancelled, the task running in something() is not cancelled. From the point of view of something(), the cancellation did not happen. But its caller is still cancelled, so the yield-from expression still raises CancelledError.

Note: If something() is cancelled by other means, this will still cancel shield().

If you want to completely ignore cancellation (not recommended) you can combine shield() with a try/except clause, as follows:

try:
    res = yield from shield(something())
except CancelledError:
    res = None
coroutine asyncio.wait(futures, *, loop=None, timeout=None, return_when=ALL_COMPLETED)
Wait for the Futures and coroutine objects given by the sequence futures to complete. Coroutines will be wrapped in Tasks. Returns two sets of Future: (done, pending).

The sequence futures must not be empty.

timeout can be used to control the maximum number of seconds to wait before returning. timeout can be an int or float. If timeout is not specified or None , there is no limit to the wait time.

return_when indicates when this function should return. It must be one of the following constants of the concurrent.futures module:

Constant Description
FIRST_COMPLETED The function will return when any future finishes or is cancelled.
FIRST_EXCEPTION The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to ALL_COMPLETED .
ALL_COMPLETED The function will return when all futures finish or are cancelled.

This function is a coroutine. Usage:


done, pending = yield from asyncio.wait(fs)


This does not raise asyncio.TimeoutError! Futures that aren't done when the timeout occurs are returned in the second set.
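The return_when constants can be sketched like this (the job names and delays are invented; the sketch uses the asyncio.run() entry point added in Python 3.7):

```python
import asyncio

async def job(name, delay):
    await asyncio.sleep(delay)
    return name

async def main():
    tasks = [asyncio.ensure_future(job('fast', 0.01)),
             asyncio.ensure_future(job('slow', 60))]
    # returns as soon as the first task finishes; the other stays pending
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    # let the cancelled tasks finish shutting down cleanly
    await asyncio.gather(*pending, return_exceptions=True)
    return {t.result() for t in done}

print(asyncio.run(main()))  # {'fast'}
```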

coroutine asyncio.wait_for(fut, timeout, *, loop=None)
Wait for the single Future or coroutine object to complete with timeout. If timeout is None , block until the future completes.

Coroutine will be wrapped in Task .

Returns result of the Future or coroutine. When a timeout occurs, it cancels the task and raises asyncio.TimeoutError . To avoid the task cancellation, wrap it in shield() .

If the wait is cancelled, the future fut is also cancelled.

This function is a coroutine , usage:

result = yield from asyncio.wait_for(fut, 60.0)
Changed in version 3.4.3: If the wait is cancelled, the future fut is now also cancelled.
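The timeout behaviour can be sketched as follows (slow() is an invented placeholder; the sketch uses the modern asyncio.run() entry point):

```python
import asyncio

async def slow():
    await asyncio.sleep(60)

async def main():
    try:
        # wait_for cancels slow() and raises asyncio.TimeoutError
        await asyncio.wait_for(slow(), timeout=0.01)
    except asyncio.TimeoutError:
        return 'timed out'
    return 'finished'

print(asyncio.run(main()))  # timed out
```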

[Nov 16, 2017] Effective Python, Item 40: Consider Coroutines to Run Many Functions Concurrently


Threads give Python programmers a way to run multiple functions seemingly at the same time (see Item 37: "Use Threads for Blocking I/O, Avoid for Parallelism"). But there are three big problems with threads: they require special tools (such as locks and queues) to coordinate safely; each thread consumes a significant amount of memory (on the order of 8 MB of stack); and threads are costly to start.

Python can work around all these issues with coroutines . Coroutines let you have many seemingly simultaneous functions in your Python programs. They're implemented as an extension to generators. The cost of starting a generator coroutine is a function call. Once active, they each use less than 1 KB of memory until they're exhausted.

Coroutines work by enabling the code consuming a generator to send a value back into the generator function after each yield expression. The generator function receives the value passed to the send function as the result of the corresponding yield expression.

def my_coroutine():
    while True:
        received = yield
        print('Received:', received)

it = my_coroutine()
next(it)             # Prime the coroutine
it.send('First')
it.send('Second')

Received: First
Received: Second

The initial call to next is required to prepare the generator for receiving the first send by advancing it to the first yield expression. Together, yield and send provide generators with a standard way to vary their next yielded value in response to external input.

For example, say you want to implement a generator coroutine that yields the minimum value it's been sent so far. Here, the bare yield prepares the coroutine with the initial minimum value sent in from the outside. Then the generator repeatedly yields the new minimum in exchange for the next value to consider.

def minimize():
    current = yield
    while True:
        value = yield current
        current = min(value, current)

The code consuming the generator can run one step at a time and will output the minimum value seen after each input.

it = minimize()
next(it)            # Prime the generator
print(it.send(10))
print(it.send(4))
print(it.send(22))
print(it.send(-1))

10
4
4
-1

The generator function will seemingly run forever, making forward progress with each new call to send . Like threads, coroutines are independent functions that can consume inputs from their environment and produce resulting outputs. The difference is that coroutines pause at each yield expression in the generator function and resume after each call to send from the outside. This is the magical mechanism of coroutines.

This behavior allows the code consuming the generator to take action after each yield expression in the coroutine. The consuming code can use the generator's output values to call other functions and update data structures. Most importantly, it can advance other generator functions until their next yield expressions. By advancing many separate generators in lockstep, they will all seem to be running simultaneously, mimicking the concurrent behavior of Python threads.
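The lockstep idea can be sketched with two plain generator coroutines advanced round-robin; the running_total coroutine here is invented for illustration:

```python
def running_total(name):
    # each send() adds the sent value and yields the new (name, total) pair
    total = 0
    while True:
        value = yield (name, total)
        total += value

coros = [running_total('a'), running_total('b')]
for c in coros:
    next(c)  # prime each coroutine to its first yield

# one "tick" advances every coroutine once; repeated ticks make them
# appear to run simultaneously, like threads
for tick in range(3):
    results = [c.send(1) for c in coros]

print(results)  # [('a', 3), ('b', 3)]
```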

The Game of Life

Let me demonstrate the simultaneous behavior of coroutines with an example. Say you want to use coroutines to implement Conway's Game of Life. The rules of the game are simple. You have a two-dimensional grid of an arbitrary size. Each cell in the grid can either be alive or empty.

ALIVE = '*'
EMPTY = '-'

The game progresses one tick of the clock at a time. At each tick, each cell counts how many of its neighboring eight cells are still alive. Based on its neighbor count, each cell decides if it will keep living, die, or regenerate. Here's an example of a 5×5 Game of Life grid after four generations with time going to the right. I'll explain the specific rules further below.

  0   |   1   |   2   |   3   |   4
----- | ----- | ----- | ----- | -----
-*--- | --*-- | --**- | --*-- | -----
--**- | --**- | -*--- | -*--- | -**--
---*- | --**- | --**- | --*-- | -----
----- | ----- | ----- | ----- | -----

I can model this game by representing each cell as a generator coroutine running in lockstep with all the others.

To implement this, first I need a way to retrieve the status of neighboring cells. I can do this with a coroutine named count_neighbors that works by yielding Query objects. The Query class I define myself. Its purpose is to provide the generator coroutine with a way to ask its surrounding environment for information.

from collections import namedtuple

Query = namedtuple('Query', ('y', 'x'))

The coroutine yields a Query for each neighbor. The result of each yield expression will be the value ALIVE or EMPTY . That's the interface contract I've defined between the coroutine and its consuming code. The count_neighbors generator sees the neighbors' states and returns the count of living neighbors.

def count_neighbors(y, x):
    n_ = yield Query(y + 1, x + 0)  # North
    ne = yield Query(y + 1, x + 1)  # Northeast
    # Define e_, se, s_, sw, w_, nw ...
    # ...
    neighbor_states = [n_, ne, e_, se, s_, sw, w_, nw]
    count = 0
    for state in neighbor_states:
        if state == ALIVE:
            count += 1
    return count

I can drive the count_neighbors coroutine with fake data to test it. Here, I show how Query objects will be yielded for each neighbor. count_neighbors expects to receive cell states corresponding to each Query through the coroutine's send method. The final count is returned in the StopIteration exception that is raised when the generator is exhausted by the return statement.

it = count_neighbors(10, 5)
q1 = next(it)                  # Get the first query
print('First yield: ', q1)
q2 = it.send(ALIVE)            # Send q1 state, get q2
print('Second yield:', q2)
q3 = it.send(ALIVE)            # Send q2 state, get q3
# ...
try:
    count = it.send(EMPTY)     # Send q8 state, retrieve count
except StopIteration as e:
    print('Count: ', e.value)  # Value from return statement
First yield:  Query(y=11, x=5)
Second yield: Query(y=11, x=6)
Count:  2

Now I need the ability to indicate that a cell will transition to a new state in response to the neighbor count that it found from count_neighbors . To do this, I define another coroutine called step_cell . This generator will indicate transitions in a cell's state by yielding Transition objects. This is another class that I define, just like the Query class.

Transition = namedtuple('Transition', ('y', 'x', 'state'))

The step_cell coroutine receives its coordinates in the grid as arguments. It yields a Query to get the initial state of those coordinates. It runs count_neighbors to inspect the cells around it. It runs the game logic to determine what state the cell should have for the next clock tick. Finally, it yields a Transition object to tell the environment the cell's next state.

def game_logic(state, neighbors):
    # ...

def step_cell(y, x):
    state = yield Query(y, x)
    neighbors = yield from count_neighbors(y, x)
    next_state = game_logic(state, neighbors)
    yield Transition(y, x, next_state)

Importantly, the call to count_neighbors uses the yield from expression. This expression allows Python to compose generator coroutines together, making it easy to reuse smaller pieces of functionality and build complex coroutines from simpler ones. When count_neighbors is exhausted, the final value it returns (with the return statement) will be passed to step_cell as the result of the yield from expression.
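The return-value behaviour of yield from can be shown in isolation (inner and outer are illustrative names):

```python
def inner():
    yield 1
    yield 2
    return 'done'   # becomes the value of the yield from expression

def outer():
    # re-yields 1 and 2, then binds inner()'s return value
    result = yield from inner()
    yield result

print(list(outer()))  # [1, 2, 'done']
```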

Now, I can finally define the simple game logic for Conway's Game of Life. There are only three rules.

def game_logic(state, neighbors):
    if state == ALIVE:
        if neighbors < 2:
            return EMPTY     # Die: Too few
        elif neighbors > 3:
            return EMPTY     # Die: Too many
        if neighbors == 3:
            return ALIVE     # Regenerate
    return state

I can drive the step_cell coroutine with fake data to test it.

it = step_cell(10, 5)
q0 = next(it)           # Initial location query
print('Me:      ', q0)
q1 = it.send(ALIVE)     # Send my status, get neighbor query
print('Q1:      ', q1)
# ...
t1 = it.send(EMPTY)     # Send for q8, get game decision
print('Outcome: ', t1)

Me:       Query(y=10, x=5)
Q1:       Query(y=11, x=5)
Outcome:  Transition(y=10, x=5, state='-')

The goal of the game is to run this logic for a whole grid of cells in lockstep. To do this, I can further compose the step_cell coroutine into a simulate coroutine. This coroutine progresses the grid of cells forward by yielding from step_cell many times. After progressing every coordinate, it yields a TICK object to indicate that the current generation of cells have all transitioned.

TICK = object()

def simulate(height, width):
    while True:
        for y in range(height):
            for x in range(width):
                yield from step_cell(y, x)
        yield TICK

What's impressive about simulate is that it's completely disconnected from the surrounding environment. I still haven't defined how the grid is represented in Python objects, how Query , Transition , and TICK values are handled on the outside, nor how the game gets its initial state. But the logic is clear. Each cell will transition by running step_cell . Then the game clock will tick. This will continue forever, as long as the simulate coroutine is advanced.

This is the beauty of coroutines. They help you focus on the logic of what you're trying to accomplish. They decouple your code's instructions for the environment from the implementation that carries out your wishes. This enables you to run coroutines seemingly in parallel. This also allows you to improve the implementation of following those instructions over time without changing the coroutines.

Now, I want to run simulate in a real environment. To do that, I need to represent the state of each cell in the grid. Here, I define a class to contain the grid:

class Grid(object):
    def __init__(self, height, width):
        self.height = height
        self.width = width
        self.rows = []
        for _ in range(self.height):
            self.rows.append([EMPTY] * self.width)

    def __str__(self):
        # ...

The grid allows you to get and set the value of any coordinate. Coordinates that are out of bounds will wrap around, making the grid act like infinite looping space.

    def query(self, y, x):
        return self.rows[y % self.height][x % self.width]

    def assign(self, y, x, state):
        self.rows[y % self.height][x % self.width] = state

At last, I can define the function that interprets the values yielded from simulate and all of its interior coroutines. This function turns the instructions from the coroutines into interactions with the surrounding environment. It progresses the whole grid of cells forward a single step and then returns a new grid containing the next state.

def live_a_generation(grid, sim):
    progeny = Grid(grid.height, grid.width)
    item = next(sim)
    while item is not TICK:
        if isinstance(item, Query):
            state = grid.query(item.y, item.x)
            item = sim.send(state)
        else:  # Must be a Transition
            progeny.assign(item.y, item.x, item.state)
            item = next(sim)
    return progeny

To see this function in action, I need to create a grid and set its initial state. Here, I make a classic shape called a glider.

grid = Grid(5, 9)
grid.assign(0, 3, ALIVE)
# ...


Now I can progress this grid forward one generation at a time. You can see how the glider moves down and to the right on the grid based on the simple rules from the game_logic function.

class ColumnPrinter(object):
    # ...

columns = ColumnPrinter()
sim = simulate(grid.height, grid.width)
for i in range(5):
    grid = live_a_generation(grid, sim)


    0     |     1     |     2     |     3     |     4
---*----- | --------- | --------- | --------- | ---------
----*---- | --*-*---- | ----*---- | ---*----- | ----*----
--***---- | ---**---- | --*-*---- | ----**--- | -----*---
--------- | ---*----- | ---**---- | ---**---- | ---***---
--------- | --------- | --------- | --------- | ---------

The best part about this approach is that I can change the game_logic function without having to update the code that surrounds it. I can change the rules or add larger spheres of influence with the existing mechanics of Query , Transition , and TICK . This demonstrates how coroutines enable the separation of concerns, which is an important design principle.

Coroutines in Python 2

Unfortunately, Python 2 is missing some of the syntactical sugar that makes coroutines so elegant in Python 3. There are two limitations. First, there is no yield from expression. That means that when you want to compose generator coroutines in Python 2, you need to include an additional loop at the delegation point.

# Python 2
def delegated():
    yield 1
    yield 2

def composed():
    yield 'A'
    for value in delegated():  # yield from in Python 3
        yield value
    yield 'B'

print list(composed())

['A', 1, 2, 'B']

The second limitation is that there is no support for the return statement in Python 2 generators. To get the same behavior that interacts correctly with try / except / finally blocks, you need to define your own exception type and raise it when you want to return a value.

# Python 2
class MyReturn(Exception):
    def __init__(self, value):
        self.value = value

def delegated():
    yield 1
    raise MyReturn(2)  # return 2 in Python 3
    yield 'Not reached'

def composed():
    try:
        for value in delegated():
            yield value
    except MyReturn as e:
        output = e.value
    yield output * 4

print list(composed())

[1, 8]
Things to Remember

- Coroutines provide an efficient way to run tens of thousands of functions seemingly at the same time.
- Within a generator, the value of the yield expression will be whatever value was passed to the generator's send method from the exterior code.
- Coroutines give you a tool for separating the core logic of your program from its interaction with the surrounding environment.
- Python 2 doesn't support yield from or returning values from generators.

[Nov 16, 2017] Python generators and coroutines - Stack Overflow

Notable quotes:
"... Edit: I recommend using Greenlet . But if you're interested in a pure Python approach, read on. ..."
"... at a language level ..."
"... To anyone reading this in 2015 or later, the new syntax is 'yield from' ( PEP 380 ) and it allows true coroutines in Python >3.3 ..."

Python generators and coroutines

Giuseppe Maggiore ,May 10, 2011 at 10:25

I am studying coroutines and generators in various programming languages.

I was wondering if there is a cleaner way to combine together two coroutines implemented via generators than yielding back at the caller whatever the callee yields?

Let's say that we are using the following convention: all yields apart from the last one return null, while the last one returns the result of the coroutine. So, for example, we could have a coroutine that invokes another:

def A():
  # yield until a certain condition is met
  yield result

def B():
  # do something that may or may not yield
  x = bind(A())
  # ...
  return result

in this case I wish that through bind (which may or may not be implementable, that's the question) the coroutine B yields whenever A yields until A returns its final result, which is then assigned to x allowing B to continue.

I suspect that the actual code should explicitly iterate A so:

def B():
  # do something that may or may not yield
  for x in A(): ()
  # ...
  return result

which is a tad ugly and error prone...

PS: it's for a game where the users of the language will be the designers who write scripts (script = coroutine). Each character has an associated script, and there are many sub-scripts which are invoked by the main script; consider that, for example, run_ship invokes many times reach_closest_enemy, fight_with_closest_enemy, flee_to_allies, and so on. All these sub-scripts need to be invoked the way you describe above; for a developer this is not a problem, but for a designer the less code they have to write the better!

S.Lott ,May 10, 2011 at 10:38

This is all covered on the Python web site, and numerous blogs cover this. Please Google, read, and then ask specific questions based on what you've read. – S.Lott May 10 '11 at 10:38

S.Lott ,May 10, 2011 at 13:04

I thought the examples clearly demonstrated idiomatic. Since I'm unable to understand what's wrong with the examples, could you state which examples you found to be unclear? Which examples were confusing? Can you be more specific on how all those examples where not able to show idiomatic Python? – S.Lott May 10 '11 at 13:04

Giuseppe Maggiore ,May 10, 2011 at 13:09

I've read precisely those articles, and the PEP-342 leaves me somewhat confused: is it some actual extension that is currently working in Python? Is the Trampoline class shown there part of the standard libraries of the language? BTW, my question was very precise, and it was about the IDIOMATIC way to pass control around coroutines. The fact that I can read about a ton of ways to do so really does not help. Neither does your snarkiness... – Giuseppe Maggiore May 10 '11 at 13:09

Giuseppe Maggiore ,May 10, 2011 at 13:11

Idiomatic is about the "standard" way to perform some function; there is absolutely nothing wrong with iterating the results of a nested coroutine, but there are examples in the literature of programming languages where yielding automatically climbs down the call stack and so you do not need to re-yield at each caller, hence my curiosity if this pattern is covered by sintactic sugar in Python or not! – Giuseppe Maggiore May 10 '11 at 13:11

S.Lott ,May 10, 2011 at 13:19

@Giuseppe Maggiore: "programming languages where yielding automatically climbs down the call stack" That doesn't sound like the same question. Are you asking for idiomatic Python -- as shown by numerous examples -- or are you asking for some other feature that's not shown in the Python examples but is shown in other languages? I'm afraid that I can't understand your question at all. Can you please clarify what you're really looking for? – S.Lott May 10 '11 at 13:19

blubb ,May 10, 2011 at 10:37

Are you looking for something like this?
def B():
    for x in A():
        if x is None:
            yield
        else:
            break
    # continue, x contains value A yielded

Giuseppe Maggiore ,May 10, 2011 at 12:59

Yes, that is what I am doing. My question is if this is the idiomatic way or if there is some syntactic construct that is capable of hiding this pattern which recurs very often in my application. – Giuseppe Maggiore May 10 '11 at 12:59

blubb ,May 10, 2011 at 13:31

@Guiseppe Maggiore: I'm not aware of any such constructs. However, it seems strange that you need this pattern often... I can't think of many valid use cases off the top of my head. If you give more context information, maybe we can propose an alternative solution which is more elegant overall? – blubb May 10 '11 at 13:31

Giuseppe Maggiore ,May 10, 2011 at 15:17

It's for a game where the users of the language will be the designers who write scripts (script = coroutine). Each character has an associated script, and there are many sub-scripts which are invoked by the main script; consider that, for example, run_ship invokes many times reach_closest_enemy, fight_with_closest_enemy, flee_to_allies, and so on. All these sub-scripts need to be invoked the way you describe above; for a developer this is not a problem, but for a designer the less code they have to write the better! – Giuseppe Maggiore May 10 '11 at 15:17

blubb ,May 10, 2011 at 15:57

@Guiseppe Maggiore: I'd propose you add that last comment to the question so that other get a chance of answering it, too... – blubb May 10 '11 at 15:57

Simon Radford ,Nov 11, 2011 at 0:24

Edit: I recommend using Greenlet . But if you're interested in a pure Python approach, read on.

This is addressed in PEP 342 , but it's somewhat tough to understand at first. I'll try to explain simply how it works.

First, let me sum up what I think is the problem you're really trying to solve.


You have a callstack of generator functions calling other generator functions. What you really want is to be able to yield from the generator at the top, and have the yield propagate all the way down the stack.

The problem is that Python does not ( at a language level ) support real coroutines, only generators. (But, they can be implemented.) Real coroutines allow you to halt an entire stack of function calls and switch to a different stack. Generators only allow you to halt a single function. If a generator f() wants to yield, the yield statement has to be in f(), not in another function that f() calls.

The solution that I think you're using now, is to do something like in Simon Stelling's answer (i.e. have f() call g() by yielding all of g()'s results). This is very verbose and ugly, and you're looking for syntax sugar to wrap up that pattern. Note that this essentially unwinds the stack every time you yield, and then winds it back up again afterwards.


There is a better way to solve this problem. You basically implement coroutines by running your generators on top of a "trampoline" system.

To make this work, you need to follow a couple patterns: 1. When you want to call another coroutine, yield it. 2. Instead of returning a value, yield it.


So instead of writing this:

def f():
    result = g()
    return return_value


you write this:

def f():
    result = yield g()
    yield return_value

Say you're in f(). The trampoline system called f(). When you yield a generator (say g()), the trampoline system calls g() on your behalf. Then when g() has finished yielding values, the trampoline system restarts f(). This means that you're not actually using the Python stack; the trampoline system manages a callstack instead.

When you yield something other than a generator, the trampoline system treats it as a return value. It passes that value back to the caller generator through the yield statement (using .send() method of generators).
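A minimal sketch of such a trampoline, under the protocol the answer describes (yield a generator to "call" it, yield a plain value to "return" it); all names here are invented for illustration:

```python
import types

def trampoline(task):
    # Drive a stack of generator "coroutines" without using the Python
    # call stack. Every generator must yield its return value.
    stack = []    # suspended caller generators
    value = None  # value to send into the current generator
    while True:
        result = task.send(value)
        if isinstance(result, types.GeneratorType):
            stack.append(task)       # "call": suspend caller, run callee
            task, value = result, None
        elif stack:
            task = stack.pop()       # "return": resume caller with value
            value = result
        else:
            return result            # top-level generator returned

def add_one(n):
    yield n + 1                      # acts like "return n + 1"

def double(n):
    incremented = yield add_one(n)   # "call" add_one via the trampoline
    yield incremented * 2            # "return incremented * 2"

print(trampoline(double(10)))  # 22
```

The same scheme generalizes to arbitrarily deep call stacks, since every caller is parked on the trampoline's own stack list rather than the interpreter's.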


This kind of system is extremely important and useful in asynchronous applications, like those using Tornado or Twisted. You can halt an entire callstack when it's blocked, go do something else, and then come back and continue execution of the first callstack where it left off.

The drawback of the above solution is that it requires you to write essentially all your functions as generators. It may be better to use an implementation of true coroutines for Python - see below.


There are several implementations of coroutines for Python, see:

Greenlet is an excellent choice. It is a Python module that modifies the CPython interpreter to allow true coroutines by swapping out the callstack.

Python 3.3 should provide syntax for delegating to a subgenerator, see PEP 380 .

gaborous ,Nov 9, 2012 at 10:04

Very useful and clear answer, thanks! However, when you say that standard Python coroutines essentially require to write all functions as generators, did you mean only first level functions or really all functions? As you said above, when yielding something other than a generator, the trampoline system still works, so theoretically we can just yield at the first-layer functions any other functions that may or may not be generators themselves. Am I right? – gaborous Nov 9 '12 at 10:04

Simon Radford ,Nov 21, 2012 at 21:37

All "functions" between the trampoline system and a yield must be written as generators. You can call regular functions normally, but then you can't effectively "yield" from that function or any functions it calls. Does that make sense / answer your question? – Simon Radford Nov 21 '12 at 21:37

Simon Radford ,Nov 21, 2012 at 21:39

I highly recommend using Greenlet - it's a true implementation of coroutines for Python, and you don't have to use any of these patterns I've described. The trampoline stuff is for people who are interested in how you can do it in pure Python. – Simon Radford Nov 21 '12 at 21:39

Nick Sweeting ,Jun 7, 2015 at 22:12

To anyone reading this in 2015 or later, the new syntax is 'yield from' ( PEP 380 ) and it allows true coroutines in Python >3.3 . – Nick Sweeting Jun 7 '15 at 22:12

[Nov 14, 2017] Masterminds of Programming: Conversations with the Creators of Major Programming Languages

Notable quotes:
"... What differences are there between developing a programming language and developing a "common" software project? ..."
"... How do you debug a language? ..."
"... How do you decide when a feature should go in a library as an extension or when it needs to have support from the core language? ..."
"... I suppose there are probably features that you've looked at that you couldn't implement in Python other than by changing the language, but you probably rejected them. What criteria do you use to say this is something that's Pythonic, this is something that's not Pythonic? ..."
"... You have the "Zen of Python," but beyond that? ..."
"... Sounds almost like it's a matter of taste as much as anything ..."
"... There's an argument to make for parsimony there, but very much in the context of personal taste ..."
"... How did the Python Enhancement Proposal (PEP) process come about? ..."
"... Do you find that adding a little bit of formalism really helps crystallize the design decisions around Python enhancements? ..."
"... Do they lead to a consensus where someone can ask you to weigh in on a single particular crystallized set of expectations and proposals? ..."
"... What creates the need for a new major version? ..."
"... How did you choose to handle numbers as arbitrary precision integers (with all the cool advantages you get) instead of the old (and super common) approach to pass it to the hardware? ..."
"... Why do you call it a radical step? ..."
"... How did you adopt the "there should be one -- and preferably only one -- obvious way to do it" philosophy? ..."
"... What is your take on static versus dynamic typing? ..."
"... Are we moving toward hybrid typing? ..."
"... Why did you choose to support multiple paradigms? ..."
"... When you created the language, did you consider the type of programmers it might have attracted? ..."
"... How do you balance the different needs of a language that should be easy to learn for novices versus a language that should be powerful enough for experienced programmers to do useful things? Is that a false dichotomy? ..."

The Pythonic Way

What differences are there between developing a programming language and developing a "common" software project?

Guido van Rossum : More than with most software projects, your most important users are programmers themselves. This gives a language project a high level of "meta" content. In the dependency tree of software projects, programming

How do you debug a language?

Guido : You don't. Language design is one area where agile development methodologies just don't make sense -- until the language is stable, few people want to use it, and you won't find the bugs in the language definition until you have so many users that it's too late to change things.

Of course there's plenty in the implementation that can be debugged like any old program, but the language design itself pretty much requires careful design up front, because the cost of bugs is so exorbitant.

How do you decide when a feature should go in a library as an extension or when it needs to have support from the core language?

Guido : Historically, I've had a pretty good answer for that. One thing I noticed very early on was that everybody wants their favorite feature added to the language, and most people are relatively inexperienced about language design. Everybody is always proposing "let's add this to the language," "let's have a statement that does X." In many cases, the answer is, "Well, you can already do X or something almost like X by writing these two or three lines of code, and it's not all that difficult." You can use a dictionary, or you can combine a list and a tuple and a regular expression, or write a little metaclass -- all of those things. I may even have had the original version of this answer from Linus, who seems to have a similar philosophy.
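A concrete instance of this answer, sketched here for illustration: the perennial request for a switch statement, which a plain dictionary already covers in a couple of lines (the handler names below are hypothetical):

```python
def handle_get():
    return "GET handled"

def handle_post():
    return "POST handled"

# "You can already do X": instead of a new switch statement,
# a dictionary maps keys to functions, and .get() supplies a default.
handlers = {"GET": handle_get, "POST": handle_post}

def dispatch(method):
    return handlers.get(method, lambda: "unsupported")()

print(dispatch("GET"))   # GET handled
print(dispatch("PUT"))   # unsupported
```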

Telling people "you can already do that, and here is how" is the first line of defense. The second thing is, "Well, that's a useful thing and we can probably write or you can probably write your own module or class, and encapsulate that particular bit of abstraction." Then the next line of defense is, "OK, this looks so interesting and useful that we'll actually accept it as a new addition to the standard library, and it's going to be pure Python." And then, finally, there are things that just aren't easy to do in pure Python and we'll suggest or recommend how to turn them into a C extension. The C extensions are the last line of defense before we have to admit, "Well, yeah, this is so useful and you really cannot do this, so we'll have to change the language."

There are other criteria that determine whether it makes more sense to add something to the language or it makes more sense to add something to the library, because if it has to do with the semantics of namespaces or that kind of stuff, there's really nothing you can do besides changing the language. On the other hand, the extension mechanism was made powerful enough that there is an amazing amount of stuff you can do from C code that extends the library and possibly even adds new built-in functionality without actually changing the language. The parser doesn't change. The parse tree doesn't change. The documentation for the language doesn't change. All your tools still work, and yet you have added new functionality to your system.

I suppose there are probably features that you've looked at that you couldn't implement in Python other than by changing the language, but you probably rejected them. What criteria do you use to say this is something that's Pythonic, this is something that's not Pythonic?

Guido : That's much harder. That is probably, in many cases, more a matter of a gut feeling than anything else. People use the word Pythonic and "that is Pythonic" a lot, but nobody can give you a watertight definition of what it means for something to be Pythonic or un-Pythonic.

You have the "Zen of Python," but beyond that?

Guido : That requires a lot of interpretation, like every good holy book. When I see a good or a bad proposal, I can tell if it is a good or bad proposal, but it's really hard to write a set of rules that will help someone else to distinguish good language change proposals from bad change proposals.

Sounds almost like it's a matter of taste as much as anything

Guido : Well, the first thing is always try to say "no," and see if they go away or find a way to get their itch scratched without changing the language. It's remarkable how often that works. That's more of an operational definition of "it's not necessary to change the language."

If you keep the language constant, people will still find a way to do what they need to do. Beyond that it's often a matter of use cases coming from different areas where there is nothing application-specific. If something was really cool for the Web, that would not make it a good feature to add to the language. If something was really good for writing shorter functions or writing classes that are more maintainable, that might be a good thing to add to the language. It really needs to transcend application domains in general, and make things simpler or more elegant.

When you change the language, you affect everyone. There's no feature that you can hide so well that most people don't need to know about. Sooner or later, people will encounter code written by someone else that uses it, or they'll encounter some obscure corner case where they have to learn about it because things don't work the way they expected.

Often elegance is also in the eye of the beholder. We had a recent discussion on one of the Python lists where people were arguing forcefully that using dollar instead of self-dot was much more elegant. I think their definition of elegance was number of keystrokes.

There's an argument to make for parsimony there, but very much in the context of personal taste

Guido : Elegance and simplicity and generality all are things that, to a large extent, depend on personal taste, because what seems to cover a larger area for me may not cover enough for someone else, and vice versa.

How did the Python Enhancement Proposal (PEP) process come about?

Guido : That's a very interesting historical tidbit. I think it was mostly started and championed by Barry Warsaw, one of the core developers. He and I started working together in '95, and I think around 2000, he came up with the suggestion that we needed more of a formal process around language changes.

I tend to be slow in these things. I mean I wasn't the person who discovered that we really needed a mailing list. I wasn't the person who discovered that the mailing list got unwieldy and we needed a newsgroup. I wasn't the person to propose that we needed a website. I was also not the person to propose that we needed a process for discussing and inventing language changes, and making sure to avoid the occasional mistake where things had been proposed and quickly accepted without thinking through all of the consequences.

At the time between 1995 and 2000, Barry, myself, and a few other core developers, Fred Drake, Ken Manheimer for a while, were all at CNRI, and one of the things that CNRI did was organize the IETF meetings. CNRI had this little branch that eventually split off that was a conference organizing bureau, and their only customer was the IETF. They later also did the Python conferences for a while, actually. Because of that it was a pretty easy boondoggle to attend IETF meetings even if they weren't local. I certainly got a taste of the IETF process with its RFCs and its meeting groups and stages, and Barry also got a taste of that. When he proposed to do something similar for Python, that was an easy argument to make. We consciously decided that we wouldn't make it quite as heavy-handed as the IETF RFCs had become by then, because Internet standards, at least some of them, affect way more industries and people and software than a Python change, but we definitely modeled it after that. Barry is a genius at coming up with good names, so I am pretty sure that PEP was his idea.

We were one of the first open source projects at the time to have something like this, and it's been relatively widely copied. The Tcl/Tk community basically changed the title and used exactly the same defining document and process, and other projects have done similar things.

Do you find that adding a little bit of formalism really helps crystallize the design decisions around Python enhancements?

Guido : I think it became necessary as the community grew and I wasn't necessarily able to judge every proposal on its value by itself. It has really been helpful for me to let other people argue over various details, and then come up with relatively clear-cut conclusions.

Do they lead to a consensus where someone can ask you to weigh in on a single particular crystallized set of expectations and proposals?

Guido : Yes. It often works in a way where I initially give a PEP a thumbs-up in the sense that I say, "It looks like we have a problem here. Let's see if someone figures out what the right solution is." Often they come out with a bunch of clear conclusions on how the problem should be solved and also a bunch of open issues. Sometimes my gut feelings can help close the open issues. I'm very active in the PEP process when it's an area that I'm excited about -- if we had to add a

What creates the need for a new major version?

Guido : It depends on your definition of major. In Python, we generally consider releases like 2.4, 2.5, and 2.6 "major" events, which only happen every 18–24 months. These are the only occasions where we can introduce new features. Long ago, releases were done at the whim of the developers (me, in particular). Early this decade, however, the users requested some predictability -- they objected against features being added or changed in "minor" revisions (e.g., 1.5.2 added major features compared to 1.5.1), and they wished the major releases to be supported for a certain minimum amount of time (18 months). So now we have more or less time-based major releases: we plan the series of dates leading up to a major release (e.g., when alpha and beta versions and release candidates are issued) long in advance, based on things like release manager availability, and we urge the developers to get their changes in well in advance of the final release date.

Features selected for addition to releases are generally agreed upon by the core developers, after (sometimes long) discussions on the merits of the feature and its precise specification. This is the PEP process: Python Enhancement Proposal, a document-based process not unlike the IETF's RFC process or the Java world's JSR process, except that we aren't quite as formal, as we have a much smaller community of developers. In case of prolonged disagreement (either on the merits of a feature or on specific details), I may end up breaking a tie; my tie-breaking algorithm is mostly intuitive, since by the time it is invoked, rational argument has long gone out of the window.

The most contentious discussions are typically about user-visible language features; library additions are usually easy (as they don't harm users who don't care), and internal improvements are not really considered features, although they are constrained by pretty stringent backward compatibility at the C API level.

Since the developers are typically the most vocal users, I can't really tell whether

There's also the concept of a radically major or breakthrough version, like 3.0. Historically, 1.0 was evolutionarily close to 0.9, and 2.0 was also a relatively small step from 1.6. From now on, with the much larger user base, such versions are rare indeed, and provide the only occasion for being truly incompatible with previous versions. Major versions are made backward compatible with previous major versions with a specific mechanism available for deprecating features slated for removal.

How did you choose to handle numbers as arbitrary precision integers (with all the cool advantages you get) instead of the old (and super common) approach to pass it to the hardware?

Guido : I originally inherited this idea from Python's predecessor, ABC. ABC used arbitrary precision rationals, but I didn't like the rationals that much, so I switched to integers; for reals, Python uses the standard floating-point representation supported by the hardware (and so did ABC, with some prodding).

Originally Python had two types of integers: the customary 32-bit variety ("int") and a separate arbitrary precision variety ("long"). Many languages do this, but the arbitrary precision variety is relegated to a library, like Bignum in Java and Perl, or GNU MP for C.

Previously, an int operation whose result fell outside the 32-bit range would raise an OverflowError exception. There was once a time when the result would silently be truncated, but I changed it to raising an exception before ever letting others use the language. In early 1990, I wasted an afternoon debugging a short demo program I'd written implementing an algorithm that made non-obvious use of very large integers. Such debugging sessions are seminal experiences.

However, there were still certain cases where the two number types behaved slightly differently; for example, printing an int in hexadecimal or octal format would produce an unsigned outcome (e.g., –1 would be printed as FFFFFFFF), while doing the same on the mathematically equal long would produce a signed outcome (–1, in this case). In Python 3.0, we're taking the radical step of supporting only a single integer type; we're calling it int , but the implementation is largely that of the old long type.
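The unified behavior is easy to see in Python 3 itself; a short sketch:

```python
# Python 3 has a single arbitrary-precision int type:
big = 2 ** 100            # no overflow, no separate "long" type
print(big)                # 1267650600228229401496703205376

# Hexadecimal formatting is now consistently signed:
print(hex(-1))            # -0x1

# The old unsigned-style output can still be produced by
# masking to a fixed width explicitly:
print(hex(-1 & 0xFFFFFFFF))   # 0xffffffff
```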

Why do you call it a radical step?

Guido : Mostly because it's a big deviation from current practice in Python. There was a lot of discussion about this, and people proposed various alternatives where two (or more) representations would be used internally, but completely or mostly hidden from end users (but not from C extension writers). That might perform a bit better, but in the end it was already a massive amount of work, and having two representations internally would just increase the effort of getting it right, and make interfacing to it from C

How did you adopt the "there should be one -- and preferably only one -- obvious way to do it" philosophy?

Guido : This was probably subconscious at first; when Tim Peters wrote down the "Zen of Python," it became explicit. I also like the mathematical definition of having one way (or one true way) to express something. For example, the XYZ coordinates of any point in 3D space are uniquely determined, once you've picked an origin and three basis vectors.

I also like to think that I'm doing most users a favor by not requiring them to choose between similar alternatives. You can contrast this with Java, where if you need a listlike data structure, the standard library offers many versions (a linked list, or an array list, and others), or C, where you have to decide how to implement your own list data type.

What is your take on static versus dynamic typing?

Guido : I wish I could say something simple like "

In some situations the verbosity of Java is considered a plus; it has enabled the creation of powerful code-browsing tools that can answer questions like "where is this variable changed?" or "who calls this method?" Dynamic languages make answering such questions harder, because it's often hard to find out the type of a method argument without analyzing every path through the entire codebase. I'm not sure how functional languages like Haskell support such tools; it could well be that you'd have to use essentially the same technique as for dynamic languages, since that's what type inferencing does anyway -- in my limited understanding!

Are we moving toward hybrid typing?

Guido : I expect there's a lot to say for some kind of hybrid. I've noticed that most large systems written in a statically typed language actually contain a significant subset that is essentially dynamically typed. For example, GUI widget sets and database APIs for Java often feel like they are fighting the static typing every step of the way, moving most correctness checks to runtime.

A hybrid language with functional and dynamic aspects might be quite interesting. I should add that despite Python's support for some functional tools like map() and lambda , Python does not have a functional-language subset: there is no type inferencing, and no opportunity for parallelization.

Why did you choose to support multiple paradigms?

Guido : I didn't really; Python supports procedural programming, to some extent, and OO. These two aren't so different, and Python's procedural style is still strongly influenced by objects (since the fundamental data types are all objects). Python supports a tiny bit of functional programming -- but it doesn't resemble any real functional language, and it never will. Functional languages are all about doing as much as possible at compile time -- the "functional" aspect means that the compiler can optimize things under a very strong guarantee that there are no side effects, unless explicitly declared. Python is about having the simplest, dumbest compiler imaginable, and the official runtime semantics actively discourage cleverness in the compiler like parallelizing loops or turning recursion into loops.

Python probably has the reputation of supporting functional programming based on the inclusion of lambda , map , filter , and reduce in the language, but in my eyes these are just syntactic sugar, and not the fundamental building blocks that they are in functional languages. The more fundamental property that Python shares with Lisp (not a functional language either!) is that functions are first-class objects, and can be passed around like any other object. This, combined with nested scopes and a generally Lisp-like approach to function state, makes it possible to easily implement concepts that superficially resemble concepts from functional languages, like currying, map, and reduce. The primitive operations necessary to implement those concepts are already there. You can write reduce() in a few lines of Python. Not so in a functional language.
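Indeed, reduce() really can be written in a few lines of pure Python; a sketch (my_reduce is an illustrative name, shown next to the standard-library version for comparison):

```python
from functools import reduce  # the library version, for comparison

def my_reduce(function, iterable, initializer=None):
    """A few-line pure-Python reduce(), as described above."""
    it = iter(iterable)
    # Without an initializer, the first element seeds the accumulator.
    value = next(it) if initializer is None else initializer
    for element in it:
        value = function(value, element)
    return value

print(my_reduce(lambda a, b: a + b, [1, 2, 3, 4]))  # 10
print(reduce(lambda a, b: a + b, [1, 2, 3, 4]))     # 10
```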

When you created the language, did you consider the type of programmers it might have attracted?

Guido : Yes, but I probably didn't have enough imagination. I was thinking of professional programmers in a Unix or Unix-like environment. Early versions of the Python tutorial used a slogan something like "Python bridges the gap between C and shell programming," because that was where I was myself, and the people immediately around me. It never occurred to me that Python would be a

The fact that it was useful for teaching first principles of

How do you balance the different needs of a language that should be easy to learn for novices versus a language that should be powerful enough for experienced programmers to do useful things? Is that a false dichotomy?

Guido : Balance is the word. There are some well-known traps to avoid, like stuff that is thought to help novices but annoys

[Nov 09, 2017] Conversion of Perl to Python

Nov 09, 2017

I think you should rewrite your code. The quality of the results of a parsing effort depends on your Perl coding style. I think the quote below sums up the theoretical side very well. From Wikipedia's article on Perl:

Perl has a Turing-complete grammar because parsing can be affected by run-time code executed during the compile phase.[25] Therefore, Perl cannot be parsed by a straight Lex/Yacc lexer/parser combination. Instead, the interpreter implements its own lexer, which coordinates with a modified GNU bison parser to resolve ambiguities in the language.

It is often said that "Only perl can parse Perl," meaning that only the Perl interpreter (perl) can parse the Perl language (Perl), but even this is not, in general, true. Because the Perl interpreter can simulate a Turing machine during its compile phase, it would need to decide the Halting Problem in order to complete parsing in every case. It's a long-standing result that the Halting Problem is undecidable, and therefore not even Perl can always parse Perl. Perl makes the unusual choice of giving the user access to its full programming power in its own compile phase. The cost in terms of theoretical purity is high, but practical inconvenience seems to be rare.

Other programs that undertake to parse Perl, such as source-code analyzers and auto-indenters, have to contend not only with ambiguous syntactic constructs but also with the undecidability of Perl parsing in the general case. Adam Kennedy's PPI project focused on parsing Perl code as a document (retaining its integrity as a document), instead of parsing Perl as executable code (which not even Perl itself can always do). It was Kennedy who first conjectured that, "parsing Perl suffers from the 'Halting Problem'."[26], and this was later proved.[27]

Starting in 5.10, you can compile perl with the experimental Misc Attribute Decoration enabled and set the PERL_XMLDUMP environment variable to a filename to get an XML dump of the parse tree (including comments - very helpful for language translators). Though as the doc says, this is a work in progress.

Looking at the PLEAC stuff, what we have here is a case of a rote translation of a technique from one language causing another to look bad. For example, it's rare in Perl to work character-by-character. Why? For one, it's a pain in the ass. A fair cop. For another, you can usually do it faster and easier with a regex. One can reverse the OP's statement and say "in Perl, regexes are so easy that most of the time other string manipulation is not needed". Anyhow, the OP's sentiment is correct. You do things differently in Perl than in Python, so a rote translator would produce nasty code. – Schwern Apr 8 '10 at 11:47
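The regex-versus-character-by-character point holds in Python just as well; a small illustrative sketch (the sample string is hypothetical):

```python
import re

text = "user=alice; id=42; active=yes"

# Character-by-character scanning to pull out the digits -- tedious:
digits = ""
for ch in text:
    if ch.isdigit():
        digits += ch
print(digits)          # 42

# The same extraction with a regex is shorter and more explicit
# about *which* digits we want:
match = re.search(r"id=(\d+)", text)
print(match.group(1))  # 42
```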

Converting would require writing a Perl parser, semantic checker, and Python code generator.

Not practical. Perl parsers are hard enough for the Perl teams to get right. You'd be better off translating Perl to Python from the Perl AST (opcodes) using the Perl Opcode or related modules.

Some notations do not map from Perl to Python without some work. Perl's closures are different, for example. So is its regex support.

In short, either convert it by hand, or use some integration modules to call Python from Perl or vice-versa

[Nov 07, 2017] Is PyCharm good - Quora

Nov 07, 2017

Cody Jackson , Python book author, Answered Sep 11

I stumbled upon PyCharm a few years ago when my editor of choice (Stani's Python Editor) was no longer maintained. I haven't looked back.

I used the community edition for many years then decided to purchase a copy. While I don't necessarily need all the functionality of the paid version, I want to support the company in their work.

The PEP 8 notifications are nice to have. While PEP 8 is more of a guideline, it certainly helps ensure code looks nice and is easy to work with.

What's better, IMO, is the ability to load anything you want without having to explicitly download it. Import a module that isn't already on your system? PyCharm will let you know and offer to download it for you. Very handy.

I used to use GitKraken for GitHub work but the built-in VCS tools in PyCharm are just as easy to use, so I haven't bothered to download GitKraken for several months now. PyCharm highlights your modified files using color codes, so you know what you have updated, what's new, etc. so you know exactly what is going to be added in your next push. It also shows you what has changed between the different files using diff, which is handy.

PyCharm has built-in support for many different frameworks, the paid version obviously having more support. However, the free version includes Django, HTML, CSS, and JavaScript, which is sufficient for most people.

While the paid version has changed from a perpetual license to a subscription model, the monthly cost is only $8 per month for an individual, with certain discounts available.

Overall, PyCharm is the best proprietary Python editor and, unless you prefer completely FOSS software, there is no reason not to use it.

Yosef Dishinger , I dream in Python Answered Sep 28

The other answers have already said most of it, but I would just add that the search and code discovery features of PyCharm are superior to anything else I've used.

I work on a pretty large codebase, and with PyCharm you can search throughout the entire project, or even multiple projects, for a given string. Now it's true that other editors also have this feature, but PyCharm adds something here that other editors don't.

It lets you edit the code where the reference was found, in a panel within the search results window, and simply go through each search result one by one and see and modify the code in each section as you go, without needing to open the different files on their own.

At times when I've needed to do major refactoring this has been a lifesaver. It increased my productivity dramatically.

There are a lot of really nice editors out there, but I haven't come across anything like PyCharm for taming large codebases.

Edward Moseley , Python for programming, R for stats, C/C++ for microcontrollers Answered Aug 27 2016

I'm very much in agreement with User-9321510923064044481

If you begin to use a library that you don't have installed, PyCharm will let you know and makes the installation process really seamless. RStudio could actually probably take a page out of PyCharm's playbook, there.

I use the integrated python console very frequently for prototyping.

There's also this "Tip of the day" popup that I always mean to shut off, but sometimes the tips are actually good.

This may be nit-picky, but I especially agree that I don't use the integrated VCS , and until they find a more elegant way to integrate it I will stick to git on my command line.

[Nov 07, 2017] How to use Python interactively in PyCharm

Nov 07, 2017

Tony Flury , Freelance s/w developer Answered Apr 2

PyCharm, when it starts, will also open a Python console as part of the project window. Look along the bottom, where you will see tabs such as Console and Terminal.

PyCharm also offers integration with Jupyter notebooks, but I haven't tried that feature yet.

Zdenko Hrcek , enjoying programming in Python Answered Apr 2

In main menu under Tools there is "Python console" option


[Nov 07, 2017] Should I use PyCharm for programming Python

Nov 07, 2017

AP Rajshekhar , Knows Java, Python, Ruby, Go; dabbled in Qt and GTK# Answered Sep 24, 2016

As with any other language, you do not need an IDE (which is what PyCharm is). However, it has been my experience that having an IDE improves productivity, and the same is true with PyCharm.

If you are developing small applications that do not need git integration or PEP 8 conformance checks, then you don't need PyCharm. However, if you need any of the above, and do not want to juggle multiple tools (flake8, git-cli/git-cola) manually, then PyCharm is a good choice: it provides all of that, along with autocomplete, from within the IDE.

So, PyCharm improves your productivity quite a bit.

Dominic Christoph , Met cofounders at a local meetup Updated Apr 5

It's obviously not necessary, and there are other free editors and IDEs. But in my experience, it is the best option.

I've used both Vim and Emacs and played with Sublime and Atom a bit. Those four editors allow you to highly customize your programming environment, which some feel is a necessity.

They're all great, but you will miss out on some features that no one (that I know of; if you do, please share) has been able to properly recreate in a regular editor. Mainly, intelligent code navigation and completion. These are the most useful features that I've used, and PyCharm does them **almost** perfectly.

You'll spend much more time navigating code than typing it, so it's very helpful to be able to hit a keyboard shortcut and jump to a variable or method's definition/declaration. When you are typing, the intelligent autocomplete will be a big help as well. It's much more usable than the completion engines in text editors because it only offers completions that are in scope. There are also Ctags and Gtags available for text editors, but they are harder to use, must be customized for every language, and work poorly with any medium-to-large project. Though YMMV.

When it comes down to it, I prefer having features that work really well than the ability to customize. Download the community edition and see for yourself if it works for you. Especially for a beginner, it will save you the time of learning tools, which isn't as important as learning the language, because the UI is self-explanatory.


I would find it unusable without the IdeaVim plugin. The keybindings of Vim are just too good to give up.

I should also mention that Jetbrains IDEs are very customizable themselves. The IdeaVim plugin even has a dotfile.

You'll also find videos on YouTube where programmers try to discourage others from using them because of the distracting number of panes. Though it has a distraction-free mode, and even without that, if you use it sensibly, you can have it display only the editor and tabs.

Pandu Poluan , programmed in Python for nearly a year, to replace complex bash scripts. Answered Mar 24

You don't *have* to use PyCharm, but its features are so good *I* find it essential for Python development.

Things I can't live without:

There are many more PyCharm features, but all the above make PyCharm for me a must-have for Python development.

[Nov 07, 2017] Customer reviews Python Cookbook, Third edition

Notable quotes:
"... There are a couple of quick final points to make about the Python cookbook. Firstly it uses Python 3, and as many very useful third-party modules haven't been ported from Python 2.X over to Python 3 yet, Python 2.X is probably still more widely used. ..."
"... this is a language-learning book; it's not aimed at the novice programmer. Think of it more as language nuances and inflections for the experienced Pythonista rather than a how-to-learn-Python book and you won't go far wrong. ..."
"... Most examples are self contained and all the code examples that I tried worked. Additionally, there is a GitHub that the authors created which provides all the code for the examples if you do not want type it yourself. The examples themselves were applied to real world problems; I could see how the recipe was used clearly. When the authors felt they could not provide an entire solution in the text, they point the correct place to visit online. ..."
"... But that's only the beginning. It's hard to describe the pleasure of reading some of the solutions in the Iterators and Generators section, for instance. Actually, I take that back. The pleasure is the same kind as what you may have felt when you first came upon ideas in books such as Bentley's Programming Pearls, way back when. ..."
"... The Active State repository of Python recipes includes many gems, but as the Authors observe in their preference: "most of these recipes are steeped in history and the past". ..."
Nov 07, 2017

renaissance geek on June 23, 2013

The Python Domestic Science Textbook?

A few years ago now I was working in a job that required me to code in Perl. My Perl is passable but no better than that, so when I found a copy of the Perl Cookbook it was something of a lifesaver and constant companion. The Perl Cookbook is deeply pragmatic and addresses real-world problems, with the language almost as an afterthought. (Which, now I think about it, is actually a pretty good description of Perl anyway!) The Python Cookbook is a very different beast and is much more an exercise in learning the intricacies and nuances of the language. I'm not sure cookbook is the right title -- if the Perl Cookbook is a cookbook then the Python Cookbook is more of a domestic science textbook. A bit deeper, a bit dryer, and not so focused on immediate problems. This is in no way meant to imply that it's a bad book; on the contrary, it's a very good book, just not entirely what I was expecting.

The book itself is divided into fifteen large sections covering the likes of data structures and algorithms; functions; metaprogramming and concurrency with each section consisting of a number of problems. The problems are structured as a definition of the problem, a solution and a discussion of the solution and how it can be extended. Due to the nature of the Python language a large part of solving the problems lies in knowing which module(s) to include in your code so each problem is generally only a couple of pages, but that is certainly enough to give the solution and reasonably detailed discussion.

As with all books of this type, there are going to be some complaints about why X is included and not Y, and to be honest, if you tried to cover all the possible problems a practicing Python programmer is likely to run across, the book would end up so large as to be unusable. That being said, there was, for me at least, one glaring omission.

I do a lot of data processing with reasonably large data sets, and with the buzz around big data I'm sure I'm not the only one, and frequently find that I have to break down the data sets or I simply consume all the system resources and the program exits. I would have expected at least some treatment of working with very large data sets which seems to be entirely missing.
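For what it's worth, the standard workaround for the situation described above is to process the data incrementally instead of loading it whole. A minimal generator-based sketch (the file name and chunk size here are arbitrary, not from the book):

```python
def read_in_chunks(path, chunk_size=1 << 20):
    """Yield a large binary file one chunk at a time instead of
    reading it all into memory at once."""
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk

# Usage: total = sum(len(c) for c in read_in_chunks('big_data.bin'))
```

Because the function is a generator, each chunk can be processed and discarded before the next one is read, keeping memory use bounded.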

However this is an issue based on what I use Python for and may very well not matter to you. Even though there may not be exactly the solution you are looking for, there are 260 problems and solutions in the Python cookbook so if you don't learn something new you are probably a certified Python genius and beyond manuals anyway.

There are a couple of quick final points to make about the Python cookbook. Firstly, it uses Python 3, and since many very useful third party modules haven't been ported from Python 2.X over to Python 3 yet, Python 2.X is probably still more widely used.

Secondly although this is a language learning book it's not aimed at the novice programmer, think of it more as language nuances and inflections for the experienced Pythonista rather than a how to learn Python book and you won't go far wrong.

Bluegeek on June 13, 2013
Review: "Python Cookbook" by David Beazley and Brian K. Jones; O'Reilly Media

The "Python Cookbook" is a book that brings the Python scripting language to O'Reilly's popular "Cookbook" format. Each Cookbook provides a series of "Recipes" that teach common techniques, helping users become productive quickly and serving as a reference for those who might've forgotten how to do something.

I reviewed this book in the Mobi e-book format. Reading it on Kindle for PC, the Table of Contents only shows the major sections rather than the individual recipes and this made it harder to find what I was looking for. This is apparently a limitation of Kindle for PC, since my Kindle 3 and Kindle for Android had no such issue.

When I use an O'Reilly "Cookbook", I judge it according to its usefulness: Can I become productive quickly? Is it easy to find what I need? Does it provide helpful tips? Does it teach me where to find the answers to my questions?

This book is not targeted at new Python programmers, but that's where I'm at. The best way for me to learn a new scripting language is to dive right in and try to write something useful, and that was my goal for the "Python Cookbook". I also had "Learning Python" handy to cover any of the basics.

My first Python script was written to read in lists of subnets from two separate files and check that every subnet in list B was also in list A.

I used Recipe 13.3 to parse the command line options. Recipe 5.1 showed me how to read and write files. Recipe 2.11 taught me how to strip carriage returns out of my lines. Recipe 1.10, "Removing Duplicates from a Sequence while Maintaining Order", was very helpful and I was able to reuse the code in my own script. Recipe 2.14, "Combining and Concatenating Strings", helped me with my print statements. Considering this was the first Python script I ever wrote and that it ran, I consider both it and the "Python Cookbook" a success.
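A rough sketch of the kind of script described above, assuming plain-text input with one subnet per line (the function names are mine, not the book's):

```python
def load_subnets(path):
    """Read one subnet per line, stripping whitespace (including any
    stray carriage returns) and skipping blank lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def dedupe(items):
    """Remove duplicates while preserving order (the technique of
    Recipe 1.10, reconstructed from memory)."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def missing_subnets(list_a, list_b):
    """Return the subnets that appear in list B but not in list A."""
    known = set(list_a)
    return [s for s in dedupe(list_b) if s not in known]
```

Printing the result of missing_subnets(load_subnets('a.txt'), load_subnets('b.txt')) would then report any subnet in list B that is absent from list A.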

I had a bit more trouble with my second script. I was trying to write a script to find the subnet address given an interface address in CIDR notation. Recipe 11.4 introduced the ipaddress module, but this module refused to accept a string variable containing the interface in CIDR notation. I ended up installing another module (netaddr) I found via Google and things went better after that. I suspect the problem was that I was using ActivePython [64 bit] and this book was written for Python 3.
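For reference, in Python 3.3 and later the standard-library ipaddress module does accept an interface given as a CIDR string, so the problem described above was most likely version- or build-specific:

```python
import ipaddress

# Derive the subnet from an interface address in CIDR notation.
iface = ipaddress.ip_interface('192.168.1.5/24')
print(iface.network)                  # 192.168.1.0/24
print(iface.network.network_address)  # 192.168.1.0
```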

As a DNS professional I was disappointed that there were no DNS-related recipes in the Network and Web Programming section, but Web-related topics were well-represented in the book.

The "Python Cookbook" doesn't seem to have quite the depth and organization of the "Perl Cookbook" but I'm sure I will rely on it heavily as I learn to use Python. It did allow me to be productive very quickly and it passes the "Cookbook" standard with flying colors. Any book that can get me to the point of writing a working, useful script in less than a day is worth using. I recommend this book to anyone who has a basic understanding of Python and wants to get past "Hello, World" and "Eat Spam" as fast as possible.

Reviewer's Note: I received a free copy of the "Python Cookbook" which was used to write this review.

William P Ross Enthusiast: Architecture on May 6, 2016
Treasure Trove of Python Recipes

Python Cookbook goes in depth on a variety of different Python topics. Each section is similar to a question that might be asked on Stack Overflow. The recipes range in difficulty from easy to advanced metaprogramming.

One particular recipe that I liked was 9.1 on how to time a function. When I am using Python I often need to time the code, and usually I need to look up how to do it. This example created a decorator function for timing. It makes it so that you can just put @timethis on top of a function and see how long it takes to execute. I appreciated how elegant this solution was as opposed to the way I was implementing it.
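A decorator along those lines might look like the following; the name timethis matches the review, but the body is a reconstruction, not necessarily the book's exact recipe:

```python
import time
from functools import wraps

def timethis(func):
    """Report how long each call to the wrapped function takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            print('{}: {:.6f}s'.format(func.__name__, elapsed))
    return wrapper

@timethis
def countdown(n):
    while n > 0:
        n -= 1
```

Calling countdown(1000000) then prints the function name and elapsed time without any per-call boilerplate.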

Most examples are self contained and all the code examples that I tried worked. Additionally, there is a GitHub repository the authors created which provides all the code for the examples if you do not want to type it yourself. The examples themselves were applied to real world problems, so I could see clearly how each recipe was used. When the authors felt they could not provide an entire solution in the text, they point to the correct place to visit online.

The range in topics was impressive. I found the most challenging chapters to be 9, 12, and 15 which were on metaprogramming, concurrency, and C Extensions. At the beginning of the book the recipes cover topics you would expect like data structures and algorithms, strings, and generators. I found myself surprised that I had not seen a lot of the techniques and solutions before. They were well crafted solutions, and I appreciated how much time and detail the authors must have spent to gather the information.

This is a great reference to have by your side when programming in Python.

Groundhog Day on June 30, 2015
Programming Pearls... Reloaded

Having read some humdrum works in the Cookbook series, my expectations were not very high. However, I soon discovered that this book is in a different league.

When he discusses a problem, Beazley gives you his favorite solution. He also presents alternatives, discusses pros and cons, and calls your attention to subtle details in the solution --- leaving you with a feeling of having learned something of value.

But that's only the beginning. It's hard to describe the pleasure of reading some of the solutions in the Iterators and Generators section, for instance. Actually, I take that back. The pleasure is the same kind as what you may have felt when you first came upon ideas in books such as Bentley's Programming Pearls, way back when.

I hadn't felt that excited about a programming book in a long time. This is one you can take along with you on a weekend just for the pleasure of sipping from it. Sad to say, but there are many O'Reilly books I feel like passing on soon after acquiring them. This one will have a special place on the shelves.

Devendra on September 1, 2013
Extensive tome of recipes for the Python 3 programmer

Python Cookbook is an extensive tome of recipes for the Python 3 programmer. It is a perfect companion book for those migrating Python 2 code to Python 3. If you are stuck with Python 2, you may still find the second edition of the book for sale, but the recipes may be dated as they cover Python 2.4. It is not a beginner's book. If you are looking for a beginner's book, I recommend Learning Python by Mark Lutz.

A quick chapter summary follows.

I've added this book to my list of references to look into, before heading to Google. Source code listings use syntax highlighting, a nice touch that makes the code easier, and less boring, to read.

I thank O'Reilly media for providing the book for review.

Dan on July 23, 2013
Wisdom - not just examples. Best viewed on a larger screen

The ActiveState repository of Python recipes includes many gems, but as the authors observe in their preface: "most of these recipes are steeped in history and the past".

I'd add that the signal to noise ratio seems to be decreasing. The most prolific contributors (with the exception of Raymond Hettinger) have posted trivial examples rather than recipes. This book includes some simple examples too, but it's always in the context of a larger message. Excellent content and advice without the chaff.

I just bought this today. Unlike some early technical Kindle books I've purchased, the formatting is excellent. Kudos to the authors and publisher.

... ... ...

A. Zubarev on September 17, 2013
A book to read and come back again and again

I am tempted to state right away that this book is one of those rare "gems"! Absolutely worth every penny spent, and perhaps even more, by way of getting more done in less time, or even just advancing professionally. So a big thank you to Alex Martelli and David Ascher! I can't imagine how much time, energy, insight and effort the authors put into this book, but it is surely one of the longest professional books I have ever read.

Like I said, this book is very comprehensive at 608 pages and touches most, if not all, aspects a typical IT pro would deal with in his or her professional life. It may appear very dry, though in my opinion it should be, and it is the book to come back to again and again, time after time, year after year; if you need a single specific recipe, the way it is structured makes it quick to find.

I happen to actually use this book to cope with several assignments at work involving some medium to high complexity data processing for reporting purposes, thus more than a few recipes were used.

Namely, these were "Strings and Text" (Ch. 2), "Numbers, Dates and Times" (Ch. 3), and "Files and I/O" (Ch. 4); then I hopped to "Functions" (Ch. 7), which was followed by "Parsing, Modifying and Rewriting XML" (Ch. 6.6), and finally landed on "Integrating with a Relational Database" (Ch. 6.8). I wish, though, that chapter 7, "Functions", preceded most of the others, because I think it belongs right after "Iterators and Generators", which I needed to use as I expanded my program.

I must say each did its magic; after all, Python excels at processing text!

... ... ...

[Nov 07, 2017] Dive Into Python


July 28, 2002

Dive Into Python is a free Python book for experienced programmers. It was originally hosted at, but the author has pulled down all copies. It is being mirrored here. You can read the book online, or download it in a variety of formats. It is also available in multiple languages. Read Dive Into Python

This book is still being written. You can read the revision history to see what's new. Updated 20 May 2004. Email me if you'd like to see something changed or updated, or if you have suggestions for this site. Download Dive Into Python

Dive Into Python in your language

Translations are freely permitted as long as they are released under the GNU Free Documentation License. Dive Into Python has already been fully or partially translated into several languages. If you translate it into another language and would like to be listed here, just let me know .

Republish Dive Into Python

Want to mirror this web site? Publish this book on your corporate intranet? Distribute it on CD-ROM? Feel free. This book is published under the GNU Free Documentation License, which gives you enormous freedoms to modify and redistribute it in all its forms. If you're familiar with the GNU General Public License for software, you already understand these freedoms; the FDL is the GPL for books. You can read the license for all the details.

Copyright © 2000, 2001, 2002, 2003, 2004 Mark Pilgrim
Download Python

Learn Python

[Nov 07, 2017] pdb -- The Python Debugger

Sep 03, 2017
26.2. pdb -- The Python Debugger

Source code: Lib/

The module pdb defines an interactive source code debugger for Python programs. It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control.

The debugger is extensible -- it is actually defined as the class Pdb. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules bdb and cmd.

The debugger's prompt is (Pdb) . Typical usage to run a program under control of the debugger is:

>>> import pdb

>>> import mymodule

>>> pdb.run('mymodule.test()')

> <string>(0)?()

(Pdb) continue

> <string>(1)?()

(Pdb) continue

NameError: 'spam'

> <string>(1)?()

(Pdb)

pdb.py can also be invoked as a script to debug other scripts. For example:

python -m pdb

When invoked as a script, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program. Automatic restarting preserves pdb's state (such as breakpoints) and in most cases is more useful than quitting the debugger upon program's exit.

New in version 2.4: Restarting post-mortem behavior added.

The typical usage to break into the debugger from a running program is to insert

import pdb; pdb.set_trace()

at the location you want to break into the debugger. You can then step through the code following this statement, and continue running without the debugger using the c command.

The typical usage to inspect a crashed program is:

>>> import pdb

>>> import mymodule

>>> mymodule.test()

Traceback (most recent call last):

  File "<stdin>", line 1, in <module>

  File "./mymodule.py", line 4, in test

    test2()

  File "./mymodule.py", line 3, in test2

    print spam

NameError: spam

>>> pdb.pm()

> ./mymodule.py(3)test2()

-> print spam

(Pdb)
The module defines the following functions; each enters the debugger in a slightly different way:

pdb.run(statement[, globals[, locals]])
Execute the statement (given as a string) under debugger control. The debugger prompt appears before any code is executed; you can set breakpoints and type continue, or you can step through the statement using step or next (all these commands are explained below). The optional globals and locals arguments specify the environment in which the code is executed; by default the dictionary of the module __main__ is used. (See the explanation of the exec statement or the eval() built-in function.)
pdb.runeval(expression[, globals[, locals]])
Evaluate the expression (given as a string) under debugger control. When runeval() returns, it returns the value of the expression. Otherwise this function is similar to run().
pdb.runcall(function[, argument, ...])
Call the function (a function or method object, not a string) with the given arguments. When runcall() returns, it returns whatever the function call returned. The debugger prompt appears as soon as the function is entered.
pdb.set_trace()
Enter the debugger at the calling stack frame. This is useful to hard-code a breakpoint at a given point in a program, even if the code is not otherwise being debugged (e.g. when an assertion fails).
pdb.post_mortem([traceback])
Enter post-mortem debugging of the given traceback object. If no traceback is given, it uses the one of the exception that is currently being handled (an exception must be being handled if the default is to be used).
pdb.pm()
Enter post-mortem debugging of the traceback found in sys.last_traceback.

The run* functions and set_trace() are aliases for instantiating the Pdb class and calling the method of the same name. If you want to access further features, you have to do this yourself:

class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None)
Pdb is the debugger class.

The completekey , stdin and stdout arguments are passed to the underlying cmd.Cmd class; see the description there.

The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. [1]

Example call to enable tracing with skip :

import pdb; pdb.Pdb(skip=['django.*']).set_trace()

New in version 2.7: The skip argument.
run(statement[, globals[, locals]])
runeval(expression[, globals[, locals]])
runcall(function[, argument, ...])
set_trace()
See the documentation for the functions explained above.
26.3. Debugger Commands

The debugger recognizes the following commands. Most commands can be abbreviated to one or two letters; e.g. h(elp) means that either h or help can be used to enter the help command (but not he or hel, nor H or Help or HELP). Arguments to commands must be separated by whitespace (spaces or tabs). Optional arguments are enclosed in square brackets ( [] ) in the command syntax; the square brackets must not be typed. Alternatives in the command syntax are separated by a vertical bar ( | ).

Entering a blank line repeats the last command entered. Exception: if the last command was a list command, the next 11 lines are listed.

Commands that the debugger doesn't recognize are assumed to be Python statements and are executed in the context of the program being debugged. Python statements can also be prefixed with an exclamation point ( ! ). This is a powerful way to inspect the program being debugged; it is even possible to change a variable or call a function. When an exception occurs in such a statement, the exception name is printed but the debugger's state is not changed.

Multiple commands may be entered on a single line, separated by ;; . (A single ; is not used as it is the separator for multiple commands in a line that is passed to the Python parser.) No intelligence is applied to separating the commands; the input is split at the first ;; pair, even if it is in the middle of a quoted string.

The debugger supports aliases. Aliases can have parameters which allows one a certain level of adaptability to the context under examination.

If a file .pdbrc exists in the user's home directory or in the current directory, it is read in and executed as if it had been typed at the debugger prompt. This is particularly useful for aliases. If both files exist, the one in the home directory is read first and aliases defined there can be overridden by the local file.

h(elp) [ command ]
Without argument, print the list of available commands. With a command as argument, print help about that command. help pdb displays the full documentation file; if the environment variable PAGER is defined, the file is piped through that command instead. Since the command argument must be an identifier, help exec must be entered to get help on the ! command.
w(here)
Print a stack trace, with the most recent frame at the bottom. An arrow indicates the current frame, which determines the context of most commands.
d(own)
Move the current frame one level down in the stack trace (to a newer frame).
u(p)
Move the current frame one level up in the stack trace (to an older frame).
b(reak) [[ filename :] lineno | function [, condition ]]

With a lineno argument, set a break there in the current file. With a function argument, set a break at the first executable statement within that function. The line number may be prefixed with a filename and a colon, to specify a breakpoint in another file (probably one that hasn't been loaded yet). The file is searched on sys.path . Note that each breakpoint is assigned a number to which all the other breakpoint commands refer.

If a second argument is present, it is an expression which must evaluate to true before the breakpoint is honored.

Without argument, list all breaks, including for each breakpoint, the number of times that breakpoint has been hit, the current ignore count, and the associated condition if any.

tbreak [[ filename :] lineno | function [, condition ]]
Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as break.
cl(ear) [ filename:lineno | bpnumber [ bpnumber ]]
With a filename:lineno argument, clear all the breakpoints at this line. With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation).
disable [ bpnumber [ bpnumber ]]
Disables the breakpoints given as a space separated list of breakpoint numbers. Disabling a breakpoint means it cannot cause the program to stop execution, but unlike clearing a breakpoint, it remains in the list of breakpoints and can be (re-)enabled.
enable [ bpnumber [ bpnumber ]]
Enables the breakpoints specified.
ignore bpnumber [ count ]
Sets the ignore count for the given breakpoint number. If count is omitted, the ignore count is set to 0. A breakpoint becomes active when the ignore count is zero. When non-zero, the count is decremented each time the breakpoint is reached and the breakpoint is not disabled and any associated condition evaluates to true.
condition bpnumber [ condition ]
Condition is an expression which must evaluate to true before the breakpoint is honored. If condition is absent, any existing condition is removed; i.e., the breakpoint is made unconditional.
commands [ bpnumber ]

Specify a list of commands for breakpoint number bpnumber . The commands themselves appear on the following lines. Type a line containing just 'end' to terminate the commands. An example:

(Pdb) commands 1

(com) print some_variable

(com) end


To remove all commands from a breakpoint, type commands and follow it immediately with end; that is, give no commands.

With no bpnumber argument, commands refers to the last breakpoint set.

You can use breakpoint commands to start your program up again. Simply use the continue command, or step, or any other command that resumes execution.

Specifying any command resuming execution (currently continue, step, next, return, jump, quit and their abbreviations) terminates the command list (as if that command was immediately followed by end). This is because any time you resume execution (even with a simple next or step), you may encounter another breakpoint -- which could have its own command list, leading to ambiguities about which list to execute.

If you use the 'silent' command in the command list, the usual message about stopping at a breakpoint is not printed. This may be desirable for breakpoints that are to print a specific message and then continue. If none of the other commands print anything, you see no sign that the breakpoint was reached.

New in version 2.5.
s(tep)
Execute the current line, stop at the first possible occasion (either in a function that is called or on the next line in the current function).
n(ext)
Continue execution until the next line in the current function is reached or it returns. (The difference between next and step is that step stops inside a called function, while next executes called functions at (nearly) full speed, only stopping at the next line in the current function.)
unt(il)

Continue execution until the line with the line number greater than the current one is reached or when returning from the current frame.

New in version 2.6.
r(eturn)
Continue execution until the current function returns.
c(ont(inue))
Continue execution, only stop when a breakpoint is encountered.
j(ump) lineno

Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don't want to run.

It should be noted that not all jumps are allowed -- for instance it is not possible to jump into the middle of a for loop or out of a finally clause.

l(ist) [ first [, last ]]
List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With one argument, list 11 lines around at that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count.
a(rgs)
Print the argument list of the current function.
p expression

Evaluate the expression in the current context and print its value.


print can also be used, but is not a debugger command -- this executes the Python print statement.

pp expression
Like the p command, except the value of the expression is pretty-printed using the pprint module.
alias [ name [command]]

Creates an alias called name that executes command . The command must not be enclosed in quotes. Replaceable parameters can be indicated by %1 , %2 , and so on, while %* is replaced by all the parameters. If no command is given, the current alias for name is shown. If no arguments are given, all aliases are listed.

Aliases may be nested and can contain anything that can be legally typed at the pdb prompt. Note that internal pdb commands can be overridden by aliases. Such a command is then hidden until the alias is removed. Aliasing is recursively applied to the first word of the command line; all other words in the line are left alone.

As an example, here are two useful aliases (especially when placed in the .pdbrc file):

#Print instance variables (usage "pi classInst")

alias pi for k in %1.__dict__.keys(): print "%1.",k,"=",%1.__dict__[k]

#Print instance variables in self

alias ps pi self

unalias name
Deletes the specified alias.
[!] statement

Execute the (one-line) statement in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command. To set a global variable, you can prefix the assignment command with a global command on the same line, e.g.:

(Pdb) global list_options; list_options = ['-l']


run [ args ]

Restart the debugged Python program. If an argument is supplied, it is split with "shlex" and the result is used as the new sys.argv. History, breakpoints, actions and debugger options are preserved. "restart" is an alias for "run".

New in version 2.6.
q(uit)
Quit from the debugger. The program being executed is aborted.


[1] Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals.

[Nov 06, 2017] Dive Deep Into Python Vs Perl Debate - What Should I Learn Python or Perl


2. Perl's Built-in Vs Python's 3rd Party Regex and OS Operations Support

The Perl language borrows its syntax from sed, awk, and other UNIX commands, due to which it has powerful, built-in regex support without importing any third-party modules.

Also, Perl can handle OS operations using built-in functions. Python, on the other hand, relies on modules for both kinds of operation, i.e. re for regexes and os / sys for OS operations, which need to be imported before such operations can be performed.

Perl's regex operations have sed-like syntax, which makes not only searching easy: replacement, substitution and other string operations can also be done more easily and swiftly than in Python, where a person needs to know and remember the functions that cater to each need.

Example: Consider a program to search for digits in a string, in Python and in Perl:

import re
str = 'hello0909there'
result = re.findall(r'\d+', str)
print result

$string = 'hello0909there';
$string =~ m/(\d+)/;
print "$& \n";

You can see that the Perl syntax, inspired by the sed command, is more concise than the Python version, which has to import the re module first.
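For completeness, the Python counterpart of a Perl-style substitution is re.sub; a small sketch (pattern and replacement chosen for illustration):

```python
import re

s = 'hello0909there'
# Perl equivalent: $s =~ s/(\d+)/<$1>/g;
result = re.sub(r'(\d+)', r'<\1>', s)
print(result)  # hello<0909>there
```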

  1. Dominix says: September 26, 2016 at 1:52 pm

    Python centric bullshit

  2. J Cleaver says: September 13, 2016 at 4:40 am

    Some of these perl examples don't really reflect what's more-or-less standard in the Perl community at any time since Perl 5 came out (15 years ago).

    Keeping in mind the vision of TMTOWTDI, your second Perl example:

open(FILE,"<inp.txt") or die "Can't open file";
while(<FILE>) {
print "$_"; }

    really would be typically written as just:

open (FILE, "inp.txt") or die "Can't open file: $!";
print while (<FILE>);

    As many others have pointed out, Perl has a huge amount of syntax flexibility despite its overtones of C heritage, and that allows people to write working code in a relatively ugly, inefficient, and/or hard-to-read manner, with syntax reflecting their experience with other languages.

    It's not really a drawback that Perl is so expressive, but it does mean that the programmer should be as disciplined as the task warrants when writing it when it comes to understandable Perl idioms.

  1. David G. Miller says: September 12, 2016 at 3:04 am

    1) I've usually found that the clarity and elegance of a program have a lot more to do with the programmer than the programming language. People who develop clean solutions will do so regardless of the language of implementation. Likewise, those who can't program will find a way to force an ugly solution out of any language.

    2) Most systems administrators aren't programmers and have rarely had any formal training in software development.

Put these two observations together and you will still get ugly, "write only" programs. Before Perl it was shell script, yesterday it was Perl, today it's Python. Tomorrow someone will be asking for a replacement for Python because it's so hard to read and can't be maintained. Get used to it (but don't blame the programming language).

    I started my perl programming with perl 2.0 in 1993. It's still my "go to" programming language since it doesn't get in my way and I can get to a solution much faster than with C or shell script.

    • Joe Chakra says: September 12, 2016 at 3:18 am

Actually performance does matter even for scripting. Imagine filtering a 100 MB debug log. You could use AWK or gawk, sed or grep, but Perl gives a lot more flexibility. Taking five seconds is very different from taking ten seconds, because the more time between request and response, the more likely you are to get distracted.

  1. D. B. Dweeb says: September 8, 2016 at 1:30 am

The Pythonic file handling below surpasses the Perl example; the exception text and file close are automatic. Advantage Python!

with open("data.csv") as f:
    for line in f:
        print line,

[Nov 06, 2017] Indentation Error in Python - Stack Overflow

I can't compile because of this part in my code:
    if command == 'HOWMANY':
        opcodegroupr = "A0"
        opcoder = "85"
    elif command == 'IDENTIFY':
        opcodegroupr = "A0"
        opcoder = "81"

I have this error:

Sorry: IndentationError: ('unindent does not match any outer indentation level', ('', 1016, 30, "\t\telif command == 'IDENTIFY':\n"))

But I don't see any indentation error. What can be the problem?

Martijn Pieters ,Feb 20, 2013 at 11:54

You are mixing tabs and spaces.

Find the exact location with:

python -tt

and replace all tabs with spaces. You really want to configure your text editor to only insert spaces for tabs as well.
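Note that -tt is a Python 2 flag (Python 3 rejects ambiguous mixing with a TabError by default); the standard library also ships a tabnanny module that reports the offending location. The file name below is hypothetical:

```python
import tabnanny

# Check a file (or a whole directory tree) for ambiguous mixtures of
# tabs and spaces in indentation; problems are reported with the file
# name and line number.
tabnanny.check('myscript.py')  # hypothetical file name
```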

poke ,Feb 20, 2013 at 11:55

Or the other way around (depends on your personal preference) – poke Feb 20 '13 at 11:55

poke ,Feb 20, 2013 at 12:02

@MartijnPieters If you use tabs, you have tabs, so you do not need to care about its visual presentation. You should never mix tabs and spaces, but apart from that, just choose one and stick to it . You are right, it's a never-ending debate; it totally depends on your personal preference -- hence my comment. – poke Feb 20 '13 at 12:02

neil ,Feb 20, 2013 at 12:02

I have never understood why you would want to use spaces instead of tabs - 1 tab is 1 level of indent and then the size of that is a display preference - but it seems the world disagrees with me. – neil Feb 20 '13 at 12:02

Martijn Pieters ♦ ,Feb 20, 2013 at 12:13

@poke: That's very nice, but in any decent-sized project you will not be the only developer. As soon as you have two people together, there is a large chance you'll disagree about tab size. And pretending that noone will ever make the mistake of mixing tabs and spaces is sticking your head in the sand, frankly. There is a reason that every major style guide for OSS (python or otherwise) states you need to use spaces only . :-) – Martijn Pieters ♦ Feb 20 '13 at 12:13

geoffspear ,Feb 20, 2013 at 12:22

There should be one, and preferably only one, obvious way to do it. Following the style of the python codebase itself is obvious. – geoffspear Feb 20 '13 at 12:22

[Nov 06, 2017] Python Myths about Indentation


Python: Myths about Indentation

Note: Lines beginning with " >>> " and " ... " indicate input to Python (these are the default prompts of the interactive interpreter). Everything else is output from Python.

There are quite a few prejudices and myths about Python's indentation rules among people who don't really know Python. I'll try to address a few of these concerns on this page.

"Whitespace is significant in Python source code."

No, not in general. Only the indentation level of your statements is significant (i.e. the whitespace at the very left of your statements). Everywhere else, whitespace is not significant and can be used as you like, just like in any other language. You can also insert empty lines that contain nothing (or only arbitrary whitespace) anywhere.

Also, the exact amount of indentation doesn't matter at all, but only the relative indentation of nested blocks (relative to each other).

Furthermore, the indentation level is ignored when you use explicit or implicit continuation lines. For example, you can split a list across multiple lines, and the indentation is completely insignificant. So, if you want, you can do things like this:

>>> foo = [
...     'some string',
...         'another string',
...   'short string'
... ]
>>> print foo
['some string', 'another string', 'short string']

>>> bar = 'this is ' \
... 'one long string ' \
... 'that is split ' \
... 'across multiple lines'
>>> print bar
this is one long string that is split across multiple lines

"Python forces me to use a certain indentation style."

Yes and no. First of all, you can write the inner block all on one line if you like, therefore not having to care about indentation at all. The following three versions of an "if" statement are all valid and do exactly the same thing (output omitted for brevity):

>>> if 1 + 1 == 2:
...     print "foo"
...     print "bar"
...     x = 42

>>> if 1 + 1 == 2:
...     print "foo"; print "bar"; x = 42

>>> if 1 + 1 == 2: print "foo"; print "bar"; x = 42

Of course, most of the time you will want to write the blocks in separate lines (like the first version above), but sometimes you have a bunch of similar "if" statements which can be conveniently written on one line each.

If you decide to write the block on separate lines, then yes, Python forces you to obey its indentation rules, which simply means: the enclosed block (that's two "print" statements and one assignment in the above example) has to be indented more than the "if" statement itself. That's it. And frankly, would you really want to indent it in any other way? I don't think so.

So the conclusion is: Python forces you to use indentation that you would have used anyway, unless you wanted to obfuscate the structure of the program. In other words: Python does not allow you to obfuscate the structure of a program by using bogus indentation. In my opinion, that's a very good thing.

Have you ever seen code like this in C or C++?

/* Warning: bogus C code! */

if (some condition)
        if (another condition)
                do_something();
else
        do_something_else();

Either the indentation is wrong, or the program is buggy, because an "else" always applies to the nearest "if", unless you use braces. This is an essential problem in C and C++. Of course, you could resort to always using braces, no matter what, but that's tiresome and bloats the source code, and it doesn't prevent you from accidentally obfuscating the code by still having the wrong indentation. (And that's just a very simple example. In practice, C code can be much more complex.)

In Python, the above problems can never occur, because indentation levels and logical block structure are always consistent. The program always does what you expect when you look at the indentation.

Quoting the famous book writer Bruce Eckel:

Because blocks are denoted by indentation in Python, indentation is uniform in Python programs. And indentation is meaningful to us as readers. So because we have consistent code formatting, I can read somebody else's code and I'm not constantly tripping over, "Oh, I see. They're putting their curly braces here or there." I don't have to think about that.

"You cannot safely mix tabs and spaces in Python."

That's right, and you don't want that. To be exact, you cannot safely mix tabs and spaces in C either: While it doesn't make a difference to the compiler, it can make a big difference to humans looking at the code. If you move a piece of C source to an editor with different tabstops, it will all look wrong (and possibly behave differently than it looks at first sight). You can easily introduce well-hidden bugs in code that has been mangled that way. That's why mixing tabs and spaces in C isn't really "safe" either. Also see the "bogus C code" example above.

Therefore, it is generally a good idea not to mix tabs and spaces for indentation. If you use tabs only or spaces only, you're fine.

Furthermore, it can be a good idea to avoid tabs altogether, because the semantics of tabs are not very well-defined in the computer world, and they can be displayed completely differently on different types of systems and editors. Also, tabs often get destroyed or wrongly converted during copy&paste operations, or when a piece of source code is inserted into a web page or other kind of markup code.

Most good editors support transparent translation of tabs, automatic indent and dedent. That is, when you press the tab key, the editor will insert enough spaces (not actual tab characters!) to get you to the next position which is a multiple of eight (or four, or whatever you prefer), and some other key (usually Backspace) will get you back to the previous indentation level.

In other words, it's behaving like you would expect a tab key to do, but still maintaining portability by using spaces in the file only. This is convenient and safe.

Having said that -- If you know what you're doing, you can of course use tabs and spaces to your liking, and then use tools like "expand" (on UNIX machines, for example) before giving the source to others. If you use tab characters, Python assumes that tab stops are eight positions apart.
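For illustration, the standard str.expandtabs() method performs the same expansion: with no argument it assumes eight-column tab stops, just as Python's tokenizer does, and you can pass a different width:

```python
line = "\tx = 1"

# Default: a tab advances to the next multiple of 8 columns.
print(repr(line.expandtabs()))   # -> '        x = 1'

# Explicit tab size of 4 columns instead.
print(repr(line.expandtabs(4)))  # -> '    x = 1'
```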

"I just don't like it."

That's perfectly OK; you're free to dislike it (and you're probably not alone). Granted, the fact that indentation is used to indicate the block structure might be regarded as uncommon and requiring some getting used to, but it does have a lot of advantages, and you get used to it very quickly when you seriously start programming in Python.

Having said that, you can use keywords to indicate the end of a block (instead of indentation), such as " endif ". These are not really Python keywords, but there is a tool that comes with Python which converts code using "end" keywords to correct indentation and removes those keywords. It can be used as a pre-processor to the Python compiler. However, no real Python programmer uses it, of course.
[Update] It seems this tool has been removed from recent versions of Python. Probably because nobody really used it.

"How does the compiler parse the indentation?"

The parsing is well-defined and quite simple. Basically, changes to the indentation level are inserted as tokens into the token stream.

The lexical analyzer (tokenizer) uses a stack to store indentation levels. At the beginning, the stack contains just the value 0, which is the leftmost position. Whenever a nested block begins, the new indentation level is pushed on the stack, and an "INDENT" token is inserted into the token stream which is passed to the parser. There can never be more than one "INDENT" token in a row.

When a line is encountered with a smaller indentation level, values are popped from the stack until a value is on top which is equal to the new indentation level (if none is found, a syntax error occurs). For each value popped, a "DEDENT" token is generated. Obviously, there can be multiple "DEDENT" tokens in a row.

At the end of the source code, "DEDENT" tokens are generated for each indentation level left on the stack, until just the 0 is left.

Look at the following piece of sample code:

>>> if foo:
...     if bar:
...         x = 42
... else:
...   print foo

In the following table, you can see the tokens produced on the left, and the indentation stack on the right.

<if> <foo> <:>                 [0]
<INDENT> <if> <bar> <:>        [0, 4]
<INDENT> <x> <=> <42>          [0, 4, 8]
<DEDENT> <DEDENT> <else> <:>   [0]
<INDENT> <print> <foo>         [0, 2]
<DEDENT>                       [0]
Note that after the lexical analysis (before parsing starts), there is no whitespace left in the list of tokens (except possibly within string literals, of course). In other words, the indentation is handled by the lexer, not by the parser.

The parser then simply handles the "INDENT" and "DEDENT" tokens as block delimiters -- exactly like curly braces are handled by a C compiler.

The above example is intentionally simple. There are more things to it, such as continuation lines. They are well-defined, too, and you can read about them in the Python Language Reference if you're interested, which includes a complete formal grammar of the language.
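You can watch the lexer do this with the standard tokenize module (Python 3 syntax here, so print is a function; DEDENT tokens carry an empty string, so only the token names are printed):

```python
import io
import tokenize

# The sample code from above, with indentation levels 4, 8, and 2.
src = (
    "if foo:\n"
    "    if bar:\n"
    "        x = 42\n"
    "else:\n"
    "  print(foo)\n"
)

# Print only the INDENT/DEDENT tokens the lexer inserts into the stream.
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type in (tokenize.INDENT, tokenize.DEDENT):
        print(tokenize.tok_name[tok.type])
```

This prints INDENT, INDENT, DEDENT, DEDENT, INDENT, DEDENT -- exactly the sequence shown in the table above.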

[Nov 06, 2017] Modular Programming with Python by Erik Westra

  • Paperback: 246 pages
  • Publisher: Packt Publishing - ebooks Account (May 26, 2016)
  • Language: English
  • ISBN-10: 1785884484
  • ISBN-13: 978-1785884481
  • Product Dimensions: 7.5 x 0.6 x 9.2 inches


Modular Programming with Python

1. Introducing Modular Programming

2. Writing Your First Modular Program

3. Using Modules and Packages

4. Using Modules for Real-World Programming

5. Working with Module Patterns

6. Creating Reusable Modules

7. Advanced Module Techniques

8. Testing and Deploying Modules

9. Modular Programming as a Foundation for Good Programming Technique

By kievite on November 5, 2017

Great book on a very important topic. Highly recommended

Great book on a very important topic.

Python is a complex language with an even more complex environment, and its module system is a critical part of it. For example, the Python standard library is structured as a collection of modules. The author gives an excellent overview of the Python module system, makes important recommendations about creating your own modules (and provides several examples, including a generator example in Chapter 4), and warns about gotchas. The interplay between modules and namespaces covered in Chapter 7 is alone worth several times the price of the book. For example, few understand that the import statement adds the imported module or package to the current namespace, which may or may not be the global namespace. The author also covers the problem of "name masking" in this chapter.

The ability to write a large script using your own modules is a very important skill that few books teach. Intro books on Python usually try to throw everything the language contains into the kitchen sink, creating problems for those who study the language, even when they already know some other programming language such as C++ or Perl. The discipline not to cover some features of the language is usually completely absent in the authors of such books.

Most authors of Python books talk a lot about how great Python is, but never explain why. This book explains probably the most important feature of this scripting language, the one that makes it great (actually inherited from Modula-3). Also, most intro books suffer from excessive fascination with OO (thank God this fad is past its peak). This book does not.

Publishing books devoted to important topics has great value: you have nowhere else to go to get the information they provide. But it is a very risky business. Of course, if you are diligent, you can collect this information yourself by reading a dozen books, extracting and organizing the relevant parts. But that work is better reserved for personalities resembling the famous Sherlock Holmes, and it presupposes that you have plenty of time. Meeting both of those conditions is usually pretty unrealistic.

So it takes a certain amount of courage to write a book devoted to a single specific feature of Python, and the author should be commended for that.

That's why I highly recommend this book for anybody who is trying to learn the language. It really allows you to understand the single most critical feature of the Python language.


NOTE: In Chapter 8 the author also covers a side topic that is nonetheless important: how to prepare your modules for publication and upload them to GitHub. GitHub has become very popular among Python programmers, and the earlier you learn about this possibility the better.

Chapter 8 also covers the important topic of installing Python packages. Unfortunately, the coverage is way too brief and does not cover the gotchas that you might experience installing such packages as NumPy.

I would like to stress it again: currently the book has no competition in its level of coverage of this, probably the most important, feature of the Python language.

[Nov 04, 2017] Which is the best book for learning python for absolute beginners on their own?


Robert Love Software Engineer at Google

Mark Lutz's Learning Python is a favorite of many. It is a good book for novice programmers. The new fifth edition is updated to both Python 2.7 and 3.3.

Aditi Sharma , i love coding Answered Jul 10 2016

Originally Answered: Which is the best book for learning Python from beginners to advanced level?

Instead of a book, I would advise you to start learning Python from CodesDope, which is a wonderful site for learning Python from the absolute beginning. Its content explains everything step-by-step, with typography that makes learning fun and much easier. It also provides a number of practice questions for each topic, so you can strengthen your grasp of a topic by solving its questions right after reading it, without having to search elsewhere for exercises. Moreover, it has a discussion forum which is very responsive in resolving your doubts.

Alex Forsyth, Computer science major at MIT, Answered Dec 28 2015

Originally Answered: What is the best way to learn to code? Specifically Python.

There are many good websites for learning the basics, but for going a bit deeper, I'd suggest MIT OCW 6.00SC. This is how I learned Python back in 2012 and what ultimately led me to MIT and to major in CS. 6.00 teaches Python syntax but also teaches some basic computer science concepts. There are lectures from John Guttag, which are generally well done and easy to follow. It also provides access to some of the assignments from that semester, which I found extremely useful in actually learning Python.

After completing that, you'd probably have a better idea of what direction you wanted to go. Some examples could be completing further OCW courses or completing projects in Python.

[Sep 18, 2017] Operators and String Formatting in Python Operators



Formatting Strings: Modulus

Although not actually modulus, the Python % operator works similarly in string formatting to interpolate variables into a formatting string. If you've programmed in C, you'll notice that % is much like C's printf(), sprintf(), and fprintf() functions.

There are two forms of %, one of which works with strings and tuples, the other with dictionaries.

StringOperand % TupleOperand 

StringOperand % DictionaryOperand

Both return a new formatted string quickly and easily.
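A quick sketch of both forms (Python 3 print syntax; values here are just for illustration):

```python
# Tuple form: values are matched to directives by position.
print("%s is %d years old" % ("Ross", 28))
# -> Ross is 28 years old

# Dictionary form: values are matched by key name.
print("%(name)s is %(age)d years old" % {"name": "Ross", "age": 28})
# -> Ross is 28 years old
```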

% Tuple String Formatting

In the StringOperand % TupleOperand form, StringOperand represents special directives within the string that help format the tuple. One such directive is %s, which sets up the format string

>>> format = "%s is my friend and %s is %s years old"

and creates two tuples, Ross_Info and Rachael_Info.

>>> Ross_Info = ("Ross", "he", 28)

>>> Rachael_Info = ("Rachael", "she", 28)

The format string operator (%) can be used within a print statement, where you can see that every occurrence of %s is respectively replaced by the items in the tuple.

>>> print (format % Ross_Info) 

Ross is my friend and he is 28 years old 

>>> print (format % Rachael_Info) 

Rachael is my friend and she is 28 years old

Also note that %s automatically converts the last item in the tuple to a reasonable string representation. Here's an example of how it does this using a list:

>>> bowling_scores = [190, 135, 110, 95, 195]

>>> name = "Ross"

>>> strScores = "%s's bowling scores were %s" \

...                                                 % (name, bowling_scores) 

>>> print strScores 

Ross's bowling scores were [190, 135, 110, 95, 195]

First, we create a list variable called bowling_scores and then a string variable called name. We then use a string literal for a format string (StringOperand) and use a tuple containing name and bowling_scores.

Format Directives

Table 3–6 covers all of the format directives and provides a short example of usage for each. Note that the tuple argument containing a single item can be denoted with the % operator as item, or (item).

Table 3–6 Format Directives
Directive Description Interactive Session
%s Represents a value as a string >>> list = ["hi", 1, 1.0, 1L]
>>> "%s" % list
"['hi', 1, 1.0, 1L]"
>>> "list equals %s" % list
"list equals ['hi', 1, 1.0, 1L]"
%i Integer >>> "i = %i" % (5)
'i = 5'
>>> "i = %3i" % (5)
'i =   5'
%d Decimal integer >>> "d = %d" % 5
'd = 5'
>>> "%3d" % (3)
'  3'
%x Hexadecimal integer >>> "%x" % (0xff)
>>> "%x" % (255)
%o Octal integer >>> "%o" % (255)
>>> "%o" % (0377)
%u Unsigned integer >>> print "%u" % -2000
>>> print "%u" % 2000
%e Float exponent >>> print "%e" % (30000000L)
>>> "%5.2e" % (300000000L)
%f Float >>> "check = %1.2f" % (3000)
'check = 3000.00'
>>> "payment = $%1.2f" % 3000
'payment = $3000.00'
%g Float exponent >>> "%3.3g" % 100
>>> "%3.3g" % 1000000000000L
>>> "%g" % 100
%c ASCII character >>> "%c" % (97)
>>> "%c" % 97
>>> "%c" % (97)

Table 3–7 shows how flags can be used with the format directives to add leading zeroes or spaces to a formatted number. They should be inserted immediately after the %.

Table 3–7 Format Directive Flags
Flag Description Interactive Session
#     Forces octal to have a 0 prefix; forces hex to have a 0x prefix
      >>> "%#x" % 0xff
      '0xff'
      >>> "%#o" % 0377
+     Forces a positive number to have a sign
      >>> "%+d" % 100
-     Left justification (default is right)
      >>> "%-5d, %-5d" % (10,10)
      '10   , 10   '
" "   Precedes a positive number with a blank space
      >>> "% d,% d" % (-10, 10)
0     0 padding instead of spaces
      >>> "%05d" % (100,)

Advanced Topic: Using the %d, %i, %f, and %e Directives for Formatting Numbers

The % directives format numeric types: %i and %d work with integers; %f and %e work with floats, without and with scientific notation, respectively.

>>> "%i, %f, %e" % (1000, 1000, 1000) 

'1000, 1000.000000, 1.000000e+03'

Notice how awkward all of those zeroes look. You can limit the precision and neaten up the output like this:

>>> "%i, %2.2f, %2.2e" % (1000, 1000, 1000) 

'1000, 1000.00, 1.00e+03'

The %2.2f directive tells Python to format the number as at least two characters and to cut the precision to two characters after the decimal point. This is useful for printing floating-point numbers that represent currency.

>>> "Your monthly payments are $%1.2f" % (payment) 

'Your monthly payments are $444.43'

All % directives have the form %min.precision(type), where min is the minimum width of the field, precision is the number of digits to the right of the decimal point, and type is the type of directive (e, f, i, or d). If the precision field is missing, the directive can take the form %min(type); so, for example, %5d ensures that a decimal number is at least 5 characters wide and %20f ensures that a floating-point number is at least 20.

Let's look at the use of these directives in an interactive session.

>>> "%5d" % (100,) 

' 100' 

>>> "%20f" % (100,) 

' 100.000000'

Here's how to truncate the float's mantissa to 2 with %20.2f.

>>> "%20.2f" % (100,) 

' 100.00'

The padding that precedes the directive is useful for printing rows and columns of data for reporting because it makes the printed output easy to read, as the following example shows:

     # Create two rows

row1 = (100, 10000, 20000, 50000, 6000, 6, 5) 

row2 = (1.0, 2L, 5, 2000, 56, 6.0, 7) 


      # Print out the rows without formatting 

print "here is an example of the columns not lining up" 

print `row1` + "\n" + `row2` 



      # Create a format string that forces the number

      # to be at least 3 characters long to the left

      # and 2 characters to the right of the decimal point

format = "(%3.2e, %3.2e, %3.2e, %3.2e, " + \
         "%3.2e, %3.2e, %3.2e)" 


      # Create a string for both rows

      # using the format operator

strRow1 = format % row1 

strRow2 = format % row2 

print "here is an example of the columns" + \ 

        " lining up using \%e" 

print strRow1 + "\n" + strRow2 


      # Do it again this time with the %i and %d directive 

format1 = "(%6i, %6i, %6i, %6i, %6i, %6i, %6i)" 

format2 = "(%6d, %6d, %6d, %6d, %6d, %6d, %6d)" 

strRow1 = format1 % row1 

strRow2 = format2 % row2 

print "here is an example of the columns" + \ 

        " lining up using \%i and \%d" 

print strRow1 + "\n" + strRow2 


here is an example of the columns not lining up 

(100, 10000, 20000, 50000, 6000, 6, 5) 

(1.0, 2L, 5, 2000, 56, 6.0, 7) 

here is an example of the columns lining up using \%e 

(1.00e+002, 1.00e+004, 2.00e+004, 5.00e+004, 6.00e+003, 6.00e+000, 5.00e+000) 

(1.00e+000, 2.00e+000, 5.00e+000, 2.00e+003, 5.60e+001, 6.00e+000, 7.00e+000) 

here is an example of the columns lining up using \%i and \%d 

(   100,  10000,  20000,  50000,   6000,      6,      5) 

(     1,      2,      5,   2000,     56,      6,      7)

You can see that the %3.2e directive permits a number to take up only three spaces plus the exponential whereas %6d and %6i permit at least six spaces. Note that %i and %d do the same thing that %e does. Most C programmers are familiar with %d but may not be familiar with %i, which is a recent addition to that language.

String % Dictionary

Another useful Python feature for formatting strings is StringOperand % Dictio-naryOperand. This form allows you to customize and print named fields in the string. %(Income)d formats the value referenced by the Income key. Say, for example, that you have a dictionary like the one here:

Monica = { 

                 "Occupation": "Chef",

                 "Name" : "Monica", 

                 "Dating" : "Chandler",

                 "Income" : 40000 
}

With %(Income)d, this is expressed as

>>> "%(Income)d" % Monica 
'40000'

Now let's say you have three best friends, whom you define as dictionaries named Monica, Chandler, and Ross.

Monica = { 

                 "Occupation": "Chef",

                 "Name" : "Monica", 

                 "Dating" : "Chandler", 

                 "Income" : 40000 
}

Ross = {

                "Occupation": "Scientist Museum Dude",

                "Name" : "Ross", 

                "Dating" : "Rachael", 

                "Income" : 70000 
}

Chandler =              { 

                "Occupation": "Buyer",

                "Name" : "Chandler", 

                "Dating" : "Monica", 

                "Income" : 65000 
}

To write them a form letter, you can create a format string called message that uses all of the above dictionaries' keywords.

message = "%(Name)s, %(Occupation)s, %(Dating)s," \ 

                  " %(Income)2.2f"

Notice that %(Income)2.2f formats this with a floating-point precision of 2, which is good for currency. The output is

Chandler, Buyer, Monica, 65000.00 

Ross, Scientist Museum Dude, Rachael, 70000.00 

Monica, Chef, Chandler, 40000.00

You can then print each dictionary using the format string operator.

print message % Chandler 

print message % Ross 

print message % Monica
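Putting the pieces together, here is a self-contained version of the example above in Python 3 syntax (print as a function; the format string and data are unchanged from the text):

```python
# One format string, applied to each friend's dictionary in turn.
message = "%(Name)s, %(Occupation)s, %(Dating)s, %(Income)2.2f"

friends = [
    {"Name": "Chandler", "Occupation": "Buyer",
     "Dating": "Monica", "Income": 65000},
    {"Name": "Ross", "Occupation": "Scientist Museum Dude",
     "Dating": "Rachael", "Income": 70000},
    {"Name": "Monica", "Occupation": "Chef",
     "Dating": "Chandler", "Income": 40000},
]

for friend in friends:
    print(message % friend)
# -> Chandler, Buyer, Monica, 65000.00
# -> Ross, Scientist Museum Dude, Rachael, 70000.00
# -> Monica, Chef, Chandler, 40000.00
```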

To generate your form letter and print it out to the screen, you first create a format string called dialog.

dialog = """ 

Hi %(Name)s, 

How are you doing? How is %(Dating)s? 

Are you still seeing %(Dating)s? 

How is work at the office? 

I bet it is hard being a %(Occupation)s. 

I know I could not do it. 
"""

Then you print out each dictionary using the dialog format string with the % format string operator.

print dialog % Ross 

print dialog % Chandler 

print dialog % Monica

The output is

Hi Ross, 

How are you doing? How is Rachael? 

Are you still seeing Rachael? 

How is work at the office? 

I bet it is hard being a Scientist Museum Dude. 

I know I could not do it. 

Hi Chandler, 

How are you doing? How is Monica? 

Are you still seeing Monica? 

How is work at the office? 

I bet it is hard being a Buyer. 

I know I could not do it. 

Hi Monica, 

How are you doing? How is Chandler? 

Are you still seeing Chandler? 

How is work at the office? 

I bet it is hard being a Chef. 

I know I could not do it.

%(Income)d is a useful, flexible feature. You just saw how much time it can save you in writing form letters. Imagine what it can do for writing reports.

[Sep 16, 2017] Is Python Really the Fastest-Growing Programming Language?


Posted by EditorDavid on Saturday September 09, 2017 @09:10PM from the is-simple-better-than-complex? dept. An anonymous reader quotes the Stack Overflow Blog: In this post, we'll explore the extraordinary growth of the Python programming language in the last five years, as seen by Stack Overflow traffic within high-income countries.

The term "fastest-growing" can be hard to define precisely, but we make the case that Python has a solid claim to being the fastest-growing major programming language ... June 2017 was the first month that Python was the most visited [programming language] tag on Stack Overflow within high-income nations. This included being the most visited tag within the US and the UK, and in the top 2 in almost all other high income nations (next to either Java or JavaScript). This is especially impressive because in 2012, it was less visited than any of the other 5 languages, and has grown by 2.5-fold in that time .

Part of this is because of the seasonal nature of traffic to Java. Since it's heavily taught in undergraduate courses, Java traffic tends to rise during the fall and spring and drop during the summer. Does Python show a similar growth in the rest of the world, in countries like India, Brazil, Russia and China? Indeed it does.

Outside of high-income countries Python is still the fastest growing major programming language; it simply started at a lower level and the growth began two years later (in 2014 rather than 2012). In fact, the year-over-year growth rate of Python in non-high-income countries is slightly higher than it is in high-income countries ...

We're not looking to contribute to any "language war." The number of users of a language doesn't imply anything about its quality, and certainly can't tell you which language is more appropriate for a particular situation.

With that perspective in mind, however, we believe it's worth understanding what languages make up the developer ecosystem, and how that ecosystem might be changing. This post demonstrated that Python has shown a surprising growth in the last five years, especially within high-income countries.

The post was written by Stack Overflow data scientist David Robinson, who notes that "I used to program primarily in Python, though I have since switched entirely to R."

[Sep 03, 2017] Python Conquers The Universe

Notable quotes:
"... If you press ENTER without entering anything, pdb will re-execute the last command that you gave it. ..."
"... When you use "s" to step into subroutines, you will often find yourself trapped in a subroutine. You have examined the code that you're interested in, but now you have to step through a lot of uninteresting code in the subroutine. ..."
"... In this situation, what you'd like to be able to do is just to skip ahead to the end of the subroutine. That is, you want to do something like the "c" ("continue") command does, but you want just to continue to the end of the subroutine, and then resume your stepping through the code. ..."
"... You can do it. The command to do it is "r" (for "return" or, better, "continue until return"). If you are in a subroutine and you enter the "r" command at the (Pdb) prompt, pdb will continue executing until the end of the subroutine. At that point -- the point when it is ready to return to the calling routine -- it will stop and show the (Pdb) prompt again, and you can resume stepping through your code. ..."
Debugging in Python Posted on 2009/09/10 by Steve Ferg As a programmer, one of the first things that you need for serious program development is a debugger.

Python has a debugger, which is available as a module called pdb (for "Python DeBugger", naturally!). Unfortunately, most discussions of pdb are not very useful to a Python newbie -- most are very terse and simply rehash the description of pdb in the Python library reference manual. The discussion that I have found most accessible is in the first four pages of Chapter 27 of the Python 2.1 Bible.

So here is my own personal gentle introduction to using pdb. It assumes that you are not using any IDE -- that you're coding Python with a text editor and running your Python programs from the command line.

Getting started -- pdb.set_trace()

To start, I'll show you the very simplest way to use the Python debugger.

1. Let's start with a simple program,

# -- experiment with the Python debugger, pdb

a = "aaa"

b = "bbb"

c = "ccc"

final = a + b + c

print final

2. Insert the following statement at the beginning of your Python program. This statement imports the Python debugger module, pdb.

import pdb

3. Now find a spot where you would like tracing to begin, and insert the following code:

pdb.set_trace()
So now your program looks like this.

# -- experiment with the Python debugger, pdb

import pdb

a = "aaa"


b = "bbb"

c = "ccc"

final = a + b + c

print final

4. Now run your program from the command line as you usually do, which will probably look something like this:

PROMPT> python

When your program encounters the line with pdb.set_trace() it will start tracing. That is, it will (1) stop, (2) display the "current statement" (that is, the line that will execute next) and (3) wait for your input. You will see the pdb prompt, which looks like this:

(Pdb)
Execute the next statement with "n" (next)

At the (Pdb) prompt, press the lower-case letter "n" (for "next") on your keyboard, and then press the ENTER key. This will tell pdb to execute the current statement. Keep doing this -- pressing "n", then ENTER.

Eventually you will come to the end of your program, and it will terminate and return you to the normal command prompt.

Congratulations! You've just done your first debugging run!

Repeating the last debugging command with ENTER

This time, do the same thing as you did before. Start your program running. At the (Pdb) prompt, press the lower-case letter "n" (for "next") on your keyboard, and then press the ENTER key.

But this time, after the first time that you press "n" and then ENTER, don't do it any more. Instead, when you see the (Pdb) prompt, just press ENTER. You will notice that pdb continues, just as if you had pressed "n". So this is Handy Tip #1:

If you press ENTER without entering anything, pdb will re-execute the last command that you gave it.

In this case, the command was "n", so you could just keep stepping through the program by pressing ENTER.

Notice that as you passed the last line (the line with the "print" statement), it was executed and you saw the output of the print statement ("aaabbbccc") displayed on your screen.

Quitting it all with "q" (quit)

The debugger can do all sorts of things, some of which you may find totally mystifying. So the most important thing to learn now, before you learn anything else, is how to quit debugging!

It is easy. When you see the (Pdb) prompt, just press "q" (for "quit") and the ENTER key. Pdb will quit and you will be back at your command prompt. Try it, and see how it works.

Printing the value of variables with "p" (print)

The most useful thing you can do at the (Pdb) prompt is to print the value of a variable. Here's how to do it.

When you see the (Pdb) prompt, enter "p" (for "print") followed by the name of the variable you want to print. And of course, you end by pressing the ENTER key.

Note that you can print multiple variables, by separating their names with commas (just as in a regular Python "print" statement). For example, you can print the value of the variables a, b, and c this way:

p a, b, c
When does pdb display a line?

Suppose you have progressed through the program until you see the line

final = a + b + c

and you give pdb the command

p final

You will get a NameError exception. This is because, although you are seeing the line, it has not yet executed. So the final variable has not yet been created.

Now press "n" and ENTER to continue and execute the line. Then try the "p final" command again. This time, when you give the command "p final", pdb will print the value of final , which is "aaabbbccc".

Turning off the (Pdb) prompt with "c" (continue)

You probably noticed that the "q" command got you out of pdb in a very crude way: basically, by crashing the program.

If you wish simply to stop debugging, but to let the program continue running, then you want to use the "c" (for "continue") command at the (Pdb) prompt. This will cause your program to continue running normally, without pausing for debugging. It may run to completion. Or, if the pdb.set_trace() statement was inside a loop, you may encounter it again, and the (Pdb) debugging prompt will appear once more.

Seeing where you are with "l" (list)

As you are debugging, there is a lot of stuff being written to the screen, and it gets really hard to get a feeling for where you are in your program. That's where the "l" (for "list") command comes in. (Note that it is a lower-case "L", not the numeral "one" or the capital letter "I".)

"l" shows you, on the screen, the general area of your program's source code that you are executing. By default, it lists 11 (eleven) lines of code. The line of code that you are about to execute (the "current line") is right in the middle, and there is a little arrow "->" that points to it.

So a typical interaction with pdb might go like this:
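Here is an illustrative transcript (file name and line numbers are hypothetical, assuming pdb.set_trace() was inserted after the a = "aaa" line as described above):

```text
(Pdb) l
  1     # -- experiment with the Python debugger, pdb
  2     import pdb
  3     a = "aaa"
  4     pdb.set_trace()
  5  -> b = "bbb"
  6     c = "ccc"
  7     final = a + b + c
  8     print final
[EOF]
(Pdb)
```

The arrow shows that b = "bbb" is the next statement to execute, with the surrounding lines giving you your bearings.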

Stepping into subroutines with "s" (step into)

Eventually, you will need to debug larger programs: programs that use subroutines. And sometimes, the problem that you're trying to find will lie buried in a subroutine. Consider the following program.

# -- experiment with the Python debugger, pdb

import pdb

def combine(s1,s2):      # define subroutine combine, which...

    s3 = s1 + s2 + s1    # sandwiches s2 between copies of s1, ...

    s3 = '"' + s3 +'"'   # encloses it in double quotes,...

    return s3            # and returns it.

a = "aaa"

pdb.set_trace()

b = "bbb"

c = "ccc"

final = combine(a,b)

print final

As you move through your program by using the "n" command at the (Pdb) prompt, you will find that when you encounter a statement that invokes a subroutine, such as the final = combine(a,b) statement, pdb treats it no differently than any other statement. That is, the statement is executed and you move on to the next statement, in this case to print final.

But suppose you suspect that there is a problem in a subroutine. In our case, suppose you suspect that there is a problem in the combine subroutine. What you want, when you encounter the final = combine(a,b) statement, is some way to step into the combine subroutine, and to continue your debugging inside it.

Well, you can do that too. Do it with the "s" (for "step into") command.

When you execute statements that do not involve function calls, "n" and "s" do the same thing: move on to the next statement. But when you execute statements that invoke functions, "s", unlike "n", will step into the subroutine. In our case, if you executed the

final = combine(a,b)

statement using "s", then the next statement that pdb would show you would be the first statement in the combine subroutine:

def combine(s1,s2):

and you will continue debugging from there.
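Outside the debugger, it also helps to know what combine should return, so you can recognize a wrong value while stepping through it. A quick standalone check, reusing the subroutine from the listing above (Python 3 print syntax):

```python
# The combine subroutine from the example program above.
def combine(s1, s2):
    s3 = s1 + s2 + s1    # sandwich s2 between copies of s1
    s3 = '"' + s3 + '"'  # enclose the result in double quotes
    return s3

print(combine("aaa", "bbb"))  # -> "aaabbbaaa" (including the quotes)
```

With the expected value in hand, a "p s3" inside the subroutine immediately tells you whether the sandwiching or the quoting step went wrong.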

Continuing but just to the end of the current subroutine with "r" (return)

When you use "s" to step into subroutines, you will often find yourself trapped in a subroutine. You have examined the code that you're interested in, but now you have to step through a lot of uninteresting code in the subroutine.

In this situation, what you'd like to be able to do is just to skip ahead to the end of the subroutine. That is, you want to do something like the "c" ("continue") command does, but you want just to continue to the end of the subroutine, and then resume your stepping through the code.

You can do it. The command to do it is "r" (for "return" or, better, "continue until return"). If you are in a subroutine and you enter the "r" command at the (Pdb) prompt, pdb will continue executing until the end of the subroutine. At that point, when it is ready to return to the calling routine, it will stop and show the (Pdb) prompt again, and you can resume stepping through your code.

You can do anything at all at the (Pdb) prompt

Sometimes you will be in the following situation: you think you've discovered the problem. The statement that was assigning a value of, say, "aaa" to variable var1 was wrong, and was causing your program to blow up. It should have been assigning the value "bbb" to var1.

At least, you're pretty sure that was the problem.

What you'd really like to be able to do, now that you've located the problem, is to assign "bbb" to var1, and see if your program now runs to completion without bombing.

It can be done!

One of the nice things about the (Pdb) prompt is that you can do anything at it: you can enter any command that you like at the (Pdb) prompt. So you can, for instance, enter this command at the (Pdb) prompt.

(Pdb) var1 = "bbb"

You can then continue to step through the program. Or you could be adventurous: use "c" to turn off debugging, and see if your program will end without bombing!

But be a little careful!

[Thanks to Dick Morris for the information in this section.]

Since you can do anything at all at the (Pdb) prompt, you might decide to try setting the variable b to a new value, say "BBB", this way:

(Pdb) b = "BBB"

If you do, pdb produces a strange error message about being unable to find an object named '= "BBB" '. Why???

What happens is that pdb attempts to execute the pdb command for setting and listing breakpoints (a command that we haven't discussed). It interprets the rest of the line as an argument to the command, and can't find the object that (it thinks) is being referred to. So it produces an error message.

So how can we assign a new value to b? The trick is to start the command with an exclamation point (!).

(Pdb)!b = "BBB"

An exclamation point tells pdb that what follows is a Python statement, not a pdb command.

The End

Well, that's all for now. There are a number of topics that I haven't mentioned, such as help, aliases, and breakpoints. For information about them, try the online reference for pdb commands on the Python documentation web site. In addition, I recommend Jeremy Jones' article Interactive Debugging in Python in O'Reilly's Python DevCenter.

I hope that this introduction to pdb has been enough to get you up and running fairly quickly and painlessly. Good luck!

-- Steve Ferg


[Dec 26, 2016] PyCharm - The Best Linux Python IDE

by Gary Newell, updated September 23, 2016

Introduction

In this guide I will introduce you to the PyCharm integrated development environment which can be used to develop professional applications using the Python programming language.

Python is a great programming language because it is truly cross platform and can be used to develop a single application which will run on Windows, Linux and Mac computers without having to recompile any code.

PyCharm is an editor and debugger developed by JetBrains, the same company that developed ReSharper, a great tool used by Windows developers for refactoring code and making their lives easier when writing .NET code. Many of the principles of ReSharper have been added to the professional version of PyCharm.

How To Install PyCharm

I have written a guide showing how to get PyCharm, download it, extract the files and run it.

Simply click this link.

The Welcome Screen

When you first run PyCharm or when you close a project you will be presented with a screen showing a list of recent projects.

You will also see the following menu options:

There is also a configure settings option which lets you set up the default Python version and other such settings.

Creating A New Project

When you choose to create a new project you are provided with a list of possible project types as follows:

This isn't a programming tutorial, so I won't be listing what all of those project types are. If you want to create a basic desktop application which will run on Windows, Linux and Mac, then you can choose a Pure Python project and use the Qt libraries to develop graphical applications which look native to the operating system they are running on, regardless of where they were developed.

As well as choosing the project type, you can enter a name for your project and choose the version of Python to develop against.

Open A Project

You can open a project by clicking on the name within the recently opened projects list or you can click the open button and navigate to the folder where the project you wish to open is located.

Checking Out From Source Control

PyCharm provides the option to check out project code from various online resources including GitHub, CVS, Git, Mercurial and Subversion.

The PyCharm IDE

The PyCharm IDE starts with a menu at the top and underneath this you have tabs for each open project.

On the right side of the screen are debugging options for stepping through code.

The left pane has a list of project files and external libraries.

To add a file you right-click on the project name and choose "new". You then get the option to add one of the following file types:

When you add a file, such as a Python file, you can start typing into the editor in the right panel.

The text is all colour coded, with bold text where appropriate. A vertical line shows the indentation so you can be sure that you are tabbing correctly.

The editor also includes full IntelliSense-style completion, which means that as you start typing the names of libraries or recognised commands, you can complete them by pressing TAB.

Debugging The Application

You can debug your application at any point by using the debugging options in the top right corner.

If you are developing a graphical application then you can simply press the green button to run the application. You can also press shift and F10.

To debug the application you can either click the button next to the green arrow or press shift and F9. You can place breakpoints in the code so that the program stops on a given line by clicking in the grey margin on the line you wish to break at.

To make a single step forward you can press F8 which steps over the code. This means it will run the code but it won't step into a function. To step into the function you would press F7. If you are in a function and want to step out to the calling function press shift and F8.

At the bottom of the screen whilst you are debugging you will see various windows such as a list of processes and threads, and variables that you are watching the values for.

As you are stepping through code you can add a watch on a variable so that you can see when the value changes.

Another great option is to run the code with the coverage checker. The programming world has changed a lot over the years, and it is now common for developers to perform test-driven development so that, with every change they make, they can check that they haven't broken another part of the system.

The coverage checker actually helps you to run the program, perform some tests and then when you have finished it will tell you how much of the code was covered as a percentage during your test run.

There is also a tool for showing the name of a method or class, how many times the items were called, and how long was spent in that particular piece of code.

Code Refactoring

A really powerful feature of PyCharm is the code refactoring option.

When you start to develop code little marks will appear in the right margin. If you type something which is likely to cause an error or just isn't written well then PyCharm will place a coloured marker.

Clicking on the coloured marker will tell you the issue and will offer a solution.

For example, if you have an import statement which imports a library and then don't use anything from that library, not only will the code turn grey, but the marker will state that the library is unused.

Other errors that will appear are for good coding such as only having one blank line between an import statement and the start of a function. You will also be told when you have created a function that isn't in lowercase.

You don't have to abide by all of the PyCharm rules. Many of them are just good coding guidelines and have nothing to do with whether the code will run or not.

The code menu has other refactoring options. For example, you can perform code cleanup and you can inspect a file or project for issues.


PyCharm is a great editor for developing Python code in Linux and there are two versions available. The community version is for the casual developer whereas the professional environment provides all the tools a developer could need for creating professional software.

[Dec 26, 2016] Python 3.6 Released


Posted by EditorDavid on Saturday December 24, 2016 @10:34AM from the batteries-included dept.

On Friday, more than a year after Python 3.5, core developers Elvis Pranskevichus and Yury Selivanov announced the release of version 3.6.

An anonymous reader writes:

InfoWorld describes the changes as async in more places, speed and memory usage improvements, and pluggable support for JITs, tracers, and debuggers. "Python 3.6 also provides support for DTrace and SystemTap, brings a secrets module to the standard library [to generate authentication tokens], introduces new string and number formats, and adds type annotations for variables. It also gives us easier methods to customize the creation of subclasses."
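Several of those 3.6 additions fit in one short snippet; the names below are all standard library, but treat this as an illustrative sketch rather than a complete tour of the release:

```python
import secrets

# secrets: cryptographically strong tokens for authentication
token = secrets.token_hex(16)        # 32 hex characters

# Underscore digit grouping and f-strings, both new in 3.6
count = 1_000_000
message = f"generated a {len(token)}-character token"

# Variable-level type annotations (PEP 526)
limit: int = 10

print(message)  # -> generated a 32-character token
```

None of this runs on Python 3.5 or earlier, which is a quick way to see how much surface area the release actually added.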

You can read Slashdot's interview with Python creator Guido van Rossum from 2013.

I also remember an interview this July where Perl creator Larry Wall called Python "a pretty okay first language, with a tendency towards style enforcement, monoculture, and group-think...

more interested in giving you one adequate way to do something than it is in giving you a workshop that you, the programmer, get to choose the best tool from."

Anyone want to share their thoughts today about the future of Python?

[Dec 29, 2015] How to install Python3 on CentOS - Ask Xmodulo by Dan Nanni


Method Two: Install Python3 from EPEL Repository

The latest EPEL 7 repository offers python3 (python 3.4 to be exact). Thus if you are using CentOS 7 or later, you can easily install python3 by enabling the EPEL repository as follows.

$ sudo yum install epel-release

Then install python 3.4 and its libraries using yum:

$ sudo yum install python34

Note that this will not install a matching pip. To install pip and setuptools, you need to install them separately as follows.

$ curl -O
$ sudo /usr/bin/python3.4

Method Three: Install Python3 from Software Collections (SCL)

Another way to install python3 is via enabling Software Collections (SCL) repository. The SCL repository is available for CentOS 6.5 or later, and the latest SCL offers python 3.3. Once you enable the SCL repository, go ahead and install python3 as follows.

$ sudo yum install python33

To use python3 from the SCL, you need to enable python3 on a per-command basis as follows.

$ scl enable python33 <command>

You can also invoke a bash shell with python3 enabled as the default Python interpreter:

$ scl enable python33 bash
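Once inside such a shell, a one-liner confirms which interpreter is actually active (this works with any Python version):

```python
import sys

# Print the version of the interpreter running this code; under an
# `scl enable python33 ...` shell this should report a 3.3.x version.
print(sys.version.split()[0])
```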

[Dec 12, 2015] 11 New Open Source Development Tools



Short for "Yet Another Python Formatter," YAPF reformats Python code so that it conforms to the style guide and looks good. It's a Google-owned project. Operating System: OS Independent

Python Execute Unix - Linux Command Examples

The os.system function has many problems, and the subprocess module is a much better way to execute Unix commands. The syntax is:
import subprocess"command1")["command1", "arg1", "arg2"])

In this example, execute the date command:

import subprocess"date")

Sample outputs:

Sat Nov 10 00:59:42 IST 2012

You can pass arguments using the following syntax, i.e., to run the ls -l /etc/resolv.conf command:

import subprocess["ls", "-l", "/etc/resolv.conf"])

Sample outputs:

-rw-r--r-- 1 root root 157 Nov  7 15:06 /etc/resolv.conf
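If you need the command's output in a variable rather than printed to the screen, subprocess can capture it too; a minimal sketch using the stock echo command:

```python
import subprocess

# check_output runs the command and returns its stdout as bytes;
# it raises subprocess.CalledProcessError on a non-zero exit status,
# instead of failing silently the way os.system can.
out = subprocess.check_output(["echo", "hello"])
print(out.decode().strip())  # -> hello
```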

[Aug 20, 2015] Fluent Python

Paperback: 770 pages
Publisher: O'Reilly Media; 1 edition (August 20, 2015)
Language: English
ISBN-10: 1491946008
ISBN-13: 978-1491946008
Product Dimensions: 9.2 x 7 x 1.4 inches

Jascha Casadio on October 30, 2015

An excellent text covering very advanced Python features.

Among the books that are currently populating my Goodreads wishlist are no less than 20 titles dedicated to the Python language. They range from Django up to pandas, passing through Twisted and Test-Driven Development. Time is limited, so they often end up waiting in queue for months. But when I saw Fluent Python on that shelf I had to make it mine immediately and put it in front of that queue. Getting through this book took me several months, not only because we are talking about some 700 good pages, but mostly due to the fact that it covers advanced topics that most of the Pythonists currently living on planet Earth never heard of in their life. Fluent Python is one of those books that you must taste little by little or you get devoured by those fierce topics and examples.

Released late this summer, Fluent Python is the latest work of Ramalho, a name that should sound familiar to those that have already been diving deeply into, allow me the term, Python's high-end features: powerful things, such as coroutines, that most developers never heard of in their life. Those that did probably hope never to be tested on them during a job interview. And that's pretty much what the book is all about. Neither style nor the basics of the language, but very advanced features. Quite a rare book indeed, since almost all of the Python books available introduce the readers to the language and don't get past object-oriented programming.

An excellent text overall, no doubts. Not for the faint of heart. Still, I am a bit puzzled by the fact that some chapters look extremely simple, while others cover quirks and intricacies that you can probably live without, unless you dare touch the very core of the language, and that get you to reach the end of a chapter with that what-the-hell expression on your face. The chapter covering abstract classes is an example of the former. Don't get me wrong, it's interesting and the examples well laid out. Still, it looks like a basic concept that doesn't fit this kind of book.

A couple of words on the examples: they are throughout the whole book well done. The author often presents the same concepts in different flavors or does work on the same example and improves it as concepts are taken into the discussion. The code is intense but easy to follow. Key lines are extensively explained later on, so that the reader won't miss that specific features that makes it all possible. There are so many gems that you will probably end up writing most of that code down to make it yours. This is actually the best thing the reader can do. Try it, modify it, assimilate it, master it.

Among the many topics covered there are two that are worth mentioning: the first is chapter four, which covers strings, Unicode and bytes. Marvelous, simply marvelous. The examples, the explanations. So clear and to the point. You definitely get away from it with a deep understanding of how strings work in Python 2.7 and 3.

The second is the one dedicated to futures. Actually it's a whole topic, which spans several chapters at the very end of the book. The author shows how working with threads and subprocesses improves the efficiency of an application, and how easy it is to exploit them through the futures that are now available in the language. He gives us a very interesting example in many different flavors, showing us how the code and performance change. Great.

Decorators and closures are also well described, even if not as good as the aforementioned topics. In that sense, the author does complement what we find about the subject in Effective Python: 59 Specific Ways to Write Better Python, another must have for any serious Pythonist.

Overall, a great Python book. A must have for any Python developer interested in getting the most out of the language.

As usual, you can find more reviews on my personal blog: . Feel free to pass by and share your thoughts!

[Dec 04, 2014] Introducing Python Modern Computing in Simple Packages by Bill Lubanovic

This is a good intro book that covers Python 3. An excellent book for beginners. The opening chapters of this book provide a very good overview of Python syntax, methods and structures, delivered with wit. Examples range from simple to quite complex.
Notable quotes:
"... The author didn't do a good job in wrapping the teaching around practical examples for a huge portion of the book, but rather, resorted to the lazy approach many coding-book authors take: plop some super-synthetic code on the page and explain it. They have no connection to reality. ..."

Paperback: 478 pages
Publisher: O'Reilly Media; 1 edition (December 4, 2014)
Language: English
ISBN-10: 1449359361
ISBN-13: 978-1449359362
Product Dimensions: 7 x 1 x 9.2 inches

About the Author

Bill Lubanovic has developed software with UNIX since 1977, GUIs since 1981, databases since 1990, and the Web since 1993.

At a startup named Intran in 1982, he developed MetaForm -- one of the first commercial GUIs (before the Mac or Windows), on one of the first graphic workstations. At Northwest Airlines in the early 1990s, he wrote a graphic yield management system that generated millions of dollars in revenue; got the company on the Internet; and wrote its first Internet marketing test. He co-founded an ISP (Tela) in 1994, and a web development company (Mad Scheme) in 1999.

Recently, he developed core services and distributed systems with a remote team for a Manhattan startup. Currently, he's integrating OpenStack services for a supercomputer company.

He enjoys life in Minnesota with his wonderful wife Mary, children Tom and Karin, and cats Inga, Chester, and Lucy.

By TS. on August 23, 2015

another book with useless synthetic examples

The little code snippets that the author provides are useful to understand the commands, but without putting it in the context of some useful practical code, it makes the book very dry. Although I understood most of the concepts from the snippets, I was dragging myself through the book since it was so boring to read.

The author didn't do a good job in wrapping the teaching around practical examples for a huge portion of the book, but rather, resorted to the lazy approach many coding-book authors take: plop some super-synthetic code on the page and explain it. They have no connection to reality.

The only reason I am not giving a single star is because the book was cheap and I learned "some" stuff. But these are all the wrong reasons to like a book for!

Alfredzo Nash on March 4, 2015

An Edible Introduction to Python

Reviewed by Alfredzo Nash, Fairfield County Area Datto Linux User Group.

"Introducing Python" by Bill Lubanovic is an edible recipe for learning Python. I had limited exposure to Python (I completed 50% of the Codecademy training) prior to reading "Introducing Python". Lubanovic is an iron chef who has written this delectable book.

Beginning with Chapters 1-5 (1. A Taste of Python, 2. Python Ingredients, 3. PyFilling, 4. PyCrust, and 5. PyBoxes), the reader starts to build their mental palate using a small cookbook style dialog for making an actual pie! This was very informative and one of the best analogies within the book. Lubanovic skillfully separates the importance of data structure such as list, dictionaries, tuples, and set from code structure (commenting, if elif else, utf-8 encoding, pep, etc.) which allowed me to become familiar with the essential building blocks of the Python language. "Introducing Python" is a very definitive guide to what makes Python a valuable and powerful language.

Since Python 2.7's end-of-life is scheduled in 2020, Lubanovic encouraged Pythonistas to begin writing Python code using Python 3 instead of 2.7. Being relatively new to the language, I couldn't resist the urge to discover more about the differences between the two versions. At the time of this writing, there are still subtle differences between Python 2.7 and Python 3. The changes include how to call the print() function, and the handling of Unicode characters. Most of the differences between version 2.7 and 3.0 can be found at

Unfortunately, Chapters 6-9 (6. Oh Oh: Objects and Classes, 7. Mangle Data Like a Pro, 8. Data Has to Go Somewhere, 9. The Web Untangled) proved difficult for me to comprehend due to my inexperience. Lubanovic attempts to ease the reader into this section skillfully in Chapter 6, with analogies such as "In Chapter 1, I compare an object to a plastic box. A class is like the mold that makes that box." I couldn't agree with this analogy as I had to learn the complex object-oriented terminology, like polymorphism, instantiation and inheritance, discussed throughout this chapter.

Chapters 7-8 put "Introducing Python" into high gear. I did not take the term "Pro" lightly within this section. Essentially, I now understand why data scientists, forensic analysts, cloud engineers, and automation engineers consider Python for their projects. Data can take many forms that include text strings, ASCII or binary. These types of data need to be written, read, encoded, decoded and stored properly. Lubanovic clearly conveyed the message that data should be handled with precision. UTF-8 encoding, byte arrays, and matching regular expressions all took some time for me to understand. There are tons of options that include storing data between RAM, CSVs, JSON, or SQL and NoSQL databases. Both mangling and storing data require a certain level of mastery that has convinced me to keep Introducing Python as a reminder to sharpen my skills in this area of expertise.

Chapters 9-12 (9. The Web, Untangled, 10. Systems, 11. Concurrency and Networks, 12. Be a Pythonista) are my favorite chapters in Introducing Python. Chapter 9 explains how Python's standard Web libraries handle the various components of the Web. It describes mainly the http and urllib packages, but also makes notable references to web frameworks such as Bottle and Flask. Chapter 10 dives into how Python handles files, directories, processes and time using the os (operating system) module. The os module contains functions such as chown(), remove(), mkdir() and rmdir() (file copying lives in the companion shutil module) that operate the same way as their Linux/UNIX counterparts.
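Those os calls can be exercised in a few lines (a quick illustrative snippet, not taken from the book):

```python
import os
import tempfile

# Create a scratch directory, then make and remove a
# subdirectory in it using plain os calls.
base = tempfile.mkdtemp()
sub = os.path.join(base, "sub")
os.mkdir(sub)
print(os.path.isdir(sub))  # -> True
os.rmdir(sub)
os.rmdir(base)
```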

Chapter 11 reinforces Chapter 10's section on process management via the os module and using multiprocessing. Diving deeper into the concurrency standard library within Python, Lubanovic discusses the differences between queues, processes, and threads, as well as the tools to manage them on a single machine or multiple machines. Notably, Redis, gevent, asyncio, and Twisted are mentioned as additional tools that handle concurrency within Python as well. In the Networks section, Lubanovic explores the plumbing provided by the numerous Internet services supported within Python. TCP/IP, sockets, Web APIs (fabric), RPCs and message brokers (RabbitMQ/ZeroMQ) are all covered in great detail. This chapter is an invaluable resource for anyone interested in gaining a more in-depth knowledge of how to use Python to manage concurrency across distributed systems and networks.

Chapter 12 sets the example for ALL Python users by encouraging them to "Be A Pythonista". Noticeably, being a Pythonista is no small task. The requirements consist of: finding code, installing packages, documenting your code, testing your code, optimizing your code, and managing your code via source control. Whew... Humbly, I accept this challenge and hope to become a proficient Pythonista, and I encourage everyone interested in Python to do the same.

In closing, Introducing Python was a remarkable read for me. I recommend studying this book and holding onto it for future reference in real-world scenarios.

[Feb 09, 2014] build - How to package Python as RPM for install into -opt

Stack Overflow

Q: How to create a binary RPM package out of Python 2.7.2 sources for installation into a non-standard prefix such as /opt/python27?

Assume the following builds correctly.

tar zxvf Python-2.7.2.tgz
cd Python-2.7.2
./configure --prefix=/opt/python27 --enable-shared
make test
sudo make install

Instead of the last command I'd like to build a binary RPM.


RPMs are built using rpmbuild from a .spec file. As an example, look at python.spec from Fedora.

If you don't need to build from sources then try rpm's --relocate switch on a pre-built RPM for your distribution:

rpm -i --relocate /usr=/opt/python27 python-2.7.rpm

Python 2.7 RPMs - Gitorious

Python 2.7 RPMs

Port of the Fedora 15 Python 2.7 RPM and some of the related stack to build on RHEL 5 & 6 (and derivatives such as CentOS). Can be installed in parallel to the system Python packages.

[Feb 09, 2014] Building and Installing Python 2.7 RPMs on CentOS 5.7 by Nathan Milford

I was asked today to install Python 2.7 on a CentOS based node, and I thought I'd take this opportunity to add a companion article to my Python 2.6 article.

We're all well aware that CentOS is pretty backwards when it comes to having the latest and greatest software packages, and is particularly finicky when it comes to Python since so much of RHEL depends on it.

As a rule, I refuse to rush in and install anything in production that isn't in a manageable package format such as RPM. I need to be able to predictably reproduce software installs across a large number of nodes.

The following steps will not clobber your default Python 2.4 install and will keep both CentOS and your developers happy.

So, here we go.

Install the dependencies.

sudo yum -y install rpmdevtools tk-devel tcl-devel expat-devel db4-devel \
                    gdbm-devel sqlite-devel bzip2-devel openssl-devel \
                    ncurses-devel readline-devel

Set up your RPM build environment.


Grab my spec file.

wget \
     -O ~/rpmbuild/SPECS/python27-2.7.2.spec 
wget \
     -O ~/rpmbuild/SOURCES/Python-2.7.2.tar.bz2

Build the RPM. (FYI, the QA_RPATHS variable tells rpmbuild to skip certain RPATH/file path errors.)

QA_RPATHS=$[ 0x0001|0x0010 ] rpmbuild -bb ~/rpmbuild/SPECS/python27-2.7.2.spec

Install the RPMs.

sudo rpm -Uvh ~/rpmbuild/RPMS/x86_64/python27*.rpm

Now on to setuptools.

Grab my spec file.

wget \
     -O ~/rpmbuild/SPECS/python27-setuptools-0.6c11.spec 

Grab the source.

wget \
     -O ~/rpmbuild/SOURCES/setuptools-0.6c11.tar.gz

Build the RPMs.

rpmbuild -bb ~/rpmbuild/SPECS/python27-setuptools-0.6c11.spec

Install the RPMs.

sudo rpm -Uvh ~/rpmbuild/RPMS/noarch/python27-setuptools-0.6c11-milford.noarch.rpm

Now, we'll install MySQL-python as an example.

Grab the mysql-dev package

yum -y install mysql-devel

Grab, build and install the MySQL-python package.

curl | tar zxv
cd MySQL-python-1.2.3
python2.7 build
python2.7 install

As with the previous Python 2.6 article, note how I called the setup script explicitly using the python2.7 binary: /usr/bin/python2.7

Now we're good to give it the old test thus:

python2.7 -c "import MySQLdb"

If it doesn't puke out some error message, you're all set.
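That import test can also be scripted. Below is a hedged sketch (it uses importlib from modern Python 3 rather than the 2.7 build discussed above, and the helper name is invented) that reports whether a module could be imported without actually loading it; MySQLdb will only show up after the build steps above:

```python
import importlib.util

def module_available(name):
    """Return True if 'name' could be imported, without importing it."""
    return importlib.util.find_spec(name) is not None

# 'MySQLdb' only exists after the build above; 'sqlite3' ships with Python.
for mod in ('MySQLdb', 'sqlite3'):
    print('%-8s %s' % (mod, 'ok' if module_available(mod) else 'MISSING'))
```

Unlike `python2.7 -c "import MySQLdb"`, this check does not run the module's top-level code, so it is safe to call from a larger health-check script.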

Happy pythoning.

List of Python software


[Nov 01, 2012] Python for Data Analysis

Paperback: 466 pages
Publisher: O'Reilly Media; 1 edition (November 1, 2012)
Language: English
ISBN-10: 1449319793
ISBN-13: 978-1449319793
Product Dimensions: 7 x 0.9 x 9.2 inches

R. Friesel Jr. on October 22, 2012

dive into pandas and NumPy

Wes McKinney's "Python for Data Analysis" (O'Reilly, 2012) is a tour of pandas and NumPy (mostly pandas) for folks looking to crunch "big-ish" data with Python. The target audience is not Pythonistas, but rather scientists, educators, statisticians, financial analysts, and the rest of the "non-programmer" cohort that is finding more and more these days that it needs to do a little bit-sifting to get the rest of their jobs done.

First, two warnings:

1. **This book is not an introduction to Python.** While McKinney does not assume that you know *any* Python, he isn't exactly going to hold your hand on the language here. There is an appendix ("Python Language Essentials") that beginners will want to read before getting too far, but otherwise you're on your own. ("Lucky for you Python is executable pseudocode"?)

2. **This book is not about theories of data analysis.** What I mean by that is: if you're looking for a book that is going to tell you the *types* of analyses to do, this is not that book. McKinney assumes that you already know, through your "actual" training, what kinds of analyses you need to perform on your data, and how to go about the computations necessary for those analyses.

That being said: McKinney is the principal author on pandas, a Python package for doing data transformation and statistical analysis. The book is largely about pandas (and NumPy), offering overviews of the utilities in these packages, and concrete examples on how to employ them to great effect. In examining these libraries, McKinney also delves into general methodologies for munging data and performing analytical operations on them (e.g., normalizing messy data and turning it into graphs and tables). McKinney also delves into some (semi) esoteric information about how Python works at very low levels and ways to optimize data structures so that you can get maximum performance from your programs. McKinney is clearly knowledgeable about these libraries, about Python, and about using those tools effectively in analytical software.

So where do I land on "Python for Data Analysis"? If you're looking for a book that discusses data analysis in a broad sense, or one that pays special attention to the theory, this isn't that book. If you're looking for a generalist's book on Python--also not this book. However, if you've already selected Python as your analytical tool (and it sounds like it's more/less the de facto analytical tool in many circles) then this just might be the perfect book for you.

[Oct 21, 2012] Last File Manager

Written in Python. The last version is LFM 2.3, dated May 2011; the codebase is below 10K lines.
Lfm is a curses-based file manager for the Unix console, written in Python.

21 May 2011

Python 2.5 or later is required now. PowerCLI was added, an advanced command line interface with completion, persistent history, variable substitution, and many other useful features.

Persistent history in all forms was added. Lots of improvements were made and bugs were fixed.

[Aug 23, 2012] Think Python by Allen B. Downey

Notable quotes:
"... "Think Python" is available online ([...]) which means you can decide if you like it first. ..."
"... most importantly, it is NOT a "Learn Python in X days" type book. Those have their place, but this book targets those who actually are/want to be developers. Hence the subtitle "How to Think Like a Computer Scientist." ..."
"... Each chapter ends with debugging tips ..."
"... I think this makes for a great first Python book. To be followed by one that teaches the Python libraries. It teaches you how to think in Python. And how to be a developer; not just a coder. ..."

Paperback: 300 pages
Publisher: O'Reilly Media; 1 edition (August 23, 2012)
Language: English
ISBN-10: 144933072X
ISBN-13: 978-1449330729
Product Dimensions: 7 x 0.9 x 9.2 inches

Jeanne Boyarsky on September 8, 2012

development vs coding

"Think Python" is available online ([...]) which means you can decide if you like it first. Personally, I wanted to write in my copy making the paper copy a great thing. Inexpensive too for a computer book. It's one of those great books I know I'll refer to again. Can't imagine why you'd buy the Kindle version though.

The book is targeted at those learning Python. It's appropriate whether you are new to programming or coming from another language. And most importantly, it is NOT a "Learn Python in X days" type book. Those have their place, but this book targets those who actually are/want to be developers. Hence the subtitle "How to Think Like a Computer Scientist."

Each chapter ends with debugging tips, a glossary of terms and numerous exercises for practice. Common idioms are covered in addition to syntax, techniques and algorithms. Recursion is presented in a not scary, approachable way.

The author uses the term "state diagram" to refer to the state of variables in an object. I've never seen this usage before (being more used to the UML state diagram) and look forward to asking the author about it in his book promotion next month.

I think this makes for a great first Python book. To be followed by one that teaches the Python libraries. It teaches you how to think in Python. And how to be a developer; not just a coder.

Disclosure: I received a copy of this book from the publisher in exchange for writing this review.

[Oct 06, 2011] Text Processing in Python (a book)

A couple of you make donations each month (out of about a thousand of you reading the text each week). Tragedy of the commons and all that... but if some more of you would donate a few bucks, that would be great support of the author.

In a community spirit (and with permission of my publisher), I am making my book available to the Python community. Minor corrections can be made to later printings, and at the least errata noted on this website. Email me at <> .

A few caveats:

(1) This stuff is copyrighted by AW (except the code samples which are released to the public domain). Feel free to use this material personally; but no permission is given for further distribution beyond your personal use.

(2) The book is provided in "smart ASCII" format. This is converted to print (and maybe to fancier electronic formats) by automated scripts (txt->LaTeX->PDF for the printed version).

As a highly sophisticated "digital rights management" system, those scripts are not themselves made readily available. :-)

glossary.txt GLOSSARY TERMS


[Apr 04, 2011] Scripting the Linux desktop, Part 1: Basics, by Paul Ferrill

Jan 18, 2011 | developerWorks

Developing applications for the Linux desktop typically requires some type of graphical user interface (GUI) framework to build on. Options include GTK+ for the GNOME desktop and Qt for the K Desktop Environment (KDE). Both platforms offer everything a developer needs to build a GUI application, including libraries and layout tools to create the windows users see. This article shows you how to build desktop productivity applications based on the screenlets widget toolkit (see Resources for a link).

A number of existing applications would fit in the desktop productivity category, including GNOME Do and Tomboy. These applications typically allow users to interact with them directly from the desktop through either a special key combination or by dragging and dropping from another application such as Mozilla Firefox. Tomboy functions as a desktop note-taking tool that supports dropping text from other windows.

Getting started with screenlets

You need to install a few things to get started developing screenlets. First, install the screenlets package using either the Ubuntu Software Center or the command line. In the Ubuntu Software Center, type screenlets in the Search box. You should see two options for the main package and a separate installation for the documentation.

Python and Ubuntu

You program screenlets using Python. The basic installation of Ubuntu 10.04 has Python version 2.6 installed, as many utilities depend on it. You may need additional libraries depending on your application's requirements. For the purpose of this article, I installed and tested everything on Ubuntu version 10.04.

Next, download the test screenlet's source from the site. The test screenlet resides in the src/share/screenlets/Test folder and uses Cairo and GTK, which you also need to install. The entire source code for the test program is in a single file; open it in your favorite editor to see the basic structure of a screenlet.

Python is highly object oriented and as such uses the class keyword to define an object. In this example, the class is named TestScreenlet and has a number of methods defined. Note the following code at line 42:

def __init__(self, **keyword_args):
Python uses the leading and trailing double underscore (__) notation to identify system functions with predefined behaviors. In this case, the __init__ function is for all intents and purposes the constructor for the class and contains any number of initialization steps to be executed on the creation of a new instance of the object. By convention, the first argument of every class method is a reference to the current instance of the class and is named self. This behavior makes it easy to use self to reference methods and properties of the instance it is in:
self.theme_name = "default"
The screenlets framework defines several naming conventions and standards, as outlined on the project's developer page (see Resources for a link). There's a link to the source code for the screenlets package along with the application programming interface (API) documentation. Looking at the code also gives you insight into what each function does with the calling arguments and what it returns.
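The __init__/self machinery is easy to try outside the screenlets framework. Here is a minimal sketch (the Widget class and its attributes are invented for illustration, echoing the width/height arguments the screenlet constructor takes):

```python
class Widget(object):
    def __init__(self, width=180, height=50):
        # __init__ runs once for each new instance; 'self' is that instance.
        self.width = width
        self.height = height
        self.theme_name = "default"

    def area(self):
        # Methods reach per-instance state through 'self'.
        return self.width * self.height

w = Widget(width=100)
print(w.theme_name, w.area())   # default 5000
```

Each instance carries its own width, height, and theme_name, which is exactly how a screenlet keeps per-widget settings separate when several copies run at once.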

Writing a simple screenlet

The basic components of a screenlet include an icon file, the source code file, and a themes folder. The themes folder contains additional folders for different themes. A sample template with the required files and folders is available to help you get started.

For this first example, use the template provided to create a basic "Hello World" application. The code for this basic application is shown in Listing 1.

Listing 1. Python code for the Hello World screenlet
#!/usr/bin/env python

import screenlets

class HelloWorldScreenlet(screenlets.Screenlet):
    __name__ = 'HelloWorld'
    __version__ = '0.1'
    __author__ = 'John Doe'
    __desc__ = 'Simple Hello World Screenlet'
    def __init__(self, **kwargs):
        # Customize the width and height.
        screenlets.Screenlet.__init__(self, width=180, height=50, **kwargs)
    def on_draw(self, ctx):
        # Change the color to white and fill the screenlet.
        ctx.set_source_rgb(255, 255, 255)
        self.draw_rectangle(ctx, 0, 0, self.width, self.height)

        # Change the color to black and write the message.
        ctx.set_source_rgb(0, 0, 0)
        text = 'Hello World!'
        self.draw_text(ctx, text, 10, 10, "Sans 9" , 20, self.width)

if __name__ == "__main__":
    import screenlets.session
    screenlets.session.create_session(HelloWorldScreenlet)

Each application must import the screenlets framework and create a new session. There are a few other minimal requirements, including any initialization steps along with a basic draw function to present the widget on screen. The example has an __init__ method that initializes the object. In this case, you see a single line with a call to the screenlet's __init__ method, which sets the initial width and height of the window to be created for this application.

The only other function you need for this application is the on_draw method. This routine sets the background color of the box to white and draws a rectangle with the dimensions defined earlier. It sets the text color to black and the source text to "Hello World!" and then draws the text. ...

Reusing code in a more complex screenlet

One nice thing about writing screenlets is the ability to reuse code from other applications. Code reuse opens a world of possibilities with the wide range of open source projects based on the Python language. Every screenlet has the same basic structure but with more methods defined to handle different behaviors. Listing 2 shows a sample application named TimeTrackerScreenlet.

Listing 2. Python code for the Time Tracker screenlet

#!/usr/bin/env python

import screenlets
import cairo
import datetime

class TimeTrackerScreenlet(screenlets.Screenlet):
	__name__ = 'TimeTrackerScreenlet'
	__version__ = '0.1'
	__author__ = 'John Doe'
	__desc__ = 'A basic time tracker screenlet.'
	theme_dir = 'themes/default'
	image = 'start.png'

	def __init__(self, **keyword_args):
		screenlets.Screenlet.__init__(self, width=250, height=50, **keyword_args)
		self.y = 25
		self.theme_name = 'default'
		self.on = False
		self.started = None

	def on_draw(self, ctx):
		self.draw_scaled_image(ctx, 0, 0, self.theme_dir + '/' + 
		self.image, self.width, self.height)
	def on_mouse_down(self, event):
		if self.on:
			# Second click: show the elapsed time and reset.
			if self.started:
				length = - self.started
				screenlets.show_message(None, '%s seconds' %
					length.seconds, 'Time')
				self.started = None
			self.image = 'start.png'
			self.on = False
		else:
			# First click: record the start time and swap icons.
			self.started =
			self.image = 'stop.png'
			self.on = True

	def on_draw_shape(self, ctx):
		ctx.rectangle(0, 0, self.width, self.height)

if __name__ == "__main__":
	import screenlets.session
	screenlets.session.create_session(TimeTrackerScreenlet)

This example introduces a few more concepts that you need to understand before you start building anything useful. All screenlet applications have the ability to respond to specific user actions or events such as mouse clicks or drag-and-drop operations. In this example, the mouse down event is used as a trigger to change the state of your icon. When the screenlet runs, the start.png image is displayed. Clicking the image changes it to stop.png and records the time started in self.started. Clicking the stop image changes the image back to start.png and displays the amount of time elapsed since the first start image was clicked.

Responding to events is another key capability that makes it possible to build any number of different applications. Although this example only uses the mouse_down event, you can use the same approach for other events generated either by the screenlets framework or by a system event such as a timer. The second concept introduced here is persistent state. Because your application is running continuously, waiting for an event to trigger some action, it is able to keep track of items in memory, such as the time the start image was clicked. You could also save information to disk for later retrieval, if necessary.

Automating tasks with screenlets

Now that you have the general idea behind developing screenlets, let's put it all together. Most users these days use a Really Simple Syndication (RSS) reader to read blogs and news feeds. For this last example, you're going to build a configurable screenlet that monitors specific feeds for keywords and displays any hits in a text box. The results will be clickable links to open the post in your default Web browser. Listing 3 shows the source code for the RSS Search screenlet.

Listing 3. Python code for the RSS Search screenlet
#!/usr/bin/env python

from screenlets.options import StringOption, IntOption, ListOption
import xml.dom.minidom
import webbrowser
import screenlets
import urllib2
import gobject
import pango
import cairo

class RSSSearchScreenlet(screenlets.Screenlet):
    __name__ = 'RSSSearch'
    __version__ = '0.1'
    __author__ = 'John Doe'
    __desc__ = 'An RSS search screenlet.'
    topic = 'Windows Phone 7'
    feeds = ['']
    interval = 10
    __items = []
    __mousesel = 0
    __selected = None
    def __init__(self, **kwargs):
        # Customize the width and height.
        screenlets.Screenlet.__init__(self, width=250, height=300, **kwargs)
        self.y = 25
    def on_init(self):
        # Add options.
        self.add_options_group('Search Options',
                               'RSS feeds to search and topic to search for.')
        self.add_option(StringOption('Search Options', 'topic',
                                     self.topic, 'Topic',
                                     'Topic to search feeds for.'))
        self.add_option(ListOption('Search Options', 'feeds',
                                   self.feeds, 'RSS Feeds',
                                   'A list of feeds to search for a topic.'))
        self.add_option(IntOption('Search Options', 'interval',
                                  self.interval, 'Update Interval',
                                  'How frequently to update (in seconds)'))

    def update(self):
        """Search selected feeds and update results."""
        self.__items = []

        # Go through each feed.
        for feed_url in self.feeds:
            # Load the raw feed and find all item elements.
            raw = urllib2.urlopen(feed_url).read()
            dom = xml.dom.minidom.parseString(raw)
            items = dom.getElementsByTagName('item')
            for item in items:
                # Find the title text and make sure it matches the topic.
                title = item.getElementsByTagName('title')[0]
                if self.topic.lower() not in title.lower(): continue
                # Shorten the title to 30 characters.
                if len(title) > 30: title = title[:27]+'...'
                # Find the link and save the item.
                link = item.getElementsByTagName('link')[0]
                self.__items.append((title, link))


        # Set to update again after self.interval.
        self.__timeout = gobject.timeout_add(self.interval * 1000, self.update)
    def on_draw(self, ctx):
        """Called every time the screenlet is drawn to the screen."""
        # Draw the background (a gradient).
        gradient = cairo.LinearGradient(0, self.height * 2, 0, 0)
        gradient.add_color_stop_rgba(1, 1, 1, 1, 1)
        gradient.add_color_stop_rgba(0.7, 1, 1, 1, 0.75)
        self.draw_rectangle_advanced (ctx, 0, 0, self.width - 20,
                                      self.height - 20,
                                      rounded_angles=(5, 5, 5, 5),
                                      fill=True, border_size=1,
                                      border_color=(0, 0, 0, 0.25),
                                      shadow_color=(0, 0, 0, 0.25))
        # Make sure we have a pango layout initialized and updated.
        if self.p_layout is None:
            self.p_layout = ctx.create_layout()
        # Configure fonts.
        p_fdesc = pango.FontDescription()
        p_fdesc.set_family("Free Sans")
        p_fdesc.set_size(10 * pango.SCALE)

        # Display our text.
        pos = [20, 20]
        ctx.set_source_rgb(0, 0, 0)
        x = 0
        self.__selected = None
        for item in self.__items:
            # Find if the current item is under the mouse.
            if self.__mousesel == x and self.mouse_is_over:
                ctx.set_source_rgb(0, 0, 0.5)
                self.__selected = item[1]
                ctx.set_source_rgb(0, 0, 0)
            self.p_layout.set_markup('%s' % item[0])
            pos[1] += 20
            x += 1

    def on_draw_shape(self, ctx):
        ctx.rectangle(0, 0, self.width, self.height)
    def on_mouse_move(self, event):
        """Called whenever the mouse moves over the screenlet."""
        x = event.x / self.scale
        y = event.y / self.scale
        self.__mousesel = int((y - 10) / 20) - 1
    def on_mouse_down(self, event):
        """Called when the mouse is clicked."""
        if self.__selected and self.mouse_is_over:

if __name__ == "__main__":
    import screenlets.session
    screenlets.session.create_session(RSSSearchScreenlet)

Building on the concepts of the first two examples, this screenlet uses a number of new concepts, including the config page. In the on_init routine, three options are added for the user to specify: a list of RSS feeds to track, a topic of interest to search for, and an update interval. The update routine then uses all of these when it runs.

Python is a great language for this type of task. The standard library includes everything you need to load the Extensible Markup Language (XML) from an RSS feed into a searchable list. In Python, this takes just three lines of code:

raw = urllib2.urlopen(feed_url).read()
dom = xml.dom.minidom.parseString(raw)
items = dom.getElementsByTagName('item')
The libraries used in these three lines are urllib2 and xml.dom.minidom. In the first line, the entire contents found at the feed_url address are read into the string raw. Next, because you know that this string contains XML, you use the xml.dom.minidom.parseString method to create a document object made up of node objects.

Finally, you create a list of element objects corresponding to the individual XML elements named item. You can then iterate over this list to search for your target topic. Python has a very elegant way of iterating over a list of items using the for keyword, as in this code snippet:

for item in items:
    # Find the title and make sure it matches the topic.
    title = item.getElementsByTagName('title')[0]
    if self.topic.lower() not in title.lower(): continue

Each item matching your criteria is added to the currently displayed list, which is associated with this instance of the screenlet. Using this approach makes it possible to have multiple instances of the same screenlet running, each configured to search for different topics. The final part of the update function redraws the text with the updated list and fires off a new update timer based on the interval on the config page. By default, the timer fires every 10 seconds, although you could change that to anything you want. The timer mechanism comes from the gobject library, which is a part of the GTK framework.

This application expands the on_draw method quite heavily to accommodate your new functionality. Both the Cairo and Pango libraries make it possible to create some of the effects used in the text window. Using a gradient gives the background of the widget a nice look along with rounded angles and semi-transparency. Using Pango for layout adds a number of functions for saving and restoring the current context easily. It also provides a way to generate scalable fonts based on the current size of the screenlet.

The trickiest part of the on_draw method is handling the case when a user hovers over an item in the list. Using the for keyword, you iterate over the items in the screenlet to see whether the user is hovering over that particular item. If so, you set the selected property and change the color to provide visual feedback. You also use a bit of markup to set the link property to bold, which is probably not the most elegant or efficient way to deal with the problem, but it works. When a user clicks one of the links in the box, a Web browser is launched with the target URL. You can see this functionality in the on_mouse_down function. Python and its libraries make it possible to launch the default Web browser to display the desired page with a single line of code. Figure 2 shows an example of this screenlet.
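The hover detection in on_mouse_move is plain integer math mapping a pointer y coordinate to a row index. A standalone sketch of that mapping (the helper name is invented; the constants mirror the listing's 10-pixel offset and 20-pixel line height):

```python
def row_under_mouse(y):
    """Map a (scaled) pointer y coordinate to a list-row index."""
    # Mirrors the listing: int((y - 10) / 20) - 1
    return int((y - 10) / 20) - 1

for y in (40, 60, 80):
    print(y, '->', row_under_mouse(y))   # rows 0, 1 and 2
```

Each 20-pixel band of the widget maps to one list row, which is why on_draw advances its drawing position by 20 pixels per item.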

[Jan 10, 2011] Programming Python by Mark Lutz

This is a typical Mark Lutz book: very pedantic, not very insightful... more of a reference than a textbook...
Notable quotes:
"... The 2010 publication date is a paradox, because this book only covers the new Python v3, which is a major split from Python 2. ..."
"... So again, this book is a poor fit. No matter how you slice it. (rim shot) ..."

Paperback: 1632 pages
Publisher: O'Reilly Media; Fourth Edition edition (January 10, 2011)
Language: English
ISBN-10: 0596158106
ISBN-13: 978-0596158101
Product Dimensions: 7 x 3 x 9.2 inches

All the books by this guy are written for marketing, selling, not for teaching. If you look at his website you can easily see that he is a good marketer.

Language is very bad and obscure. A lot of useless jargon. A lot of non-sense repetitions. A lot of non-sense examples. I am 100 percent sure that these stellar reviews here are not written by readers. I am very sure that these reviews are either written by the author, or by the marketing department!

Before buying this book, just read one chapter or section and you will understand what I mean.

My alternative suggestion is: Read the official tutorial on the python site. It is much better and sufficient for most purposes. And it is free. If you need a reference book, well, just type the function name or the subject into

If you are very willing to pay money, then buy a serious book, written by a well educated computer scientist, or whatever. Do not buy books written by simple-minded practitioners.

Antonio (New Zealand) - See all my reviews

3.0 out of 5 stars Reasonable source of information but some aspects I didn't like, September 21, 2011

I've programmed in Python before, but haven't used it for a couple of years. I was looking for a refresher, as well as some example applications.

Firstly, note that this book isn't an introduction to Python, nor is it a reference. The author makes that clear in the preface, instead referring you to the other titles he has written. Also, the book covers Python 3.x; perhaps those who are interested in earlier versions should get the previous edition of the book. On the other hand, while there are some changes between the two versions, reading the book wouldn't be a waste of time if you are interested in Python 2.x.

I liked this book in the sense that if I looked up a particular topic, I often found his discussion reasonable and could get some useful idiomatic python code to use.

On the other hand, the author intends this book as a tutorial. When I tried to read through it as a tutorial I just found it falling a bit flat. Also at around 1600 pages I doubt I would have the endurance to read through it from beginning to end.

I guess the main problem with the book is that if you are interested in one particular area where you might use Python, say web development or interfacing with databases, this book would probably have insufficient detail, and you would want a specialist book in that area. Also, I found the author's writing style somewhat verbose. Another issue is that those people who want to build a GUI, for instance, may not be interested in his choice of tool, Tkinter.

In conclusion, this book does have some useful information, but I didn't really like it. While it is hard to pin down the reasons for my dislike, I guess it is because he tries to cover so many topics that not all of them are covered that well. Also, it is not always clear who the audience is: beginners may find his explanations too terse, whereas those who have some familiarity with Python may wonder why he is pointing out the obvious. I recommend that people who are looking to develop a particular application in Python instead get a book more focused on their area of interest. Those who are new to Python should avoid this book also. Those who are looking for a Python 3.x refresher should find a book that's a little less weighty.

It's kind of annoying all those people who have received a free book from O'reilly giving it a five star review. Although they disclosed it, it now makes me suspicious as to how many other five star reviews are given by people who enjoy getting free books, and haven't disclosed the fact.

Luciano Ramalho "stand-up programmer" (Sao Paulo, SP Brazil) - See all my reviews

2.0 out of 5 stars Rambling, poorly edited, not accessible for beginners and too slow for everyone else, December 18, 2011

I own more than 100 O'Reilly books and dozens of Python books, since I've been a Python user, instructor and evangelist for 13 years now. The first edition of this book was the first book published about Python by O'Reilly, and it was often compared to Programming Perl at the time. The comparison was very bad for this book: it is much longer, yet shallower than the Camel Book; it tries hard to be funny, but Larry Wall's jokes are less frequent but more effective; it is poorly edited, while the Camel Book is a gem and a true classic.

The pace is excruciatingly slow for a seasoned programmer of any language, but in spite of long and repetitive explanations this book is not accessible to beginners because of excessive, needless jargon and attention to irrelevant details when first introducing language features, making the narrative hard to digest.

It is accurate and up to date, and for this reason I give it 2 stars instead of one. But anyone looking for a Python book will be better served looking elsewhere. From O'Reilly, Alex Martelli's Python in a Nutshell is the best there is to really understand how the language works and how it should be used, even if it is outdated. Python Essential Reference by David Beazley is excellent too, and the 4th edition is very up to date. The Dive into Python books (Python 2 and Python 3 versions exist) are also excellent, and free as in speech. The Quick Python Book by Manning is also good. In fact, every other Python book that I know is a better buy than this one, which probably sells mainly due to the O'Reilly brand and because it was the first. BTW, Martelli, on p. 12 of Python in a Nutshell, 2e, refers readers to nine other books by O'Reilly and other publishers, including two others by Mark Lutz. This is one is not among the recommendations. I think I know why.

DONALD R HUMPHREYS - See all my reviews

2.0 out of 5 stars Wordy, a bit pedantic and less useful than I had hoped, February 23, 2012

I agree with Antonio's review when he said, "I guess the main problem with the book is that if you are interested in one particular area to use Python, say web development or interfacing with databases, this book would probably have insufficient detail, and you would want a specialist book in that area. Also, I found the author's writing style somewhat verbose. Another issue is that those people who want to build a GUI, for instance, may not be interested in his choice of tool, Tkinter."

From my perspective, this is another book that is way too wordy and one that seems to be an example of why programmers should probably not write texts that are meant to be tutorials.

My favorite author is Larry Ullman, and after reading several of his books (about PHP/MySQL), I am finding that there is a lack of well-written books about Python in general: they don't meet the standard of a text that helps you learn a language and then put it to use with very little fuss or detours into arcane matters.

As the other reviewer noted, why does the author place an emphasis on teaching Tkinter, which in my view is dated? It seems to be because, as an author who is true to the 'Python way,' he defaults to teaching things that are core to the standard library/distribution of the language. I would think it would make sense to spend more time, or even equal time, explaining a visual tool such as PyQt that is friendlier, more state of the art, and allows for greater productivity.

Also, after investing in this book and a few other Python books because my interests include GUI programming in general and with Python, I'm learning that these authors are not doing a good job of explaining the pros and cons of what's involved in distributing Python GUI programs. Evidently, according to many forum entries, attempting to create and then distribute stand-alone Python GUI-based programs is a big deal compared with other options.

I still think the Python Language has the potential to be helpful to me - perhaps in console mode - but the Python books that I've read so far - and I chose authoritative sources - seem to be directed less to practical-minded users and more towards those who have the time for delving into things without asking "what is this useful for."

I have learned that for GUI program development, I will probably be better off using a VS product such as C# for GUI's, Perl for text file processing/editing and some CGI work and PHP for web site applications.

Paul A. Caskey on March 31, 2014

This book weighs FIVE POUNDS.

This is, by far, the biggest O'Reilly book I have ever seen. Maybe there is some Java book that matches it; I don't know. This one weighs 4 lbs 14 oz, and is almost 3 inches thick. Here is what you should do, if you buy this book:

1. Get a hacksaw and cut through the binding at page 355. Now you have a 3/4" thick book, from the front, containing a deep "introduction" to Python. This nice little rambling tutorial will be too confusing for a beginner, incomplete enough to be worthless as a reference, but very good if you are a PhD Computer Scientist interested in theoretical Object Oriented design, Python Internals, and a particularly confusing dive into python data structures. And parsing Windows directory trees. Read this little book once, and then chuck it into your nearest recycling bin.

2. Make your next hacksaw cut through the binding at page 768. This, oddly enough, produces another 3/4" thick book. Seal the binding with electrical tape. Label this book "Python/Tk GUI Programming" and stick it on your bookshelf to collect dust. Reach for it some Sunday when you are feeling nostalgic for the days when anyone cared about raw Windows or Linux GUI interfaces, instead of web interfaces.

3. What you have left is a hefty 830-page (!) O'Reilly book on Programming Python. This is the second half of the original book. This will now be on par with the other O'Reilly standards on Java or Perl already on your bookshelf -- measured by pure dead tree weight. This trimmed-down volume is a nice tome on Python client/server programming, Internet protocols, threads, textual data parsing theory and examples, database connections, and still some more Tk GUI stuff (the author can't seem to resist).

The 2010 publication date is a paradox, because this book only covers the new Python v3, which is a major split from Python 2. But every desktop and server in my work environment has Python 2.6 or 2.7 installed, so that's what I'm using. As a professional needing to come up to speed on Python, I need a clean examination of both Python 2 and 3. Certainly there is room for that in a 1600-page book, right? Apparently not. Plus, as a V3 reference, there are gaps in this book because it was published before Python 3 was fully baked.

So again, this book is a poor fit. No matter how you slice it. (rim shot)

Donate this book to a library or school, or sell it at a used book store. Whatever you do, don't pay to ship this beast back to Amazon. The shipping cost will kill you. Get ready for jaw drops from the guys at your local monthly programming group. If nothing else, this book is good --- for some laughs.

[Dec 25, 2010] Linux Developers choose Python as Best Programming Language and ...

Such polls mainly reflect what industry is using, not so much the quality of the language. In another poll the best Linux distribution is Ubuntu, which is probably the most primitive among the major distributions available.
According to Linux Journal readers, Python is both the best programming language and the best scripting language out there. This year, more than 12,000 developers weighed in on what tools are helping them work and play as part of the Linux Journal's 2010 Readers' Choice Award - and it came as no surprise to those of us at ActiveState that Python came out on top as both the Best Scripting Language (beating out PHP, bash, Perl and Ruby) - and for the second straight year, Python also won as the Best Programming Language, once again edging out C++, Java, C and Perl for the honors.

At ActiveState, we continue to see a steady stream of ActivePython Community Edition downloads, more enterprise deployments of ActivePython Business Edition, and a steady increase in the number of enterprise-ready Python packages in our PyPM Index that are being used by our customers over a wide range of verticals including high-tech, financial services, healthcare, and aerospace companies. Python has matured into an enterprise-class programming language that continues to nurture its scripting-world roots. We're happy to see Python get the recognition that it so justly deserves!

[Dec 25, 2010] Russ's Notes on Python

Those notes span six years. If in six years a person did not switch, he/she never will...
"I'm not entirely sure why Python has never caught on with me as a language to use on a regular basis. Certainly, one of the things that always bugs me is the lack of good integrated documentation support like POD (although apparently reStructured Text is slowly becoming that), but that's not the whole story. I suspect a lot is just that I'm very familiar with Perl and with its standard and supporting library, and it takes me longer to do anything in Python. But the language just feels slightly more awkward, and I never have gotten comfortable with the way that it uses exceptions for all error reporting. "

On the subject of C program indentation: In My Egotistical Opinion, most people's C programs should be indented six feet downward and covered with dirt.

- Blair P. Houghton


Around the beginning of April, 2001, I finally decided to do something about the feeling I'd had for some time that I'd like to learn a few new programming languages. I started by looking at Python. These are my notes on the process.

Non-religious comments are welcome. Please don't send me advocacy.

I chose Python as a language to try (over a few other choices like Objective Caml or Common Lisp) mostly because it's less of a departure from the languages that I'm already comfortable with. In particular, it's really quite a bit like Perl. I picked this time to start since I had an idea for an initial program to try writing in Python, a program that I probably would normally write in Perl. I needed a program to help me manage releases of the various software packages that I maintain, something to put a new version on an ftp site, update a series of web pages, generate a change log in a nice form for the web, and a few other similar things.

I started by reading the Python tutorial, pretty much straight through. I did keep an interactive Python process running while I did, but I didn't type in many of the examples; the results were explained quite well in the tutorial, and I generally don't need to do things myself to understand them. The tutorial is exceptionally well-written; after finishing reading it straight through (which took me an evening) and skimming the library reference, I felt I had a pretty good grasp on the language.

Things that immediately jumped out at me that I liked a lot:

There were a few things that I immediately didn't like, after having just read the tutorial:

There were also a couple of things that I immediately missed from other languages:


Over the next few days, I started reading the language manual straight through, as well as poking around more parts of the language reference and writing some code. I started with a function to find the RCS keywords and version string in a file and from that extract the version and the last modified date (things that would need to be modified on the web page for that program). I really had a lot of fun with this.

The Python standard documentation is excellent. I mean truly superb. I can't really compare it to Perl (the other language that has truly excellent standard documentation), since I know Perl so well that I can't evaluate its documentation from the perspective of the beginner, but Python's tutorial eased me into the language beautifully and the language manual is well-written, understandable, and enjoyable to read.

The library reference is well-organized and internally consistent, and I never had much trouble finding things. And they're available in info format as well as web pages, which is a major advantage for me; info is easier for me to read straight through, and web pages are easier for me to browse.

The language proved rather fun to write. Regex handling is a bit clunky since it's not a language built-in, but I was expecting that and I don't really mind it. The syntax is fun, and XEmacs python-mode does an excellent job handling highlighting and indentation. I was able to put together that little function and wrap a test around it fairly quickly (in a couple of hours while on the train, taking a lot of breaks to absorb the language reference manual or poke around in the library reference for the best way of doing something).

That's where I am at the moment. More as I find time to do more....


I've finished my first Python program, after having gotten distracted by a variety of other things. It wasn't the program I originally started writing, since the problem of releasing a new version of a software package ended up being more complicated than I expected. (In particular, generating the documentation looks like it's going to be tricky.) I did get the code to extract version numbers and dates written, though, and then for another project (automatically generating man pages from scripts with embedded POD when installing them into our site-wide software installation) I needed that same code. So I wrote that program in Python and tested it and it works fine.

The lack of a way to safely execute a program without going through the shell is really bothering me. It was also the source of one of the three bugs in the first pass at my first program; I passed a multiword string to pod2man and forgot to protect it from the shell. What I'm currently doing is still fragile in the presence of single quotes in the string, which is another reason why I much prefer Perl's safe system() function. I feel like I must be missing something; something that fundamental couldn't possibly fail to be present in a scripting language.
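For the record, later Python versions grew exactly the facility the author was missing: the subprocess module runs a program with an argument list, bypassing the shell entirely, so embedded quotes in an argument are harmless, much like Perl's list-form system(). A minimal sketch (the echoed string is just an illustration):

```python
import subprocess

# Arguments are passed as a list, so the shell is never involved and
# the multiword string needs no quoting or escaping at all.
result = subprocess.run(
    ["echo", "a multiword string with 'quotes'"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # a multiword string with 'quotes'
```

With check=True, a non-zero exit status raises CalledProcessError instead of being silently ignored.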

A second bug in that program highlights another significant difference from Perl that I'm finding a little strange to deal with, namely the lack of equivalence between numbers and strings. My program had a dictionary of section titles, keyed by the section numbers, and I was using the plain number as the dictionary key. When I tried to look up a title in the dictionary, however, I used as the key a string taken from the end of the output filename, and 1 didn't match "1". It took me a while to track that down. (Admittedly, the problem was really laziness on my part; given the existence of such section numbers as "1m" and "3f", I should have used strings as the dictionary keys in the first place.)
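The mismatch described above is easy to reproduce: integer and string keys are distinct objects in a Python dictionary, so a lookup must convert explicitly. A small sketch using the man-page section numbers from the anecdote:

```python
# Section titles keyed by number -- integer keys, as in the original program.
titles = {1: "User Commands", 3: "Library Functions"}

key = "1"  # a key taken from the end of a filename arrives as a string
print(key in titles)        # False: the string "1" never equals the int 1
print(int(key) in titles)   # True: convert explicitly before the lookup
```

As the author notes, given sections like "1m" and "3f", string keys throughout would have been the better design anyway.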

The third bug, for the record, was attempting to use a Perl-like construct to read a file (while line = file.readline():). I see that Python 2.1 has the solution I really want in the form of xreadlines, but in the meantime that was easy enough to recode into a test and a break in the middle of the loop.
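For what it's worth, the idiom that eventually won out is simpler than either workaround: file objects became iterable over their own lines, which replaced both the readline() loop and xreadlines. A self-contained sketch:

```python
import os
import tempfile

# Write a small two-line file so the example stands on its own.
tmp = tempfile.NamedTemporaryFile("w", delete=False)
tmp.write("first\nsecond\n")
tmp.close()

# A file object iterates over its lines directly -- no readline() test,
# no mid-loop break required.
with open(tmp.name) as f:
    for line in f:
        print(line.rstrip())

os.remove(tmp.name)
```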

The lack of a standard documentation format like Perl's POD is bothering me and I'm not sure what to do about it. I want to put the documentation (preferably in POD, but I'm willing to learn something else that's reasonably simple) into the same file as the script so that it gets updated when the script does and doesn't get lost in the directory. This apparently is just an unsolved problem, unless I'm missing some great link to an embedded documentation technique (and I quite possibly am). Current best idea is to put a long triple-quoted string at the end of my script containing POD. Ugh.

I took a brief look at the standard getopt library (although I didn't end up using it), and was a little disappointed; one of the features that I really liked about Perl's Getopt::Long was its ability to just stuff either the arguments to options or boolean values into variables directly, without needing something like the long case statement that's a standard feature of main() in many C programs. Looks like Python's getopt is much closer to C's, and requires something quite a bit like that case statement.
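Later Python releases addressed this complaint: the argparse module (and optparse before it) stuffs option arguments and boolean values straight into attributes, much as Getopt::Long stuffs them into variables. A minimal sketch with two hypothetical options:

```python
import argparse

parser = argparse.ArgumentParser(description="Getopt::Long-style option handling")
parser.add_argument("--output", help="output file name")      # option that takes a value
parser.add_argument("--verbose", action="store_true")         # boolean flag

# Parse a sample argument vector instead of sys.argv for the demo.
args = parser.parse_args(["--output", "page.html", "--verbose"])
print(args.output)   # page.html -- the value lands in the attribute directly
print(args.verbose)  # True -- no case statement required
```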

Oh, and while the documentation is still excellent, I've started noticing a gap in it when it comes to the core language (not the standard library; the documentation there is great). The language reference manual is an excellent reference manual, complete with clear syntax descriptions, but is a little much if one just wants to figure out how to do something. I wasn't sure of the syntax of the while statement, and the language reference was a little heavier than was helpful. I find myself returning to the tutorial to find things like this, and it has about the right level of explanation, but the problem with that is that the tutorial is laid out as a tutorial and isn't as easy to use as a reference. (For example, the while statement isn't listed in the table of contents, because it was introduced in an earlier section with a more general title.)

I need to get the info pages installed on my desktop machine so that I can look things up in the index easily; right now, I'm still using the documentation on the web.


I've unfortunately not had very much time to work on this, as one can tell from the date.

Aahz pointed out a way to execute a program without going through the shell, namely os.spawnv(). That works, although the documentation is extremely poor. (Even in Python 2.1, it refers me to the Visual C++ Runtime Library documentation for information on what spawnv does, which is of course absurd.) At least the magic constants that it needs are relatively intuitive. Unfortunately, spawnv doesn't search the user's PATH for a command, and there's nothing like spawnvp. Sigh.

There's really no excuse for this being quite this hard. Executing a command without going through the shell is an extremely basic function that should be easily available in any scripting language without jumping through these sorts of hoops.

But this at least gave me a bit of experience in writing some more Python (a function to search the PATH to find a command), and the syntax is still very nice and convenient. I'm bouncing all over the tutorial and library reference to remember how to do things, but usually my first guesses are right.
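That hand-rolled PATH search also became unnecessary in later Python versions: shutil.which (added in Python 3.3) does the lookup, and the subprocess module searches the PATH itself when given a bare command name. A sketch:

```python
import shutil

# shutil.which walks the directories in $PATH, as the shell would.
path = shutil.which("sh")
if path is None:
    print("sh not found on PATH")
else:
    print(path)  # e.g. /bin/sh; the exact location varies by system
```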

I see that Debian doesn't have the info pages, only the HTML documentation. That's rather annoying, but workable. I now have the HTML documentation for Python 2.1 on local disk on my laptop.


I've now written a couple of real Python programs (in addition to the simple little thing to generate man pages by running pod2man). You can find them (cvs2xhtml and cl2xhtml) with my web tools. They're not particularly pretty, but they work, and I now have some more experience writing simple procedural Python code. I still haven't done anything interesting with objects. Comments on the code are welcome. Don't expect too much.

There are a few other documentation methods for Python, but they seem primarily aimed at documenting modules and objects rather than documenting scripts. Pydoc in particular looks like it would be nice for API documentation but doesn't really do anything for end-user program documentation. Accordingly, I've given up for the time being on finding a more "native" approach and am just documenting my Python programs the way that I document most things, by writing embedded POD. I've yet to find a better documentation method; everything else seems to either be far too complicated and author-unfriendly to really write directly in (like DocBook) or can't generate Unix man pages, which I consider to be a requirement.

The Python documentation remains excellent, if scattered. I've sometimes spent a lot of time searching through the documentation to find the right module to do something, and questions of basic syntax are fairly hard to resolve (the tutorial is readable but not organized as a reference, and the language reference is too dense to provide a quick answer).


My first major Python application is complete and working (although I'm not yet using it as much as I want to be using it). That's Tasker, a web-based to-do list manager written as a Python CGI script that calls a Python module.

I've now dealt with the Python module building tools, which are quite nice (nicer in some ways than Perl's Makefile.PL system, with some more built-in functionality, although less mature in a few ways). Python's handling of the local module library is clearly less mature than Perl's, and Debian's Python packages don't handle locally installed modules nearly as well as they should, but overall it was a rather positive experience. Built-in support for generating RPMs is very interesting, since eventually I'd like to provide .deb and RPM packages for all my software.

I played with some OO design for this application and ended up being fairly happy with how Python handled things. I'm not very happy with my object layout, but that's my problem, not Python's. The object system definitely feels far smoother and more comfortable to me than Perl's, although I can still write OO code faster in Perl because I'm more familiar with it. There's none of the $self hash nonsense for instance variables, though, which is quite nice.

The CGI modules for Python, and in particular the cgitb module for displaying exceptions nicely in the browser while debugging CGI applications, are absolutely excellent. I was highly impressed, and other than some confusion about the best way to retrieve POST data that was resolved after reading the documentation more closely, I found those modules very easy to work with. The cgitb module is a beautiful, beautiful thing and by itself makes me want to use Python for all future CGI programming.

I still get caught all the time by the lack of interchangeability of strings and numbers, and I feel like I'm casting things all the time. I appreciate some of the benefits of stronger typing, but this one seems to get in my way more often than it helps.

I'm also still really annoyed at the lack of good documentation for the parts of the language that aren't considered part of the library. If I want documentation on how print works, I have only the tutorial and the detailed language standard, the former of which is not organized for reference and the latter of which is far too hard to understand. This is a gaping hole in the documentation that I really wish someone would fix. Thankfully, it only affects a small handful of things, like control flow constructs and the print statement, so I don't hit this very often, but whenever I do it's extremely frustrating.

I've given up on documentation for scripts and am just including a large POD section at the end of the script, since this seems to be the only option that will generate good man pages and good web pages. I'm not sure what to do about documentation for the module; there seem to be a variety of different proposals but nothing that I can really just use.

Oh, and one last point on documentation: the distutils documentation needs some work. Thankfully I found some really good additional documentation on the PyPI web site that explained a lot more about how to write a script.


Six years later, I still find Python an interesting language, but I never got sufficiently absorbed by it for it to be part of my standard toolkit.

I've subsequently gotten some additional experience with extending Python through incorporating an extension written by Thomas Kula into the remctl distribution. The C interface is relatively nice and more comfortable than Perl, particularly since it doesn't involve a pseudo-C that is run through a preprocessor. It's a bit more comfortable to read and write.

Python's installation facilities, on the other hand, are poor. The distutils equivalent of Perl's ExtUtils::MakeMaker is considerably worse, despite ExtUtils::MakeMaker being old and crufty and strange. (I haven't compared it with Module::Build.) The interface is vaguely similar, but I had to apply all sorts of hacks to get the Python extension to build properly inside a Debian packaging framework, and integrating it with a larger package requires doing Autoconf substitution on a ton of different files. It was somewhat easier to avoid embedding RPATH into the module, but I'd still much rather work with Perl's facilities.

Similarly, while the test suite code has some interesting features (I'm using the core unittest framework), it's clearly inferior to Perl's Test::More support library and TAP protocol. I'm, of course, a known fan of Perl's TAP testing protocol (I even wrote my own implementation in C), but that's because it's well-designed, full-featured, and very useful. The Python unittest framework, by comparison, is awkward to use, has significantly inferior reporting capabilities, makes it harder to understand what test failed and isolate the failure, and requires a lot of digging around to understand how it works. I do like the use of decorators to handle skipping tests, and there are some interesting OO ideas around test setup and teardown, but the whole thing is more awkward than it should be.
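The skip decorators mentioned above look roughly like this: a minimal, self-contained unittest sketch, run programmatically rather than via unittest.main() (the skip condition here is purely illustrative):

```python
import sys
import unittest

class MathTests(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

    # A decorator expresses the skip declaratively -- one of the
    # framework's nicer ideas, as noted above.
    @unittest.skipIf(sys.platform == "nosuchplatform", "not relevant here")
    def test_subtraction(self):
        self.assertEqual(3 - 1, 2)

# Load and run the suite, then report overall success.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MathTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```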

I'm not entirely sure why Python has never caught on with me as a language to use on a regular basis. Certainly, one of the things that always bugs me is the lack of good integrated documentation support like POD (although apparently reStructured Text is slowly becoming that), but that's not the whole story. I suspect a lot is just that I'm very familiar with Perl and with its standard and supporting library, and it takes me longer to do anything in Python. But the language just feels slightly more awkward, and I never have gotten comfortable with the way that it uses exceptions for all error reporting.

I may get lured back into it again at some point, though, since Python 3.0 seems to have some very interesting features and it remains popular with people who know lots of programming languages. I want to give it another serious look with a few more test projects at some point in the future.

[Oct 09, 2010] How to Think Like a (Python) Programmer by Allen B. Downey

free e-book

Version 0.9.2

[Jan 15, 2010] The Quick Python Book, Second Edition by Vern Ceder

Paperback: 400 pages
Publisher: Manning Publications; 2nd edition (January 15, 2010)
Language: English
ISBN-10: 193518220X
ISBN-13: 978-1935182207
Product Dimensions: 7.4 x 0.7 x 9.2 inches

Alexandros Gezerlis "Alex Gezerlis" (Seattle, WA)(REAL NAME) - See all my reviews

Probably the best book on Python 3 currently available, July 10, 2010

"The Quick Python Book, Second Edition" is Vernon Ceder's reworking of the well-received volume "The Quick Python Book" by Daryl Harms and Kenneth McDonald. Ceder has removed a number of specialized chapters on COM, C & C++ extensions, JPython, HTMLgen & Zope and, more important, he has brought the text completely up to date, covering Python 3.1.

Most Python texts out there describe Python 2.x, so this book's main competition is: a) Mark Summerfield's "Programming in Python 3: A complete introduction to the Python Language, Second Edition", and b) Mark Pilgrim's "Dive into Python 3", while two other major books have incorporated material on Python 3, namely c) James Payne's "Beginning Python: Using Python 2.6 and Python 3.1" and d) Mark Lutz's "Learning Python: Powerful Object-Oriented Programming, 4th Edition".

The Good: this book is nice and short. It assumes a certain level of competence/background, so it does not waste space introducing the language-independent basics of flow control, object orientation, exception handling, and so on. It is example-based, and unlike in Pilgrim's volume the first few examples are short and thus readable. Chapter 3 ("The Quick Python overview") can be used as a compact reference when you're done reading the book, and various tables throughout the book help it function as a reference. Unlike its competition, it doesn't spend chapter upon chapter on databases, networking, or web applications. Instead, such topics are covered in only one (short) chapter at the end of the book. Ceder offers useful advice on the interrelation between older and newer Python features, whether discussing how to be more idiomatic (e.g. in chapter 6 on the format method vs % formatting, and in chapter 14 when introducing the "with" statement) or how to migrate from Python 2 to Python 3 (he devotes chapter 22 to this topic). On the publisher's website you can find a list of errata as well as the complete source code for the book. There you will see a link to an "Author online" forum in which you can interact with Ceder; perhaps more important, everyone who buys a paper copy of the book may also download a free PDF version. It is to be hoped that other publishers will follow Manning's example.

The Bad: the author is very clear that the book is aimed at those with experience in another programming language. Even so, in a few cases the assumptions are Python-specific (and hence unwarranted): one example is in chapter 5, where he lets us know that if x is a list then y=x[:] makes a copy of x, though this does not really explain why we cannot simply say y=x to accomplish the same goal. Another example: in chapter 12 Ceder uses character ranges expressed with [], though these are introduced much later (in chapter 17). Similarly, chapter 3 is quite good if you've already come into contact with Python before (even fleetingly). If you haven't, it may be obfuscating (though you could always just skip it on a first read). On a different note, this book does not contain exercises, though Summerfield's, Payne's, and Lutz's volumes do (along with answers). As mentioned in the previous paragraph, Ceder does not include too much extraneous material, which in my opinion is definitely a plus. However, he says absolutely nothing about threading, while Summerfield has a chapter on the subject and Payne a section. Similarly, Ceder does not mention function annotations at all, while Summerfield and Lutz each have a section on them. Finally, Ceder keeps referring the reader to the Python documentation for more details, and this can get frustrating. On the other hand, I suppose it would have been impossible for the book to stay at its current 320 pages otherwise.
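The y=x[:] point the review raises is worth a two-line demonstration: plain assignment binds a second name to the same list object, while a full slice makes an independent copy.

```python
x = [1, 2, 3]
y = x          # y is another name for the very same list object
y.append(4)
print(x)       # [1, 2, 3, 4] -- x changed too, through the alias

x = [1, 2, 3]
z = x[:]       # a full slice copies the list
z.append(4)
print(x)       # [1, 2, 3] -- the original is untouched
```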

Ceder's writing is concise, but this does not imply that he covers only the bare minimum of material. To pick a relatively advanced topic as an example, Ceder spends 2 pages on metaclasses, Summerfield 4.5 pages, Pilgrim and Payne devote half a page each only in the context of the changes from Python 2 to 3, while Lutz, in keeping with the mammoth size of his book, spends more than 30 pages on the topic. This (arbitrarily chosen) example is in some ways indicative of the wider approaches taken by the various Python 3 book authors.

In a nutshell, the fact that this book is considerably shorter than its competitors does not mean that it is shallow. The compactness is due partly to the author's succinct style of writing (which is not opaque, however) and partly to the fact that it does not contain too much on database programming, web services, and so on. All in all, if you're looking for a solid book on Python 3 that you stand a reasonable chance of reading cover-to-cover, then this is the volume you should buy. Four and a half stars.

Alex Gezerlis

[Dec 25, 2009] Bioinformatics Programming Using Python Practical Programming for Biological Data by Mitchell L. Model

Notable quotes:
"... Compared to Perl, Python was quite slow to be adopted as the scripting language of choice in the field of bioinformatics, although it has been gaining momentum recently. If you read job descriptions for bioinformatics engineer or scientist positions a few years back, you barely saw Python mentioned, even as a "nice to have" optional skill. ..."
"... Moreover, it can actually serve as a good introductory book to Python regardless the main focus on bioinformatics examples. ..."

Paperback: 528 pages
Publisher: O'Reilly Media; 1st edition (December 25, 2009)
Language: English
ISBN-10: 059615450X
ISBN-13: 978-0596154509
Product Dimensions: 7 x 1 x 9.2 inches

C. Chin on February 15, 2010

Good introductory book for learning both bioinformatics and python

Compared to Perl, Python was quite slow to be adopted as the scripting language of choice in the field of bioinformatics, although it has been gaining momentum recently. If you read job descriptions for bioinformatics engineer or scientist positions a few years back, you barely saw Python mentioned, even as a "nice to have" optional skill. One of the reasons is probably the lack of good introductory-level bioinformatics books in Python, so there were, in general, fewer people thinking of Python as a good choice for bioinformatics. The book "Beginning Perl for Bioinformatics" from O'Reilly was published in 2001. Almost a decade later, we finally get the book "Bioinformatics Programming Using Python" from Mitchell Model to fill the gap.

When I first skimmed the book "Bioinformatics Programming Using Python", I got the impression that this book was more like "learning Python using bioinformatics as examples" and felt a little bit disappointed, as I was hoping for more advanced content. However, once I went through the book, reading the preface and everything else chapter by chapter, I understood the main target audience that the author had in mind, and I thought the author did a great job of fulfilling the main purpose.

In modern biological research, scientists can easily generate large amounts of data, where the Excel spreadsheets that most bench scientists use to process limited amounts of data are no longer an option. I personally believe that the new generation of biologists will have to learn how to process and manage large amounts of inhomogeneous data to make new discoveries out of it. This requires general computational skills beyond just knowing how to use the special-purpose applications that a software vendor can provide. The book gives a good introduction to practical computational skills using Python to process bioinformatics data. It is very well organized for a newbie who just wants to start processing raw data on their own and get into a process of learning-by-doing to become a Python programmer.

The book starts with an introduction to the primitive data types in Python and moves on to flow control and the collection data types, with emphasis on, not surprisingly, string processing and file parsing, two of the most common tasks in bioinformatics. Then the author introduces object-oriented programming in Python. I think a beginner will also like the code templates for different patterns of data-processing tasks in Chapter 4. They summarize the usual flow structure for common tasks very well.

After covering the basic concepts of programming with Python, the author focuses on other utilities that are very useful in day-to-day work for gathering, extracting, and processing data from different sources. For example, the author discusses how to explore and organize files with Python at the OS level, how to use regular expressions to extract data from complicated text files, XML processing, web programming for fetching online biological data and sharing data with a simple web server, and, of course, how to make Python interact with a database. Each of these topics could deserve a book of its own; the author does a good job of covering them all concisely. This will help readers see what can be done easily with Python and, if they want, learn any of these topics in more depth from other resources. The final touch of the book is structured graphics. This is a wise choice, since the destiny of most bioinformatics data is very likely to be a graph in a presentation or publication. Again, there are many other Python packages that can help scientists generate nice graphs, but the author focuses on one or two of them to show readers how to produce some typical graphs, from which readers may be able to go on and learn more on their own.

One thing I wish the author had also covered, at least at a beginner level, is the numerical and statistical side of bioinformatics computing with Python. For example, NumPy and SciPy are very useful for processing large amounts of data, generating statistics, and evaluating the significance of results, especially where the native Python objects are no longer efficient enough. The numerical computation aspect of bioinformatics is essentially missing from the book. The other thing that might be desirable in such a book is to show that Python is a great tool for prototyping bioinformatics algorithms. This is probably my own personal bias, but I do think it would be nice to show some basic bioinformatics algorithm implementations in Python. This would help readers understand a little more about some of the common algorithms used in the field and get a taste of slightly more advanced programming.

Overall, I will not hesitate to recommend this book to anyone who would like to start processing biological data on their own with Python. Moreover, it can actually serve as a good introductory book to Python in general, regardless of its focus on bioinformatics examples. The book covers most day-to-day basic bioinformatics tasks and shows that Python is a great tool for them.

Slightly more advanced topics, especially basic numerical and statistical computation, would also have helped the target audience; unfortunately, none of these topics is mentioned in the book. That said, even if you are an experienced Python programmer in bioinformatics, the book's focus on Python 3 and its many useful templates may serve well as a quick reference when you are looking for something you have no direct experience with.


What do Python 2.x programmers need to know about Python 3?

With the latest major Python release, creator Guido van Rossum saw the opportunity to tidy up his famous scripting language. What is different about Python 3.0? In this article, I offer some highlights for Python programmers who are thinking about making the switch to 3.x.

Read full article as PDF

[Jun 20, 2009] A Python Client/Server Tutorial by Phillip Watts

June 16, 2009

There can be many reasons why you might need a client/server application. For a simple example, purchasing for a small retail chain might need up-to-the-minute stock levels on a central server. The point-of-sale application in the stores would then need to post inventory transactions to the central server in real time.

This application can easily be coded in Python with performance levels of thousands of transactions per second on a desktop PC. Simple sample programs for the server and client sides are listed below, with discussions following.
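The article's own listings are not reproduced here, but the shape of such an application can be sketched with nothing beyond the standard library. The one-line "SKU,delta" protocol below is an invented stand-in, not the author's code:

```python
import socket
import socketserver
import threading

# Shared "central server" state: stock levels per SKU (invented example data).
inventory = {"widget": 100}
lock = threading.Lock()

class TxnHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One transaction per connection: a single "SKU,delta" line.
        line = self.rfile.readline().decode().strip()
        sku, delta = line.split(",")
        with lock:
            inventory[sku] = inventory.get(sku, 0) + int(delta)
            level = inventory[sku]
        self.wfile.write(str(level).encode())   # reply with the new level

def post_txn(host, port, sku, delta):
    """Client side: post one inventory transaction, return the new level."""
    with socket.create_connection((host, port)) as s:
        s.sendall("{},{}\n".format(sku, delta).encode())
        return int(s.recv(64).decode())

# Run the server in a background thread on an OS-assigned port.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), TxnHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
new_level = post_txn("127.0.0.1", port, "widget", -2)
server.shutdown()
print(new_level)   # 98
```

This is Python 3; the ThreadingTCPServer variant is what gives the "thousands of transactions per second" concurrency the article mentions, one handler thread per connection.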

[Jul 8, 2008] Python Call Graph 0.5.1 by Gerald Kaszuba

About: pycallgraph is a Python library that creates call graphs for Python programs.


Changes: The "pycg" command line tool was renamed to "pycallgraph" due to naming conflicts with other packages.

[Jun 23, 2008] Project details for cfv

cfv is a utility to both test and create .sfv (Simple File Verify), .csv, .crc, .md5(sfv style), md5sum, BSD md5, sha1sum, and .torrent checksum verification files. It also includes test-only support for .par and .par2 files. These files are commonly used to ensure the correct retrieval or storage of data.

Release focus: Major bugfixes

Help output is printed to stdout under non-error conditions. A mmap file descriptor leak in Python 2.4.2 was worked around. The different module layout of BitTorrent 5.x is supported. A "struct integer overflow masking is deprecated" warning was fixed. The --private_torrent flag was added. A bug was worked around in 64-bit Python version 2.5 and later which causes checksums of files larger than 4GB to be incorrectly calculated when using mmap.

[Jun 20, 2008] BitRock Download Web Stacks

BitRock Web Stacks provide you with the easiest way to install and run the LAMP platform in a variety of Linux distributions. BitRock Web Stacks are free to download and use under the terms of the Apache License 2.0. To learn more about our licensing policies, click here.

You can find up-to-date WAMP, LAMP and MAMP stacks at the BitNami open source website. In addition to those, you will find freely available application stacks for popular open source software such as Joomla!, Drupal, Mediawiki and Roller. Just like BitRock Web Stacks, they include everything you need to run the software and come packaged in a fast, easy to use installer.

BitRock Web Stacks contain several open source tools and libraries. Please be sure that you read and comply with all of the applicable licenses. If you are a MySQL Network subscriber (or would like to purchase a subscription) and want to use a version of LAMPStack that contains the MySQL Certified binaries, please send an email to

For further information, including supported platforms, component versions, documentation, and support, please visit our solutions section.

[Mar 12, 2008] Terminator - Multiple GNOME terminals in one window

Rewrite of screen in Python?

This is a project to produce an efficient way of filling a large area of screen space with terminals. This is done by splitting the window into a resizeable grid of terminals. As such, you can produce very flexible arrangements of terminals for different tasks.

Read me

Terminator 0.8.1
by Chris Jones <>

This is a little python script to give me lots of terminals in a single window, saving me valuable laptop screen space otherwise wasted on window decorations and not quite being able to fill the screen with terminals.

Right now it will open a single window with one terminal and it will (to some degree) mirror the settings of your default gnome-terminal profile in gconf. Eventually this will be extended and improved to offer profile selection per-terminal, configuration thereof and the ability to alter the number of terminals and save meta-profiles.

You can create more terminals by right clicking on one and choosing to split it vertically or horizontally. You can get rid of a terminal by right clicking on it and choosing Close. ctrl-shift-o and ctrl-shift-e will also effect the splitting.

ctrl-shift-n and ctrl-shift-p will shift focus to the next/previous terminal respectively; ctrl-shift-w will close the current terminal, and ctrl-shift-q the current window.

Ask questions at:
Please report all bugs to

It's quite shamelessly based on code from the vte widget package, and on the gedit terminal plugin (which was fantastically useful). That code is not mine and is copyright its original author. While it does not contain any specific licensing information, the VTE package appears to be licenced under LGPL v2.

the gedit terminal plugin is part of the gedit-plugins package, which is licenced under GPL v2 or later.

I am thus licensing Terminator as GPL v2 only.

Cristian Grada provided the icon under the same licence.

[Apr 3, 2007] Charming Python Python elegance and warts, Part 1

Generators as not-quite-sequences

Over several versions, Python has hugely enhanced its "laziness." For several versions, we have had generators defined with the yield statement in a function body. But along the way we also got the itertools module to combine and create various types of iterators. We have the iter() built-in function to turn many sequence-like objects into iterators. With Python 2.4, we got generator expressions, and with 2.5 we will get enhanced generators that make writing coroutines easier. Moreover, more and more Python objects have become iterators or iterator-like; for example, what used to require the .xreadlines() method (or, before that, the xreadlines module) is now simply the default behavior of open() when reading files.

Similarly, looping through a dict lazily used to require the .iterkeys() method; now it is just the default for key in dct behavior. Functions like xrange() are a bit "special" in being generator-like, but neither quite a real iterator (no .next() method), nor a realized list like range() returns. However, enumerate() returns a true generator, and usually does what you had earlier wanted xrange() for. And itertools.count() is another lazy call that does almost the same thing as xrange(), but as a full-fledged iterator.

Python is strongly moving towards lazily constructing sequence-like objects; and overall this is an excellent direction. Lazy pseudo-sequences both save memory space and speed up operations (especially when dealing with very large sequence-like "things").
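The laziness described above fits in a few lines of modern Python (Python 3 shown here; in 2.x you would use xrange and the print statement):

```python
import itertools

# A generator function: values are produced one at a time, on demand.
def squares():
    n = 0
    while True:            # conceptually infinite -- only possible lazily
        yield n * n
        n += 1

# islice() takes a finite slice of the infinite stream without realizing it.
first_five = list(itertools.islice(squares(), 5))

# A generator expression: the same laziness, written inline.
total = sum(x * x for x in range(10))

print(first_five, total)   # [0, 1, 4, 9, 16] 285
```

Neither the generator function nor the generator expression allocates the whole sequence; memory use stays constant no matter how far the stream runs.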

The problem is that Python still has a schizoaffective condition when it comes to deciding what the differences and similarities between "hard" sequences and iterators are. The troublesome part of this is that it really violates Python's idea of "duck typing": the ability to use a given object for a purpose just as long as it has the right behaviors, but not necessarily any inheritance or type restriction. The various things that are iterators or iterator-like sometimes act sequence-like, but other times do not; conversely, sequences often act iterator-like, but not always. Outside of those steeped in Python arcana, what does what is not obvious.


The main point of similarity is that everything that is sequence- or iterator-like lets you loop over it, whether using a for loop, a list comprehension, or a generator comprehension. Past that, divergences occur. The most important of these differences is that sequences can be indexed, and directly sliced, while iterators cannot. In fact, indexing into a sequence is probably the most common thing you ever do with a sequence -- why on earth does it fall down so badly on iterators? For example:

Listing 9. Sequence-like and iterator-like things
>>> r = range(10)

>>> i = iter(r)

>>> x = xrange(10)

>>> g = itertools.takewhile(lambda n: n<10, itertools.count())


For all of these, you can use for n in thing. In fact, if you "concretize" any of them with list(thing), you wind up with exactly the same result. But if you wish to obtain a specific item -- or a slice of a few items -- you need to start caring about the exact type of thing. For example:

Listing 10. When indexing succeeds and fails
>>> r[4]

>>> i[4]
TypeError: unindexable object

With enough contortions, you can get an item for every type of sequence/iterator. One way is to loop until you get there. Another hackish combination might be something like:

Listing 11. Contortions to obtain an index
>>> thing, temp = itertools.tee(thing)

>>> zip(temp, '.'*5)[-1][0]

The pre-call to itertools.tee() preserves the original iterator. For a slice, you might use the itertools.islice() function, wrapped up in contortions.

Listing 12. Contortions to obtain a slice
>>> r[4:9:2]
[4, 6, 8]

>>> list(itertools.islice(r,4,9,2))  # works for iterators
[4, 6, 8]

A class wrapper

You might combine these techniques into a class wrapper for convenience, using some magic methods:

Listing 13. Making iterators indexable
>>> class Indexable(object):
...     def __init__(self, it):
...         self.it = it
...     def __getitem__(self, x):
...         self.it, temp = itertools.tee(self.it)
...         if type(x) is slice:
...             return list(itertools.islice(temp, x.start, x.stop, x.step))
...         else:
...             return zip(temp, range(x+1))[-1][0]
...     def __iter__(self):
...         self.it, temp = itertools.tee(self.it)
...         return temp

>>> integers = Indexable(itertools.count())

>>> integers[4]
4
>>> integers[4:9:2]
[4, 6, 8]

So with some effort, you can coax an object to behave like both a sequence and an iterator. But this much effort should really not be necessary; indexing and slicing should "just work" whether a concrete sequence or an iterator is involved.

Notice that the Indexable class wrapper is still not as flexible as might be desirable. The main problem is that we create a new copy of the iterator every time. A better approach would be to cache the head of the sequence when we slice it, then use that cached head for future access of elements already examined. Of course, there is a trade-off between memory used and the speed penalty of running through the iterator. Nonetheless, the best thing would be if Python itself would do all of this "behind the scenes" -- the behavior might be fine-tuned somehow by "power users," but average programmers should not have to think about any of this.
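A sketch of that caching approach in modern Python (Python 3; the class and method names here are my own, not from the article): consumed items are kept in a list, so earlier elements stay indexable without re-copying the iterator each time. Negative indices and open-ended slices are left unsupported for brevity.

```python
import itertools

class CachedIndexable:
    """Wrap an iterator; cache consumed items so indexing is repeatable."""
    def __init__(self, it):
        self._it = iter(it)
        self._cache = []          # head of the sequence seen so far
    def _fill(self, n):
        # Pull from the underlying iterator until the cache holds n+1 items.
        while len(self._cache) <= n:
            self._cache.append(next(self._it))
    def __getitem__(self, x):
        if isinstance(x, slice):  # requires an explicit, non-negative stop
            self._fill(x.stop - 1)
            return self._cache[x]
        self._fill(x)             # plain non-negative index
        return self._cache[x]

integers = CachedIndexable(itertools.count())
print(integers[4])        # 4
print(integers[4:9:2])    # [4, 6, 8]
print(integers[2])        # 2 -- already cached, no iterator copying
```

This is exactly the memory/speed trade-off the author mentions: the cache grows with the largest index requested, but repeated access to the head is free.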

In the next installment in this series, I'll discuss accessing methods using attribute syntax.

[Oct 26, 2006] -- What's New in Python 2.5

It's clear that Python is under pressure from Ruby :-)

It's hard to believe Python is more than 15 years old already. While that may seem old for a programming language, in the case of Python it means the language is mature. In spite of its age, the newest versions of Python are powerful, providing everything you would expect from a modern programming language.

This article provides a rundown of the new and important features of Python 2.5. I assume that you're familiar with Python and aren't looking for an introductory tutorial, although in some cases I do introduce some of the material, such as generators.

[Sep 30, 2006] Python 2.5 Release We are pleased to announce the release of Python 2.5 (FINAL), the final, production release of Python 2.5, on September 19th, 2006.

with open('/etc/passwd', 'r') as f:
    for line in f:
        print line
        ... more processing code ...
Why stray from the mainstream C style is unclear to me. When developing computer-language syntax, natural-language imitation should not be the priority; being different for the sake of being different is so very early '90s:
cout << ( a==b ? "first option" : "second option" )
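For what it's worth, Python 2.5 itself added a conditional expression (PEP 308), the closest Python gets to the C ternary operator the commenter shows; note the value-comes-first ordering, which is the opposite of C:

```python
# Python 2.5+ conditional expression vs. C's (a==b ? "first" : "second")
a, b = 1, 1
label = "first option" if a == b else "second option"
print(label)   # first option
```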

[Sept 20, 2006] Python 101 cheat sheet

[01 Feb 2000] Python columnist Evelyn Mitchell brings you a quick reference and learning tools for newbies who want to get to know the language. Print it, keep it close at hand, and get down to programming!

[Jul 27, 2006] Microsoft Ships Python on .Net by Darryl K. Taft


Microsoft has shipped the release candidate for IronPython 1.0 on its CodePlex community source site.

In a July 25 blog post, S. "Soma" Somasegar, corporate vice president of Microsoft's developer division, praised the team for getting to a release candidate for a dynamic language that runs on the Microsoft CLI (Common Language Infrastructure). Microsoft designed the CLI to support a variety of programming languages. Indeed, "one of the great features of the .Net framework is the Common Language Infrastructure," Somasegar said.

"IronPython is a project that implements the dynamic object-oriented Python language on top of the CLI," Somasegar said. IronPython is both well-integrated with the .Net Framework and is a true implementation of the Python language, he said.

And ".Net integration means that this rich programming framework is available to Python developers and that they can interoperate with other .Net languages and tools," Somasegar said. "All of Python's dynamic features like an interactive interpreter, dynamically modifying objects and even metaclasses are available. IronPython also leverages the CLI to achieve good performance, running up to 1.5 times faster than the standard C-based Python implementation on the standard Pystone benchmark."

Click here to read an eWEEK interview with Python creator Guido van Rossum.

Moreover, the download of the release candidate for IronPython 1.0 "includes a tutorial which gives .Net programmers a great way to get started with Python and Python programmers a great way to get started with .Net," Somasegar said.

Somasegar said he finds it "exciting to see that the Visual Studio SDK [software development kit] team has used the IronPython project as a chance to show language developers how they can build support for their language into Visual Studio. They have created a sample, with source, that shows some of the basics required for integrating into the IDE including the project system, debugger, interactive console, IntelliSense and even the Windows forms designer. "

IronPython is the creation of Jim Hugunin, a developer on the Microsoft CLR (Common Language Runtime) team. Hugunin joined Microsoft in 2004.

In a statement written in July 2004, Hugunin said: "My plan was to do a little work and then write a short pithy article called, 'Why .Net is a terrible platform for dynamic languages.' My plans changed when I found the CLR to be an excellent target for the highly dynamic Python language. Since then I've spent much of my spare time working on the development of IronPython."

However, Hugunin said he grew frustrated with the slow pace of progress he could make by working on the project only in his spare time, so he decided to join Microsoft.

IronPython is governed by Microsoft's Shared Source license.

Dig Deep into Python Internals by Gigi Sayfan Part 1 of 2

Python, the open source scripting language, has grown tremendously popular in the last five years-and with good reason. Python boasts a sophisticated object model that wise developers can exploit in ways that Java, C++, and C# developers can only dream of.

This article is the first in a two-part series that will dig deep to explore the fascinating new-style Python object model, which was introduced in Python 2.2 and improved in 2.3 and 2.4. The object model and type system are very dynamic and allow quite a few interesting tricks. In this article I will describe the object model and type system; explore various entities; explain the life cycle of an object; and introduce some of the countless ways to modify and customize almost everything you thought immutable at runtime.

The Python Object Model

Python's objects are basically a bunch of attributes. These attributes include the type of the object, fields, methods, and base classes. Attributes are also objects, accessible through their containing objects.

The built-in dir() function is your best friend when it comes to exploring Python objects. It is designed for interactive use and therefore returns a list of attributes that the implementers of the dir function thought would be relevant for interactive exploration. This output, however, is just a subset of all the attributes of the object. The code sample below shows the dir function in action. It turns out that the integer 5 has many attributes that seem like mathematical operations on integers.


>>> dir(5)
['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__', '__delattr__', '__div__', 
'__divmod__', '__doc__', '__float__', '__floordiv__', '__getattribute__', '__getnewargs__', 
'__hash__', '__hex__', '__init__', '__int__', '__invert__', '__long__', '__lshift__', '__mod__',
'__mul__', '__neg__', '__new__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '
__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__',
'__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__',
'__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__str__', '__sub__', '__truediv__', '__xor__']
The function foo has many attributes too. The most important one is __call__ which means it is a callable type. You do want to call your functions, don't you?

>>> def foo(): pass
...
>>> dir(foo)
['__call__', '__class__', '__delattr__', '__dict__', '__doc__', '__get__', '__getattribute__',
'__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__str__', 'func_closure', 'func_code', 'func_defaults', 'func_dict', 
'func_doc', 'func_globals', 'func_name']
Next I'll define a class called 'A' with two methods, __init__ and dump, and an instance field 'x' and also an instance 'a' of this class. The dir function shows that the class's attributes include the methods and the instance has all the class attributes as well as the instance field.

>>> class A(object):
...     def __init__(self):
...             self.x = 3
...     def dump(self):
...             print self.x
>>> dir(A)

['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__', '__hash__', '__init__', 
'__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__', 
'__weakref__', 'dump']

>>> a = A()
>>> dir(a)

['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__', '__hash__', '__init__',
'__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__',
'__weakref__', 'dump', 'x']

The Python Type System
Python has many types, many more than you find in most languages (at least explicitly). This means that the interpreter has a lot of information at runtime, and the programmer can take advantage of it by manipulating types at runtime. Most types are defined in the types module, which is shown in the code immediately below. Types come in various flavors: there are built-in types, new-style classes (derived from object), and old-style classes (pre-Python 2.2). I will not discuss old-style classes since they are frowned upon by everybody and exist only for backward compatibility.

>>> import types
>>> dir(types)

['BooleanType', 'BufferType', 'BuiltinFunctionType', 'BuiltinMethodType', 'ClassType', 'CodeType',
'ComplexType', 'DictProxyType', 'DictType', 'DictionaryType', 'EllipsisType', 'FileType', 
'FloatType', 'FrameType', 'FunctionType', 'GeneratorType', 'InstanceType', 'IntType', 'LambdaType',
'ListType', 'LongType', 'MethodType', 'ModuleType', 'NoneType', 'NotImplementedType', 'ObjectType',
'SliceType', 'StringType', 'StringTypes', 'TracebackType', 'TupleType', 'TypeType', 
'UnboundMethodType', 'UnicodeType', 'XRangeType', '__builtins__', '__doc__', '__file__', '__name__']
Python's type system is object-oriented. Every type (including built-in types) is derived (directly or indirectly) from object. Another interesting fact is that types, classes and functions are all first-class citizens and have a type themselves. Before I delve down into some juicy demonstrations let me introduce the built-in function 'type'. This function returns the type of any object (and also serves as a type factory). Most of these types are listed in the types module, and some of them have a short name. Below I've unleashed the 'type' function on several objects: None, integer, list, the object type, type itself, and even the 'types' module. As you can see the type of all types (list type, object, and type itself) is 'type' or in its full name types.TypeType (no kidding, that's the name of the type).

>>> type(None)
<type 'NoneType'>

>>> type(5)
<type 'int'>

>>> x = [1,2,3]
>>> type(x)
<type 'list'>

>>> type(list)
<type 'type'> 

>>> type(type)
<type 'type'>

>>> type(object)
<type 'type'>

>>> import types
>>> type(types)
<type 'module'>

>>> type==types.TypeType
True
What is the type of classes and instances? Well, classes are types of course, so their type is always 'type' (regardless of inheritance). The type of class instances is their class.

>>> class A(object):
...     pass

>>> a = A()

>>> type(A)
<type 'type'>

>>> type(a)
<class '__main__.A'>

>>> a.__class__
<class '__main__.A'>
It's time for the scary part: a vicious cycle. 'type' is the type of object, but object is the base class of type. Come again? 'type' is the type of object, but object is the base class of type. That's right: circular dependency. 'object' is a 'type' and 'type' is an 'object'.

>>> type(object)
<type 'type'>

>>> type.__bases__
(<type 'object'>,)

>>> object.__bases__
()
How can it be? Well, since the core entities in Python are not implemented themselves in Python (there is PyPy but that's another story) this is not really an issue. The 'object' and 'type' are not really implemented in terms of each other.

The one important thing to take home from this is that types are objects and are therefore subject to all the ramifications thereof. I'll discuss those ramifications very shortly.

Instances, Classes, Class Factories, and Metaclasses
When I talk about instances I mean object instances of a class derived from object (or the object class itself). A class is a type, but as you recall it is also an object (of type 'type'). This allows classes to be created and manipulated at runtime. This code demonstrates how to create a class at runtime and instantiate it.

def init_method(self, x, y):
      self.x = x
      self.y = y
def dumpSum_method(self):
      print self.x + self.y

D = type('DynamicClass', (object,),
         {'__init__':init_method, 'dumpSum':dumpSum_method})
d = D(3, 4)
As you can see I created two functions (init_method and dumpSum_method) and then invoked the ubiquitous 'type' function as a class factory to create a class called 'DynamicClass,' which is derived from 'object' and has two methods (one is the __init__ constructor).

It is pretty simple to create the functions themselves on the fly too. Note that the methods I attached to the class are regular functions that can be called directly (provided their self-argument has x and y members, similar to C++ template arguments).
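A Python 3 re-rendering of the snippet above that can be run as-is (my variation: the dumpSum method returns the sum instead of printing it, so the result is easy to check):

```python
def init_method(self, x, y):
    self.x = x
    self.y = y

def sum_method(self):           # returns rather than prints the sum
    return self.x + self.y

# type(name, bases, attr_dict) acts as a runtime class factory.
D = type('DynamicClass', (object,),
         {'__init__': init_method, 'dumpSum': sum_method})
d = D(3, 4)
print(d.dumpSum())              # 7
print(type(d).__name__)         # DynamicClass
```

Nothing distinguishes D from a class defined with a class statement: it can be subclassed, instantiated, and inspected like any other.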

Functions, Methods and other Callables
Python enjoys a plethora of callable objects. Callable objects are function-like objects that can be invoked by calling their () operator. Callable objects include plain functions (module-level), methods (bound, unbound, static, and class methods) and any other object that has a __call__ function attribute (either in its own dictionary, via one of its ancestors, or through a descriptor).

It's truly complicated, so the bottom line to remember is that all these flavors of callables eventually boil down to a plain function. For example, in the code below the class A defines a method named 'foo' that can be accessed through:

  1. an instance so it is a bound method (bound implicitly to its instance)
  2. through the class A itself and then it is an unbound method (the instance must be supplied explicitly)
  3. directly from A's dictionary, in which case it is a plain function (but you must still call it with an instance of A).
So, all methods are actually functions, but the runtime assigns different types depending on how you access them.

class A(object):
    def foo(self):
        print 'I am foo'

>>> a = A()
>>> a.foo
<bound method A.foo of <__main__.A object at 0x00A13EB0>>

>>> A.foo
<unbound method A.foo>

>>> A.__dict__['foo']
<function foo at 0x00A0A3F0>

>>> a.foo()
I am foo
>>> A.foo(a)
I am foo
>>> A.__dict__['foo'](a)
I am foo
Let's talk about static methods and class methods. Static methods are very simple. They are similar to static methods in Java/C++/C#. They are scoped by their class but they don't have a special first argument like instance methods or class methods do; they act just like a regular function (you must provide all the arguments since they can't access any instance fields). Static methods are not so useful in Python because regular module-level functions are already scoped by their module and they are the natural mapping to static methods in Java/C++/C#.

Class methods are an exotic animal. Their first argument is the class itself (traditionally named cls) and they are used primarily in esoteric scenarios. Static and class methods actually return a wrapper around the original function object. In the code that follows, note that the static method may be accessed either through an instance or through a class. The class method accepts a cls instance as its first argument but cls is invoked through a class directly (no explicit class argument). This is different from an unbound method where you have to provide an instance explicitly as first argument.

class A(object):
    def foo():
        print 'I am foo'
    foo = staticmethod(foo)
    def foo2(cls):
        print 'I am foo2', cls
    foo2 = classmethod(foo2)
    def foo3(self):
        print 'I am foo3', self

>>> a = A()
>>> a.foo()
I am foo

>>> A.foo()
I am foo

>>> A.foo2()
I am foo2 <class '__main__.A'>

>>> a.foo3()
I am foo3 <__main__.A object at 0x00A1AA10>
Note that classes are callable objects by themselves and operate as instance factories. When you "call" a class you get an instance of that class as a result.

A different kind of callable object is an object that has a __call__ method. If you want to pass around a function-like object with its context intact, __call__ can be a good thing. Listing 1 features a simple 'add' function that can be replaced with a caching adder class that stores results of previous calculations. First, notice that the test function expects a function-like object called 'add' and it just invokes it as a function. The 'test' function is called twice-once with a simple function and a second time with the caching adder instance. Continuations in Python can also be implemented using __call__ but that's another article.

Metaclasses are a concept that doesn't exist in today's mainstream programming languages. A metaclass is a class whose instances are classes. You have already encountered a metaclass in this article: 'type'. When you invoke 'type' with a class name, a base-classes tuple, and an attribute dictionary, it creates a new user-defined class of the specified type. So the __class__ attribute of every class always contains its metaclass (normally 'type').

That's nice, but what can you do with a metaclass? It turns out, you can do plenty. Metaclasses allow you to control everything about the class that will be created: name, base classes, methods, and fields. How is it different from simply defining any class you want or even creating a class dynamically on the fly? Well, it allows you to intercept the creation of classes that are predefined as in aspect-oriented programming. This is a killer feature that I'll be discussing in a follow-up to this article.

After a class is defined, the interpreter looks for a meta-class. If it finds one it invokes its __init__ method with the class instance and the meta-class gets a stab at modifying it (or returning a completely different class). The interpreter will use the class object returned from the meta-class to create instances of this class.

So, how do you attach a custom metaclass to a class (new-style classes only)? Either you declare a __metaclass__ field, or one of your ancestors has one. The inheritance route is intriguing, because Python allows multiple inheritance. If you inherit from two classes that have custom metaclasses, you are in for a treat: one of the metaclasses must derive from the other, and the actual metaclass of your class will be the most derived one:

class M1(type): pass
class M2(M1):   pass

class C2(object): __metaclass__=M2    
class C1(object): __metaclass__=M1
class C3(C1, C2): pass

classes = [C1, C2, C3]
for c in classes:
    print c, c.__class__
    print '------------'                 

<class '__main__.C1'> <class '__main__.M1'>
<class '__main__.C2'> <class '__main__.M2'>
<class '__main__.C3'> <class '__main__.M2'>

Day In The Life of a Python Object
To get a feel for all the dynamics involved in using Python objects, let's track a plain object (no tricks) from its class definition, through class instantiation and attribute access, and on to its demise. Later on I'll introduce the hooks that allow you to control and modify this workflow.

The best way to go about it is with a monstrous simulation. Listing 2 contains a simulation of a bunch of monsters chasing and eating some poor person. There are three classes involved: a base Monster class, a MurderousHorror class that inherits from the Monster base class, and a Person class that gets to be the victim. I will concentrate on the MurderousHorror class and its instances.

Class Definition
MurderousHorror inherits the 'frighten' and 'eat' methods from Monster and adds a 'chase' method and a 'speed' field. The 'hungry_monsters' class field stores a list of all the hungry monsters and is always available through the class, base class, or instance (Monster.hungry_monsters, MurderousHorror.hungry_monsters, or m1.hungry_monsters). In the code below you can see (via the handy 'dir' function) the MurderousHorror class and its m1 instance. Note that methods such as 'eat,' 'frighten,' and 'chase' appear in both, but instance fields such as 'hungry' and 'speed' appear only in m1. The reason is that instance methods can be accessed through the class as unbound methods, but instance fields can be accessed only through an instance.
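Listing 2 is not reproduced here; a stripped-down sketch of the hierarchy described above (modern Python syntax, details simplified) shows the class field versus instance field distinction:

```python
class Monster:
    hungry_monsters = []            # class field, shared through the class

    def __init__(self):
        self.hungry = True          # instance field

    def frighten(self):
        print('Boo!')

    def eat(self, victim):
        self.hungry = False

class MurderousHorror(Monster):
    def __init__(self, speed):
        self.speed = speed          # instance field, exists only on instances

    def chase(self, victim):
        print('chasing', victim)

m1 = MurderousHorror(speed=10)
print('chase' in dir(MurderousHorror))      # True: methods live on the class
print('speed' in MurderousHorror.__dict__)  # False: no instance field on the class
print('speed' in m1.__dict__)               # True: created by __init__
```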

class NoInit(object):
    def foo(self):
        self.x = 5
    def bar(self):
        print self.x
if __name__ == '__main__':
    ni = NoInit()
    assert(not ni.__dict__.has_key('x'))
    except AttributeError, e:
        print e

'NoInit' object has no attribute 'x'
Object Instantiation and Initialization
Instantiation in Python is a two-phase process. First, __new__ is called with the class as the first argument (followed by the rest of the arguments) and should return a new, uninitialized instance of the class. Afterward, __init__ is called with the instance as the first argument. (You can read more about __new__ in the Python reference manual.)
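A small sketch (modern Python syntax; the Tracked class is made up for illustration) makes the two phases visible:

```python
class Tracked:
    def __new__(cls, *args, **kwargs):
        print('__new__: allocating a', cls.__name__)
        return super().__new__(cls)    # an uninitialized instance

    def __init__(self, value):
        print('__init__: initializing it')
        self.value = value

t = Tracked(42)   # prints the __new__ line first, then the __init__ line
print(t.value)    # 42
```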

When a MurderousHorror is instantiated, its __init__ method is called. __init__ is similar to a constructor in C++/Java/C#. The instance calls the Monster base class's __init__ and initializes its speed field. The difference between Python and C++/Java/C# is that in Python there is no notion of a parameter-less default constructor that is automatically generated for every class that doesn't define one. There is also no automatic call to the base class's __init__ if the derived class doesn't call it explicitly. This is quite understandable, since no default __init__ is generated.

In C++/Java/C# you declare instance variables in the class body. In Python you create them inside a method by explicitly assigning to 'self.SomeAttribute'. So, if a class has no __init__ method, its instances have no instance fields initially. That's right. An instance doesn't HAVE any instance fields. Not even uninitialized ones.

The previous code sample (above) is a perfect example of this phenomenon. The NoInit class has no __init__ method. The x field is created (put into the instance's __dict__) only when foo() is called. When the program calls bar() immediately after instantiation, the 'x' attribute is not there yet, so I get an AttributeError exception. Because my code is robust, fault tolerant, and self healing (in carefully staged toy programs), it bravely recovers and continues to the horizon by calling foo(), thus creating the 'x' attribute, and can then print 5 successfully.

Note that in Python __init__ is not much more than a regular method. It is indeed called on instantiation, but you are free to call it again after initialization, and you may call other __init__ methods on the same object from the original __init__. This last capability is also available in C#, where it is called constructor chaining. It is useful when you have multiple constructors that share common initialization code that is itself one of the constructors/initializers. In this case you don't need to define yet another special method containing the common code and call it from all the constructors/initializers; you can just call the shared constructor/initializer directly from all of them.

Attribute Access
An attribute is an object that can be accessed from its host using the dot notation. There is no difference at the attribute access level between methods and fields. Methods are first-class citizens in Python. When you invoke a method of an object, the method object is looked up first using the same mechanism as a non-callable field. Then the () operator is applied to the returned object. This example demonstrates this two-step process:

class A(object):
    def foo(self):
        print 3
if __name__ == '__main__':            
    a = A()
    f =
    print f
    print f.im_self

<bound method of <__main__.A object at 0x00A03EB0>>
<__main__.A object at 0x00A03EB0>
The code retrieves the bound method object and assigns it to a local variable 'f'. 'f' is a bound method object, which means its im_self attribute points to the instance it is bound to. Finally, foo is invoked both through the instance ( and by calling f() directly, with identical results. Assigning bound methods to local variables is a well-known optimization technique, due to the high cost of attribute lookup. If you have a piece of Python code that seems to be performing poorly, there is a good chance you can find a tight loop that does a lot of redundant lookups. I will talk later about all the ways you can customize the attribute access process and why it is so costly.
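The optimization mentioned above looks like this in practice (a sketch; the difference only matters in tight loops):

```python
def build_slow(n):
    result = []
    for i in range(n):
        result.append(i * i)     # 'result.append' is looked up on every iteration
    return result

def build_fast(n):
    result = []
    append = result.append      # look up the bound method once, outside the loop
    for i in range(n):
        append(i * i)           # plain local-variable call: no attribute lookup
    return result

assert build_slow(1000) == build_fast(1000)
```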

The __del__ method is called when an instance is about to be destroyed (its reference count reaches 0). It is not guaranteed that the method will ever be called in situations such as circular references between objects or references to the object in an exception. Also the implementation of __del__ may create a new reference to its instance so it will not be destroyed after all. Even when everything is simple and __del__ is called, there is no telling when it will actually be called due to the nature of the garbage collector. The bottom line is if you need to free some scarce resource attached to an object do it explicitly when you are done using it and don't wait for __del__.

A try-finally block is a popular alternative for resource cleanup, since it guarantees the resource will be released even in the face of exceptions. The last reason not to use __del__ is that its interaction with the 'del' built-in function may confuse programmers. 'del' simply decrements the reference count by 1; it doesn't call __del__ or cause the object to be magically destroyed. In the next code sample I use the sys.getrefcount() function to determine the reference count of an object before and after calling 'del'. Note that I subtract 1 from the sys.getrefcount() result because it also counts the temporary reference to its own argument.

import sys

class A(object):
    def __del__(self):
        print "That's it for me"
if __name__ == '__main__':            
    a = A()
    b = a
    print sys.getrefcount(a)-1
    del b
    print sys.getrefcount(a)-1

That's it for me

Hacking Python

Let the games begin. In this section I will explore different ways to customize attribute access. The topics include the __getattribute__ hook, descriptors, and properties.

- __getattr__, __setattr__ and __getattribute__
These special methods control attribute access to class instances. The standard algorithm for attribute lookup returns an attribute from the instance dictionary or from one of its base classes' dictionaries (descriptors will be described in the next section). These methods are supposed to return an attribute object or raise an AttributeError exception. If you define some of them in your class, they will be called during attribute access under certain conditions. Listing 3 is an interactive example. It is designed to let you play around with it and comment out various functions to see the effect. It introduces the class A with a single 'x' attribute and __getattr__, __setattr__, and __getattribute__ methods. __getattribute__ and __setattr__ simply forward any attribute access to the default behavior (look up or set the value in the dictionary). __getattr__ always returns 7. The main program starts by assigning 6 to the non-existent attribute 'y' (this happens via __setattr__) and then prints the preexisting 'x', the newly created 'y', and the still non-existent 'z'. 'x' and 'y' exist now, so they are accessible via __getattribute__. 'z' doesn't exist, so __getattribute__ fails and __getattr__ gets called and returns 7. (Author's Note: This is contrary to the documentation, which claims that if __getattribute__ is defined, __getattr__ will never be called; that is not the actual behavior.)
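Listing 3 is not reproduced here; a minimal sketch of the same setup (modern Python syntax) behaves as described:

```python
class A:
    def __init__(self):
        self.x = 5

    def __getattr__(self, name):
        # called only after the normal lookup has failed
        return 7

    def __setattr__(self, name, value):
        # forward to the default behavior: store in the instance dict
        object.__setattr__(self, name, value)

    def __getattribute__(self, name):
        # called for EVERY attribute read; forward to the default lookup
        return object.__getattribute__(self, name)

a = A()
a.y = 6           # goes through __setattr__
print(a.x, a.y)   # 5 6 -- both found by __getattribute__
print(a.z)        # 7   -- __getattribute__ fails, so __getattr__ is called
```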

A descriptor is an object that implements three methods: __get__, __set__, and __delete__. If you put such a descriptor in the __dict__ of some class, then whenever the attribute with the name of the descriptor is accessed, one of the special methods is executed according to the access type (__get__ for read, __set__ for write, and __delete__ for delete). This simple indirection scheme allows total control over attribute access.

The following code sample shows a silly write-only descriptor used to store passwords. Its value can be neither read nor deleted (both operations raise an AttributeError exception). Of course, the descriptor object itself and the password store can still be accessed directly through A.__dict__['password'].

class WriteOnlyDescriptor(object):
    def __init__(self):
  = {}

    def __get__(self, obj, objtype=None):
        raise AttributeError

    def __set__(self, obj, val):
[obj] = val

    def __delete__(self, obj):
        raise AttributeError

class A(object):
    password = WriteOnlyDescriptor()

if __name__ == '__main__':
    a = A()
        print a.password
    except AttributeError, e:
        print e.__doc__
    a.password = 'secret'
    print A.__dict__['password'].store[a]
Descriptors with both __get__ and __set__ methods are called data descriptors. In general, data descriptors take lookup precedence over instance dictionaries, which take precedence over non-data descriptors. If you try to assign a value to a non-data descriptor attribute the new value will simply replace the descriptor. However, if you try to assign a value to a data descriptor the __set__ method of the descriptor will be called.
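A short sketch demonstrates these precedence rules:

```python
class DataDesc:
    def __get__(self, obj, objtype=None):
        return 'data descriptor'
    def __set__(self, obj, value):
        pass                        # having __set__ makes this a data descriptor

class NonDataDesc:
    def __get__(self, obj, objtype=None):
        return 'non-data descriptor'

class C:
    d = DataDesc()
    n = NonDataDesc()

c = C()
c.__dict__['d'] = 'instance value'  # plant entries straight into the instance dict
c.__dict__['n'] = 'instance value'
print(c.d)  # data descriptor -- beats the instance dict
print(c.n)  # instance value  -- instance dict beats the non-data descriptor
```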

Properties are managed attributes. When you define a property you can provide get, set, and del functions, as well as a doc string. When the attribute is accessed, the corresponding function is called. This sounds a lot like descriptors, and indeed it is mostly syntactic sugar for a common case.

This final code sample is another version of the silly password store, using properties. The __password field is "private." Class A has a 'password' property that, when accessed as 'a.password', invokes the getPassword or setPassword method. Because getPassword raises an AttributeError exception, the only way to get to the actual value of the __password attribute is by circumventing Python's fake privacy mechanism: prefixing the attribute name with an underscore and the class name, as in a._A__password. How are properties different from descriptors? They are less powerful and flexible, but more pleasing to the eye. With descriptors you must define an external descriptor class, which means you can reuse the same descriptor for different classes and can also replace regular attributes with descriptors at runtime.

class A(object):
    def __init__(self):
        self.__password = None

    def getPassword(self):
        raise AttributeError

    def setPassword(self, password):        
        self.__password = password

    password = property(getPassword, setPassword)    
if __name__ == '__main__':
    a = A()
        print a.password
    except AttributeError, e:
        print e.__doc__
    a.password = 'secret'
    print a._A__password
Attribute not found.
Properties are more cohesive: the get and set functions are usually methods of the same class that contains the property definition. For programmers coming from languages such as C# or Delphi, properties will make them feel right at home (too bad Java is still sticking to its verbose JavaBeans).

Python's Richness a Mixed Blessing
There are many mechanisms to control attribute access at runtime, starting with simple dynamic replacement of attributes in the __dict__. Other methods include __getattr__/__setattr__, descriptors, and finally properties. This richness is a mixed blessing. It gives you a lot of choice, which is good because you can pick whatever is appropriate to your case. But it is also bad because you HAVE to choose, even if you just choose to ignore it. The assumption, for better or worse, is that people who work at this level should be able to handle the mental load.

In my next article, I will pick up where I've left off. I'll begin by contrasting metaclasses with decorators, then explore the Python execution model, and explain how to examine stack frames at runtime. Finally, I'll demonstrate how to augment the Python language itself using these techniques. I'll introduce a private access checking feature that can be enforced at runtime.

Gigi Sayfan is a software developer working on CELL applications for Sony Playstation3. He specializes in cross-platform object-oriented programming in C/C++/C#/Python with emphasis on large-scale distributed systems.

[Feb 20, 2006] Project details for Meld

Meld is a GNOME 2 visual diff and merge tool. It integrates especially well with CVS. The diff viewer lets you edit files in place (diffs update dynamically), and a middle column shows detailed changes and allows merges. The margins show location of changes for easy browsing, and it also features a tabbed interface that allows you to open many diffs at once.

Information about CWM - TimBL's Closed World Machine

CWM is a popular Semantic Web program that can do the following tasks:

CWM was written in Python from 2000-10 onwards by Tim Berners-Lee and Dan Connolly of the W3C.

This resource is provided so that people can use CWM, find out what it does (documentation used to be sparse), and perhaps even contribute to its development.

What's new in Python 2.4

New or upgraded built-ins

Extending and Embedding the Python Interpreter

Extending Python with C (Score:1)
by frehe (6916) on Wednesday April 16, @03:41PM (#5745981)

If you need more speed than native Python provides, you can always write code in C and wrap it so it is callable from Python. The wrapping is really easy to do, once you have understood the general concepts involved in it. The product I currently work on has about 10000 lines of C code (crypto and networking) which is used this way, and it works perfectly. For more information about extending Python with C, see:

Extending and Embedding the Python Interpreter

Dive Into Python Python for experienced programmers

Dive Into Python is a free Python book for experienced programmers. You can read the book online, or download it in a variety of formats. It is also available in multiple languages.

This book is still being written. The first three chapters are a solid overview of Python programming. Chapters covering HTML processing, XML processing, and unit testing are complete, and a chapter covering regression testing is in progress. This is not a teaser site for some larger work for sale; all new content will be published here, for free, as soon as it's ready. You can read the revision history to see what's new. Updated 28 July 2002

Wing IDE for Python. A Python IDE that includes a source browser and editor; the editor supports folding

Reference Manual Wing IDE Version 1.1.4

A Python IDE that includes a source browser and editor.

developerWorks Linux Open source projects Charming Python Iterators and simple generators

What's New in Python 2.2

Generators are a very interesting feature of Python 2.2; a generator is essentially a coroutine.

Generators are another new feature, one that interacts with the introduction of iterators.

You're doubtless familiar with how function calls work in Python or C. When you call a function, it gets a private namespace where its local variables are created. When the function reaches a return statement, the local variables are destroyed and the resulting value is returned to the caller. A later call to the same function will get a fresh new set of local variables. But, what if the local variables weren't thrown away on exiting a function? What if you could later resume the function where it left off? This is what generators provide; they can be thought of as resumable functions.

Here's the simplest example of a generator function:

def generate_ints(N):
    for i in range(N):
        yield i

A new keyword, yield, was introduced for generators. Any function containing a yield statement is a generator function; this is detected by Python's bytecode compiler which compiles the function specially as a result. Because a new keyword was introduced, generators must be explicitly enabled in a module by including a from __future__ import generators statement near the top of the module's source code. In Python 2.3 this statement will become unnecessary.

When you call a generator function, it doesn't return a single value; instead it returns a generator object that supports the iterator protocol. On executing the yield statement, the generator outputs the value of i, similar to a return statement. The big difference between yield and a return statement is that on reaching a yield the generator's state of execution is suspended and local variables are preserved. On the next call to the generator's .next() method, the function will resume executing immediately after the yield statement. (For complicated reasons, the yield statement isn't allowed inside the try block of a try...finally statement; read PEP 255 for a full explanation of the interaction between yield and exceptions.)

Here's a sample usage of the generate_ints generator:

>>> gen = generate_ints(3)
>>> gen
<generator object at 0x8117f90>
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in generate_ints

You could equally write for i in generate_ints(5), or a,b,c = generate_ints(3).

Inside a generator function, the return statement can only be used without a value, and signals the end of the procession of values; afterwards the generator cannot return any further values. return with a value, such as return 5, is a syntax error inside a generator function. The end of the generator's results can also be indicated by raising StopIteration manually, or by just letting the flow of execution fall off the bottom of the function.
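Raising StopIteration manually is exactly what a hand-written iterator class does; a sketch of such a class (using __next__, which was spelled next() in Python 2):

```python
class GenerateInts:
    """Hand-written iterator equivalent to the generate_ints generator."""
    def __init__(self, N):
        self.count = 0    # the generator's 'local variable' lives on as a field
        self.N = N

    def __iter__(self):
        return self

    def __next__(self):   # spelled next() in Python 2
        if self.count >= self.N:
            raise StopIteration      # end of the procession of values
        value = self.count
        self.count += 1
        return value

print(list(GenerateInts(3)))  # [0, 1, 2]
```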

You could achieve the effect of generators manually by writing your own class and storing all the local variables of the generator as instance variables. For example, returning a list of integers could be done by setting self.count to 0, and having the next() method increment self.count and return it. However, for a moderately complicated generator, writing a corresponding class would be much messier. Lib/test/ contains a number of more interesting examples. The simplest one implements an in-order traversal of a tree using generators recursively.

# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x

Two other examples in Lib/test/ produce solutions for the N-Queens problem (placing N queens on an NxN chess board so that no queen threatens another) and the Knight's Tour (a route that takes a knight to every square of an NxN chessboard without visiting any square twice).

The idea of generators comes from other programming languages, especially Icon, where the idea of generators is central. One example from ``An Overview of the Icon Programming Language'' gives an idea of what this looks like:

sentence := "Store it in the neighboring harbor"
if (i := find("or", sentence)) > 5 then write(i)

In Icon the find() function returns the indexes at which the substring ``or'' is found: 3, 23, 33. In the if statement, i is first assigned