Software Engineering: A study akin to numerology and astrology, but lacking the precision
of the former and the success of the latter.
KISS Principle /kis' prin'si-pl/ n. "Keep It Simple, Stupid". A maxim often invoked when discussing design to fend off creeping featurism and control development complexity. Possibly related to the marketroid maxim on sales presentations, "Keep It Short and Simple".
creeping featurism /kree'ping fee'chr-izm/ n. [common] 1. Describes a systematic tendency to load more chrome and features onto systems at the expense of whatever elegance they may have possessed when originally designed. See also feeping creaturism. "You know, the main problem with BSD Unix has always been creeping featurism." 2. More generally, the tendency for anything complicated to become even more complicated because people keep saying "Gee, it would be even better if it had this feature too". (See feature.) The result is usually a patchwork because it grew one ad-hoc step at a time, rather than being planned. Planning is a lot of work, but it's easy to add just one extra little feature to help someone ... and then another ... and another... When creeping featurism gets out of hand, it's like a cancer. Usually this term is used to describe computer programs, but it could also be said of the federal government, the IRS 1040 form, and new cars. A similar phenomenon sometimes afflicts conscious redesigns; see second-system effect. See also creeping elegance. (Jargon File)
Software engineering (SE) has probably the largest concentration of snake oil salesmen after OO programming, and software architecture is far from being an exception. Many published software methodologies and architectures claim benefits that most of them cannot deliver (UML is one good example). I see a lot of oversimplification of the real situation and a lot of unnecessary (and useless) formalisms. The main ideas advocated here are simplification of software architecture (including use of the well-understood "Pipe and Filter" model) and the use of scripting languages.
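To make the "Pipe and Filter" idea concrete, here is a minimal, hypothetical sketch in Python (the filter names and the sample log file are my own illustration, not taken from any particular system): each stage is a small generator that consumes a stream and yields a transformed stream, much like a Unix shell pipeline.

    # Minimal Pipe-and-Filter sketch: each filter is a generator that reads a
    # stream of lines and yields a transformed stream. Filters are composed by
    # nesting, just like "cat app.log | grep ERROR | cut -d' ' -f2" in a shell.

    def read_lines(path):
        with open(path) as f:
            for line in f:
                yield line.rstrip("\n")

    def grep(lines, substring):
        for line in lines:
            if substring in line:
                yield line

    def extract_field(lines, index, sep=" "):
        for line in lines:
            parts = line.split(sep)
            if len(parts) > index:
                yield parts[index]

    if __name__ == "__main__":
        # Hypothetical usage: pull the second field of every ERROR line.
        pipeline = extract_field(grep(read_lines("app.log"), "ERROR"), 1)
        for value in pipeline:
            print(value)

Each filter knows nothing about its neighbors, which is exactly what keeps this style simple: stages can be added, removed, or reordered without touching the others.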
There are few quality general architectural resources available on the Net, so the list below represents only some links that interest me personally. The stress here is on skepticism, and this collection is neither complete nor up to date. Still, it might help students who are trying to study this complex and interesting subject. And if you are already a software architect, you might be able to expand your knowledge of the subject.
Excessive zeal in adopting some fashionable but questionable methodology is a "real and present danger" in software engineering. This is not a new threat: it started with the structured programming revolution and then the search for the verification "holy land", with Edsger W. Dijkstra as the new prophet of an obscure cult. The main problem is that all those methodologies contain perhaps 20% of useful elements, while the other 80% kill the useful elements and probably introduce some real disadvantages. After a dozen or so partially useful but mostly useless methodologies came, were enthusiastically adopted, and went into oblivion, we should definitely be skeptical.
All this "extreme programming" idiotism or
CMM Lysenkoism should be treated as we treat dangerous religious sects.
It's undemocratic and stupid to prohibit them but it's equally dangerous and stupid to follow their
recommendations ;-). As Talleyrand advised to junior diplomats: "Above all, gentlemen, not too much
zeal. " By this phrase, Talleyrand was reportedly recommended to his subordinates that important
decisions must be based upon the exercise of cool-headed reason and not upon emotions or any waxing
or waning popular delusion.
One interesting fact about software architecture is that it can't be practiced from the "ivory tower".
Only when you do coding yourself and faces limitations of the tools and hardware you can create a great
architecture. See Real Insights into Architecture Come Only From
Actual Programming
The primary purpose of software architecture courses is to teach students some higher-level skills useful in designing and implementing complex software systems. They usually include some information about classification (general and domain-specific architectures), analysis, and tools. As the folks at Breadmear Consulting aptly noted in their paper on the role of the software architect:
A simplistic view of the role is that architects create architectures,
and their responsibilities encompass all that is involved in doing so. This would include articulating
the architectural vision, conceptualizing and experimenting with alternative architectural approaches,
creating models and component and interface specification documents, and validating the architecture
against requirements and assumptions.
However, any experienced architect knows that the role involves not
just these technical activities, but others that are more political and strategic in nature on the
one hand, and more like those of a consultant, on the other. A sound sense of business and technical
strategy is required to envision the "right" architectural approach to the customer's problem set,
given the business objectives of the architect's organization. Activities in this area include the
creation of technology roadmaps, making assertions about technology directions and determining their
consequences for the technical strategy and hence architectural approach.
Further, architectures are seldom embraced without considerable challenges
from many fronts. The architect thus has to shed any distaste for what may be considered
"organizational politics", and actively work to sell the architecture to its various stakeholders,
communicating extensively and working networks of influence to ensure the ongoing success of the
architecture.
But "buy-in" to the architecture vision is not enough either. Anyone
involved in implementing the architecture needs to understand it. Since weighty architectural
documents are notorious dust-gatherers, this involves creating and teaching tutorials and actively
consulting on the application of the architecture, and being available to explain the rationale behind
architectural choices and to make amendments to the architecture when justified.
Lastly, the architect must lead--the architecture team, the developer
community, and, in its technical direction, the organization.
Again, I would like to stress that the main principle of software architecture is simple and well known -- it's the famous KISS principle. While the principle is simple, its implementation is not, and a lot of developers (especially developers with limited resources) have paid dearly for violating it. Here open source tools can help, because for those tools complexity is not the competitive advantage it is for closed source tools. But that is not necessarily true of actual tools, as one problem with open source projects is a change of leader. That is the moment when many projects lose architectural integrity and become a Byzantine compendium of conflicting approaches.
I appreciate an architecture of a software system that leads to a small implementation with a simple, Spartan interface. These days the use of scripting languages can cut the volume of code by more than half in comparison with Java. That's why this site advocates the use of scripting languages for complex software projects.
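As a rough, hedged illustration of that claim (the exact ratio will vary by task and by programmer, and this is not a benchmark), here is a complete word-frequency counter in a few lines of Python; a straightforward Java equivalent typically needs a class declaration, explicit types, I/O ceremony and loop or stream boilerplate several times this size.

    # Count word frequencies in a file and print the ten most common words.
    import sys
    from collections import Counter

    with open(sys.argv[1]) as f:
        counts = Counter(f.read().lower().split())

    for word, n in counts.most_common(10):
        print(f"{n:6d} {word}")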
"Real Beauty can be found in Simplicity," and as you may know already, ' "Less" sometimes equal "More".'
I continue to adhere to that philosophy. If you, too, have an eye for simplicity in software engineering,
then you might benefit from this collection of links.
I think writing a good software system is somewhat similar to writing a multivolume series of books. Most writers will rewrite each chapter several times and change the general structure a lot. Rewriting large systems is more difficult, but also very beneficial. It makes sense to always consider the current version of the system a draft that can be substantially improved and simplified by discovering some new unifying and simplifying paradigm. Sometimes you can take a wrong direction, but still, "nothing ventured, nothing gained."
On the subsystem level a decent configuration management system can help with going back. Too often people try to write and debug their architecturally flawed "first draft" when it would have been much simpler and faster to rewrite it based on a better understanding of the architecture and of the problem. Rewriting can actually save the time spent debugging the old version. That way, when you're done, you may get an easy-to-understand, simple software system instead of just a system that "seems to work okay" (which is only as correct as your testing).
On the component level, refactoring (see Refactoring: Improving the Design of Existing Code) might be a useful simplification technique. Actually "rewriting" is a simpler term, but let's assume that refactoring is rewriting with some ideological frosting ;-). See the Slashdot book review of Refactoring: Improving the Design of Existing Code.
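As a small, invented example of what such "rewriting with ideological frosting" looks like in practice, the sketch below applies one classic refactoring from that book (Extract Function) to a toy report routine; the behavior stays the same, but each piece becomes independently readable and testable.

    # Before: one function that parses, filters and formats in a single tangle.
    def report_before(lines):
        out = []
        for line in lines:
            parts = line.split(",")
            if len(parts) == 2 and int(parts[1]) > 0:
                out.append(f"{parts[0]}: {int(parts[1])}")
        return "\n".join(out)

    # After: the same behavior, with the steps extracted into named functions.
    def parse(line):
        name, value = line.split(",")
        return name, int(value)

    def is_positive(record):
        return record[1] > 0

    def format_record(record):
        return f"{record[0]}: {record[1]}"

    def report_after(lines):
        records = (parse(l) for l in lines if len(l.split(",")) == 2)
        return "\n".join(format_record(r) for r in records if is_positive(r))

    # The refactored version must behave identically to the original.
    assert report_before(["a,3", "b,-1"]) == report_after(["a,3", "b,-1"])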
I have found one reference on simplicity in SE: R. S. Pressman. Simplicity. In Software Engineering,
A Practitioner's Approach, page 452. McGraw Hill, 1997.
Another relevant work (he tries to promote his own solution -- you can skip that part) is the critique of "the technology mudslide" in The Innovator's Dilemma, a book by Harvard Business School professor Clayton M. Christensen. He coined the term "technology mudslide", a concept very similar to Brooks' "software development tar pit" -- a perpetual cycle of abandonment or retooling of existing systems in pursuit of the latest fashionable technology trend, a cycle in which
"Coping with the relentless onslaught of technology change was akin to trying to climb a mudslide raging down a hill. You have to scramble with everything you've got to stay on top of it, and if you ever once stop to catch your breath, you get buried."
The complexity caused by adopting new technology for the sake of new technology is further exacerbated by the narrow focus and inexperience of many project leaders -- inexperience with mission-critical systems, with systems of larger scale than previously built, with software development disciplines, and with project management. A Standish Group International survey recently showed that 46% of IT projects were over budget and overdue -- and 28% failed altogether. That's normal, and the real failure figures are probably higher: great software managers and architects are rare, and it is those people who determine the success of a software project.
Walmart Brings Automation To Regional Distribution Centers (by Tyler Durden, Sunday, Jul 18, 2021)
The progressive press had a field day with "woke" Walmart's highly publicized February decision to hike wages for 425,000 workers to an average above $15 an hour. We doubt the obvious follow-up -- the ongoing stealthy replacement of many of its minimum-wage workers with machines -- will get the same amount of airtime.
As Chain Store Age reports, Walmart is applying artificial intelligence to the palletizing of products in its regional distribution centers. I.e., it is replacing thousands of workers with robots.
Since 2017, the discount giant has worked with Symbotic to optimize an automated technology
solution to sort, store, retrieve and pack freight onto pallets in its Brooksville, Fla.,
distribution center. Under Walmart's existing system, product arrives at one of its RDCs and is
either cross-docked or warehoused, while being moved or stored manually. When it's time for the
product to go to a store, a 53-foot trailer is manually packed for transit. After the truck
arrives at a store, associates unload it manually and place the items in the appropriate
places.
Leveraging the Symbotic solution, a complex algorithm determines how to store cases like puzzle pieces using high-speed mobile robots that operate with a precision that speeds the intake process and increases the accuracy of freight being stored for future orders. By using dense modular storage, the solution also expands building capacity.
In addition, by using palletizing robotics to organize and optimize freight, the Symbotic solution creates custom store- and aisle-ready pallets.
Why is Walmart doing this? Simple: According to CSA, "Walmart expects to save time, limit out-of-stocks and increase the speed of stocking and unloading." More importantly, the company hopes to further cut expenses and remove even more unskilled labor from its supply chain.
This solution follows tests of similar automated warehouse solutions at a Walmart consolidation center in Colton, Calif., and a perishable grocery distribution center in Shafter, Calif.
Walmart plans to implement this technology in 25 of its 42 RDCs.
"Though very few Walmart customers will ever see into our warehouses, they'll still be able
to witness an industry-leading change, each time they find a product on shelves," said Joe
Metzger, executive VP of supply chain operations at Walmart U.S. "There may be no way to solve
all the complexities of a global supply chain, but we plan to keep changing the game as we use
technology to transform the way we work and lead our business into the future."
But wait: wasn't this recent rise in real wages being propagandized by the MSM as a new boom for the working class in the USA until just a few days ago?
And in the drive-through lane at Checkers near Atlanta, requests for Big Buford burgers and
Mother Cruncher chicken sandwiches may be fielded not by a cashier in a headset, but by a
voice-recognition algorithm.
An increase in automation, especially in service industries, may prove to be an economic
legacy of the pandemic. Businesses from factories to fast-food outlets to hotels turned to
technology last year to keep operations running amid social distancing requirements and
contagion fears. Now the outbreak is ebbing in the United States, but the difficulty in hiring
workers -- at least at the wages that employers are used to paying -- is providing new momentum
for automation.
Technological investments that were made in response to the crisis may contribute to a
post-pandemic productivity boom, allowing for higher wages and faster growth. But some
economists say the latest wave of automation could eliminate jobs and erode bargaining power,
particularly for the lowest-paid workers, in a lasting way.
"Once a job is automated, it's pretty hard to turn back," said Casey Warman, an economist at
Dalhousie University in Nova Scotia who has studied automation in the pandemic .
The trend toward automation predates the pandemic, but it has accelerated at what is proving
to be a critical moment. The rapid reopening of the economy has led to a surge in demand for
waiters, hotel maids, retail sales clerks and other workers in service industries that had cut
their staffs. At the same time, government benefits have allowed many people to be selective in
the jobs they take. Together, those forces have given low-wage workers a rare moment of leverage, leading to higher pay, more generous benefits and other perks.
Automation threatens to tip the advantage back toward employers, potentially eroding those
gains. A
working paper published by the International Monetary Fund this year predicted that
pandemic-induced automation would increase inequality in coming years, not just in the United
States but around the world.
"Six months ago, all these workers were essential," said Marc Perrone, president of the
United Food and Commercial Workers, a union representing grocery workers. "Everyone was calling
them heroes. Now, they're trying to figure out how to get rid of them."
Checkers, like many fast-food restaurants, experienced a jump in sales when the pandemic
shut down most in-person dining. But finding workers to meet that demand proved difficult -- so
much so that Shana Gonzales, a Checkers franchisee in the Atlanta area, found herself back
behind the cash register three decades after she started working part time at Taco Bell while
in high school.
"We really felt like there has to be another solution," she said.
So Ms. Gonzales contacted Valyant AI, a Colorado-based start-up that makes voice recognition
systems for restaurants. In December, after weeks of setup and testing, Valyant's technology
began taking orders at one of Ms. Gonzales's drive-through lanes. Now customers are greeted by
an automated voice designed to understand their orders -- including modifications and special
requests -- suggest add-ons like fries or a shake, and feed the information directly to the
kitchen and the cashier.
The rollout has been successful enough that Ms. Gonzales is getting ready to expand the
system to her three other restaurants.
"We'll look back and say why didn't we do this sooner," she said.
The push toward automation goes far beyond the restaurant sector. Hotels, retailers, manufacturers and other businesses have all accelerated technological investments. In a survey of nearly 300 global companies by the World Economic Forum last year, 43 percent of businesses said they expected to reduce their work forces through new uses of technology.
Some economists see the increased investment as encouraging. For much of the past two
decades, the U.S. economy has struggled with weak productivity growth, leaving workers and
stockholders to compete over their share of the income -- a game that workers tended to lose.
Automation may harm specific workers, but if it makes the economy more productive, that could
be good for workers as a whole, said Katy George, a senior partner at McKinsey, the consulting
firm.
She cited the example of a client in manufacturing who had been pushing his company for
years to embrace augmented-reality technology in its factories. The pandemic finally helped him
win the battle: With air travel off limits, the technology was the only way to bring in an
expert to help troubleshoot issues at a remote plant.
"For the first time, we're seeing that these technologies are both increasing productivity,
lowering cost, but they're also increasing flexibility," she said. "We're starting to see real
momentum building, which is great news for the world, frankly."
Other economists are less sanguine. Daron Acemoglu of the Massachusetts Institute of
Technology said that many of the technological investments had just replaced human labor
without adding much to overall productivity.
In a recent working paper, Professor Acemoglu and a colleague concluded that "a significant portion of the rise in U.S. wage inequality over the last four decades has been driven by automation" -- and he said that trend had almost certainly accelerated in the pandemic.
"If we automated less, we would not actually have generated that much less output but we
would have had a very different trajectory for inequality," Professor Acemoglu said.
Ms. Gonzales, the Checkers franchisee, isn't looking to cut jobs. She said she would hire 30
people if she could find them. And she has raised hourly pay to about $10 for entry-level
workers, from about $9 before the pandemic. Technology, she said, is easing pressure on workers
and speeding up service when restaurants are chronically understaffed.
"Our approach is, this is an assistant for you," she said. "This allows our employee to
really focus" on customers.
Ms. Gonzales acknowledged she could fully staff her restaurants if she offered $14 to $15 an
hour to attract workers. But doing so, she said, would force her to raise prices so much that
she would lose sales -- and automation allows her to take another course.
Rob Carpenter, Valyant's chief executive, noted that at most restaurants, taking
drive-through orders is only part of an employee's responsibilities. Automating that task
doesn't eliminate a job; it makes the job more manageable.
"We're not talking about automating an entire position," he said. "It's just one task within
the restaurant, and it's gnarly, one of the least desirable tasks."
But technology doesn't have to take over all aspects of a job to leave workers worse off. If
automation allows a restaurant that used to require 10 employees a shift to operate with eight
or nine, that will mean fewer jobs in the long run. And even in the short term, the technology
could erode workers' bargaining power.
"Often you displace enough of the tasks in an occupation and suddenly that occupation is no
more," Professor Acemoglu said. "It might kick me out of a job, or if I keep my job I'll get
lower wages."
At some businesses, automation is already affecting the number and type of jobs available.
Meltwich, a restaurant chain that started in Canada and is expanding into the United States,
has embraced a range of technologies to cut back on labor costs. Its grills no longer require
someone to flip burgers -- they grill both sides at once, and need little more than the press
of a button.
"You can pull a less-skilled worker in and have them adapt to our system much easier," said
Ryan Hillis, a Meltwich vice president. "It certainly widens the scope of who you can have
behind that grill."
With more advanced kitchen equipment, software that allows online orders to flow directly to
the restaurant and other technological advances, Meltwich needs only two to three workers on a
shift, rather than three or four, Mr. Hillis said.
Such changes, multiplied across thousands of businesses in dozens of industries, could
significantly change workers' prospects. Professor Warman, the Canadian economist, said
technologies developed for one purpose tend to spread to similar tasks, which could make it
hard for workers harmed by automation to shift to another occupation or industry.
"If a whole sector of labor is hit, then where do those workers go?" Professor Warman said.
Women, and to a lesser degree people of color, are likely to be disproportionately affected, he
added.
The grocery business has long been a source of steady, often unionized jobs for people
without a college degree. But technology is changing the sector. Self-checkout lanes have
reduced the number of cashiers; many stores have simple robots to patrol aisles for spills and
check inventory; and warehouses have become increasingly automated. Kroger in April opened a
375,000-square-foot warehouse with more than 1,000 robots that bag groceries for delivery
customers. The company is even experimenting with delivering groceries by drone.
Other companies in the industry are doing the same. Jennifer Brogan, a spokeswoman for Stop
& Shop, a grocery chain based in New England, said that technology allowed the company to
better serve customers -- and that it was a competitive necessity.
"Competitors and other players in the retail space are developing technologies and
partnerships to reduce their costs and offer improved service and value for customers," she
said. "Stop & Shop needs to do the same."
In 2011, Patrice Thomas took a part-time job in the deli at a Stop & Shop in Norwich,
Conn. A decade later, he manages the store's prepared foods department, earning around $40,000
a year.
Mr. Thomas, 32, said that he wasn't concerned about being replaced by a robot anytime soon,
and that he welcomed technologies making him more productive -- like more powerful ovens for
rotisserie chickens and blast chillers that quickly cool items that must be stored cold.
But he worries about other technologies -- like automated meat slicers -- that seem to
enable grocers to rely on less experienced, lower-paid workers and make it harder to build a
career in the industry.
"The business model we seem to be following is we're pushing toward automation and we're not
investing equally in the worker," he said. "Today it's, 'We want to get these robots in here to
replace you because we feel like you're overpaid and we can get this kid in there and all he
has to do is push this button.'"
Mission creep is the gradual or incremental expansion of an intervention, project or mission beyond its original scope, focus or goals -- a ratchet effect spawned by initial success. Mission creep is usually considered undesirable because each success breeds more ambitious interventions until a final failure happens, stopping the intervention entirely.
"The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in
the way food comes to our doors and our tables." The semiautonomous vehicles were
engineered by Kiwibot, a company started in 2017 to game-change the food delivery
landscape...
In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program
funded by the Knight Foundation. The program is driven to understand how residents and
consumers will interact with this type of technology, especially as the trend of robot
servers grows around the country.
And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped
on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and
several others, in the next couple of weeks...
"Our program is completely focused on the residents of Miami-Dade County and the way
they interact with this new technology. Whether it's interacting directly or just sharing
the space with the delivery bots,"
said Carlos Cruz-Casas, with the county's Department of Transportation...
Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are
placed on the front, back and sides of the vehicle, which the supervisors can view on a
computer screen. [A spokesperson says later in the article "there is always a remote and
in-field team looking for the robot."] If crossing the street is necessary, the robot
will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to
allow deliveries up to a mile and a half away so robots can make it to their destinations in
30 minutes or less.
Earlier Kiwi tested its sidewalk-travelling robots around the University of California at Berkeley, where at least one of its robots burst into flames. But the Sun-Sentinel reports that "In about six months, at least 16 restaurants came on board making nearly 70,000 deliveries...
"Kiwibot now offers their robotic delivery services in other markets such as Los Angeles
and Santa Monica by working with the Shopify app to connect businesses that want to employ
their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is
making it possible for Miami-Dade County restaurants to sign on for free."
A video
shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog
on a leash, and at least one potential restaurant customer looking forward to no longer
having to tip human food-delivery workers.
An average but still useful enumeration of factors that should be considered. One question stands out: "Is that SaaS app really cheaper than more headcount?" :-)
Notable quotes:
"... You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter. ..."
"... Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question. ..."
"... Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations. ..."
When introducing a new tool, programming language, or dependency into your environment, what
steps do you take to evaluate it? In this article, I will walk through a six-question framework
I use to make these determinations.
What problem am I trying to solve?
We all get caught up in the minutiae of the immediate problem at hand. An honest, critical
assessment helps divulge broader root causes and prevents micro-optimizations.
Let's say you are experiencing issues with your configuration management system. Day-to-day
operational tasks are taking longer than they should, and working with the language is
difficult. A new configuration management system might alleviate these concerns, but make sure
to take a broader look at this system's context. Maybe switching from virtual machines to
immutable containers eases these issues and more across your environment while being an
equivalent amount of work. At this point, you should explore the feasibility of more
comprehensive solutions as well. You may decide that this is not a feasible project for the
organization at this time due to a lack of organizational knowledge around containers, but
conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next
quarter.
This intellectual exercise helps you drill down to the root causes and solve core issues,
not the symptoms of larger problems. This is not always going to be possible, but be
intentional about making this decision.
Now that we have identified the problem, it is time for critical evaluation of both
ourselves and the selected tool.
A particular technology might seem appealing because it is new, because you read a cool blog post about it, or because you want to be the one giving a conference talk. Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question.
What am I giving up?
The tool will, in fact, solve the problem, and we know we're solving the right
problem, but what are the tradeoffs?
These considerations can be purely technical. Will the lack of observability tooling prevent
efficient debugging in production? Does the closed-source nature of this tool make it more
difficult to track down subtle bugs? Is managing yet another dependency worth the operational
benefits of using this tool?
Additionally, include the larger organizational, business, and legal contexts that you
operate under.
Are you giving up control of a critical business workflow to a third-party vendor? If that
vendor doubles their API cost, is that something that your organization can afford and is
willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of
proprietary information? Does the software licensing make this difficult to use
commercially?
While not simple questions to answer, taking the time to evaluate this upfront will save you
a lot of pain later on.
Is the project or vendor healthy?
This question comes with the addendum "for the balance of your requirements." If you only
need a tool to get your team over a four to six-month hump until Project X is complete,
this question becomes less important. If this is a multi-year commitment and the tool drives a
critical business workflow, this is a concern.
When going through this step, make use of all available resources. If the solution is open
source, look through the commit history, mailing lists, and forum discussions about that
software. Does the community seem to communicate effectively and work well together, or are
there obvious rifts between community members? If part of what you are purchasing is a support
contract, use that support during the proof-of-concept phase. Does it live up to your
expectations? Is the quality of support worth the cost?
Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as
well. Something might hit the front page of a news aggregator and receive attention for a few
days, but a deeper look might reveal that only a couple of core developers are actually working
on a project, and they've had difficulty finding outside contributions. Maybe a tool is open
source, but a corporate-funded team drives core development, and support will likely cease if
that organization abandons the project. Perhaps the API has changed every six months, causing a
lot of pain for folks who have adopted earlier versions.
What are the risks?
As a technologist, you understand that nothing ever goes as planned. Networks go down,
drives fail, servers reboot, rows in the data center lose power, entire AWS regions become
inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.
Ask yourself how this tooling could fail and what the impact would be. If you are adding a
security vendor product to your CI/CD pipeline, what happens if the vendor goes
down?
This brings up both technical and business considerations. Do the CI/CD pipelines simply
time out because they can't reach the vendor, or do you have it "fail open" and allow the
pipeline to complete with a warning? This is a technical problem but ultimately a business
decision. Are you willing to go to production with a change that has bypassed the security
scanning in this scenario?
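The "fail open" versus "fail closed" choice can be made explicit in the pipeline code itself. Below is a hypothetical Python sketch (the scanner endpoint and the environment flag are my assumptions, not any particular vendor's API) showing a wrapper that either blocks the build or lets it continue with a loud warning when the vendor is unreachable.

    # Hypothetical CI step: run a third-party security scan and decide what to
    # do when the vendor cannot be reached. FAIL_OPEN is a policy knob that the
    # business, not just the engineers, should sign off on.
    import os
    import sys
    import urllib.request
    import urllib.error

    SCANNER_URL = os.environ.get("SCANNER_URL", "https://scanner.example.com/scan")
    FAIL_OPEN = os.environ.get("SCAN_FAIL_OPEN", "false").lower() == "true"

    def run_scan(artifact_path: str) -> int:
        try:
            with open(artifact_path, "rb") as f:
                req = urllib.request.Request(SCANNER_URL, data=f.read(), method="POST")
                with urllib.request.urlopen(req, timeout=30) as resp:
                    verdict = resp.read().decode()
            if "fail" in verdict.lower():
                print("Security scan reported findings; blocking the build.")
                return 1
            return 0
        except (urllib.error.URLError, TimeoutError) as exc:
            if FAIL_OPEN:
                print(f"WARNING: scanner unreachable ({exc}); continuing (fail-open policy).")
                return 0
            print(f"ERROR: scanner unreachable ({exc}); blocking the build (fail-closed policy).")
            return 1

    if __name__ == "__main__":
        sys.exit(run_scan(sys.argv[1]))

Writing the policy down as a single, reviewable flag makes the business decision visible instead of burying it in pipeline timeouts.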
Obviously, this task becomes more difficult as we increase the complexity of the system.
Thankfully, sites like k8s.af consolidate example
outage scenarios. These public postmortems are very helpful for understanding how a piece of
software can fail and how to plan for that scenario.
What are the costs?
The primary considerations here are employee time and, if applicable, vendor cost. Is that
SaaS app cheaper than more headcount? If you save each developer on the team two hours a day
with that new CI/CD tool, does it pay for itself over the next fiscal year?
Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral
if you save the dev team a couple of hours a day, but you're removing a huge blocker in their
daily workflow, and they would be much happier for it. That happiness is likely worth the
financial cost. Onboarding new developers is costly, so don't underestimate the value of
increased retention when making these calculations.
I hope you've found this framework insightful, and I encourage you to incorporate it into
your own decision-making processes. There is no one-size-fits-all framework that works for
every decision. Don't forget that, sometimes, you might need to go with your gut and make a
judgment call. However, having a standardized process like this will help differentiate between
those times when you can critically analyze a decision and when you need to make that leap.
The working assumption should "Nobody inclusing myself will ever reuse this code". It is very reastic assumption as programmers
are notoriously resultant to reuse the code from somebody elses. And you programming skills evolve you old code will look pretty
foreign to use.
"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)."
- Roberto Waltman
This week on our show we discuss this quote. Does OOP encourage too many layers in code?
I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of
abstraction. I wrote about this before in
the false abstraction antipattern
So what is your story of there being too many layers in the code? Or do you disagree with the quote, or us?
Bertil Muth • Dec 9 '18
I once worked on a project where the codebase had over a hundred classes for quite a simple job. The programmer was no longer available and had used almost every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessarily thick lasagne is a symptom of devs looking for a one-size-fits-all solution.
Nested Software • Dec 9 '18 • Edited on Dec 16
I think there's a very pervasive mentality of "I must to use these tools, design patterns, etc." instead of "I need
to solve a problem" and then only use the tools that are really necessary. I'm not sure where it comes from, but there's a kind of
brainwashing that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a
fundamental problem in software development...Nested Software "¢
Dec 9 '18
I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build
up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class
with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always
a better fit for re-using code.
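A tiny, made-up illustration of that last point: instead of a chain like Report -> CsvReport -> CompressedCsvReport -> EncryptedCompressedCsvReport, the behaviors can be composed at the call site.

    # Composition instead of a deep inheritance chain: each behavior is a small
    # callable, and a Report is assembled from whichever steps are needed.
    import gzip

    def to_csv(rows):
        return "\n".join(",".join(str(c) for c in row) for row in rows)

    def compress(data: str) -> bytes:
        return gzip.compress(data.encode())

    class Report:
        def __init__(self, rows, steps):
            self.rows = rows
            self.steps = steps          # ordered list of transformations

        def render(self):
            result = self.rows
            for step in self.steps:
                result = step(result)
            return result

    # Pick the behaviors you need, in the order you need them -- no subclass zoo.
    plain = Report([(1, "a"), (2, "b")], steps=[to_csv])
    zipped = Report([(1, "a"), (2, "b")], steps=[to_csv, compress])
    print(plain.render())
    print(len(zipped.render()), "bytes compressed")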
"... Main drivers of this overcomplexity are bloated states and economy dominated by corporations. Both states and corporations have IT systems today "and the complexity of those IT systems has to reflect the complexity of organisms and processes they try to cover. " ..."
Someone has sent me a link to a quite
emotional but interesting article by Tim Bray on why the world of enterprise systems delivers so many failed projects and sucky
software while the world of web startups excels at producing great software fast.
Tim makes some very valid points about technology, culture and the approach to running projects. It is true that huge upfront specs, fixed-bid contracts and the overall waterfall approach are indeed the culprits behind most failed IT projects, and that agile, XP and other key trends of recent years can help. However, I don't think they can really cure the problem, because we are facing a deeper issue here: the overall overcomplexity of our civilization.
The main drivers of this overcomplexity are bloated states and an economy dominated by corporations. Both states and corporations have IT systems today, and the complexity of those IT systems has to reflect the complexity of the organisms and processes they try to cover.
The IT system for a national health care system or a state-run compulsory social security "insurance" is a very good example. It must be a complex mess because what it is trying to model and run is a complex, overbloated mess -- in most cases a constantly changing mess. And it can't be launched early, because it is useless unless it covers the whole scope of what it is supposed to do: since most of what it covers is regulations and laws, you can't deliver a system that meets half of the regulations, or 10% -- it can't be used. By the very nature of the domain the system has to be launched as a finished whole.
Plus, on top of all that, comes the scale. If you imagine completely privatized health care, no single system will ever cover all citizens -- each doctor, hospital, insurer etc. will cover just its clients, a subset of the population. A system like the NHS has to handle all of the UK's population by design.
The same problem exists with corporations, especially those that have been around for long (by long I mean decades, not years): scale and mentality. You just can't manage 75 thousand people easily, especially if they are spread around the globe, in a simple and agile way.
Just think of all the accounting requirements global corporations have to handle with their IT systems -- and this is just the tip of the iceberg. The whole world economy floats in a sea of legislation -- the legislative diarrhea of the last decades produced a legal swamp which is a nightmare to understand, let alone model a system to comply with. For a global corporation, multiply that by all the countries it operates in and stick some international regulations on top. This is something corporate systems have to cope with. What is also important -- much of that overcomplexity is computer-driven: it would not have been possible if not for the existence of IT systems and the computers that run them.
Take VAT -- it is so complex I always wonder what idiots gave the Nobel prize to the moron who invented it (well, I used to wonder about that when the Nobel prize had any credibility). Clearly, implementing it is completely impossible without computers and systems everywhere.
The same goes for the legal diarrhea I mentioned -- I think it can be largely attributed to Microsoft Word. Ever wondered why the EU Constitution (now disguised as the "Lisbon Treaty") has hundreds of pages while the US Constitution is simple and elegant? Well, they couldn't possibly have written a couple-hundred-page document with a quill pen, which forced them to produce something concise.
But going back to the key issue of whether corporate IT systems can be better: they can, but a deeper shift in thinking is needed. Instead of creating huge, complex systems, corporate IT should rather be a cloud of simple, small systems, each built and maintained to provide just one simple service (exactly what web startups are doing -- each of them provides a simple service, and together they create a complex ecosystem). However, this shift would have to occur on the organizational level too -- large organizations with complex rules would have to be replaced with small, focused entities with simple rules for interaction between them.
But to get there we would need a worldwide "agile adoption" reaching well beyond IT. And that means a huge political change, which is nowhere on the horizon. Unless, of course, one other enabler of our civilization's overcomplexity fades: cheap, abundant energy.
"The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in
the way food comes to our doors and our tables." The semiautonomous vehicles were
engineered by Kiwibot, a company started in 2017 to game-change the food delivery
landscape...
In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program
funded by the Knight Foundation. The program is driven to understand how residents and
consumers will interact with this type of technology, especially as the trend of robot
servers grows around the country.
And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped
on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and
several others, in the next couple of weeks...
"Our program is completely focused on the residents of Miami-Dade County and the way
they interact with this new technology. Whether it's interacting directly or just sharing
the space with the delivery bots,"
said Carlos Cruz-Casas, with the county's Department of Transportation...
Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are
placed on the front, back and sides of the vehicle, which the supervisors can view on a
computer screen. [A spokesperson says later in the article "there is always a remote and
in-field team looking for the robot."] If crossing the street is necessary, the robot
will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to
allow deliveries up to a mile and a half away so robots can make it to their destinations in
30 minutes or less.
Earlier Kiwi tested its sidewalk-travelling robots around the University of California at
Berkeley, where
at least one of its robots burst into flames . But the Sun-Sentinel reports that "In
about six months, at least 16 restaurants came on board making nearly 70,000
deliveries...
"Kiwibot now offers their robotic delivery services in other markets such as Los Angeles
and Santa Monica by working with the Shopify app to connect businesses that want to employ
their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is
making it possible for Miami-Dade County restaurants to sign on for free."
A video
shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog
on a leash, and at least one potential restaurant customer looking forward to no longer
having to tip human food-delivery workers.
Customers wouldn't have to train the algorithm on their own boxes because the robot was made
to recognize boxes of different sizes, textures and colors. For example, it can recognize both
shrink-wrapped cases and cardboard boxes.
... Stretch is part of a growing market of warehouse robots made by companies such as 6
River Systems Inc., owned by e-commerce technology company Shopify Inc., Locus Robotics Corp. and Fetch
Robotics Inc. "We're anticipating exponential growth (in the market) over the next five years,"
said Dwight Klappich, a supply chain research vice president and fellow at tech research firm
Gartner Inc.
As fast-food restaurants and small businesses struggle to find low-skilled workers to staff
their kitchens and cash registers, America's biggest fast-food franchise is seizing the
opportunity to field test a concept it has been working toward for some time: 10 McDonald's
restaurants in Chicago are testing automated drive-thru ordering using new artificial
intelligence software that converts voice orders for the computer.
McDonald's CEO Chris Kempczinski said Wednesday during an appearance at Alliance Bernstein's
Strategic Decisions conference that the new voice-order technology is about 85% accurate and
can take 80% of drive-thru orders. The company obtained the technology during its 2019
acquisition of Apprente.
The introduction of automation and artificial intelligence into the industry will eventually
result in entire restaurants controlled without humans - that could happen as early as the end
of this decade. As for McDonald's, Kempczinski said the technology will likely take more than
one or two years to implement.
"Now there's a big leap from going to 10 restaurants in Chicago to 14,000 restaurants
across the US, with an infinite number of promo permutations, menu permutations, dialect
permutations, weather -- and on and on and on, " he said.
McDonald's is also exploring automation of its kitchens, but that technology likely won't be ready for another five years or so - even though it's capable of being introduced sooner.
McDonald's has also been looking into automating more of the kitchen, such as its fryers
and grills, Kempczinski said. He added, however, that that technology likely won't roll out
within the next five years, even though it's possible now.
"The level of investment that would be required, the cost of investment, we're nowhere
near to what the breakeven would need to be from the labor cost standpoint to make that a
good business decision for franchisees to do," Kempczinski said.
And because restaurant technology is moving so fast, Kempczinski said, McDonald's won't
always be able to drive innovation itself or even keep up. The company's current strategy is
to wait until there are opportunities that specifically work for it.
"If we do acquisitions, it will be for a short period of time, bring it in house,
jumpstart it, turbo it and then spin it back out and find a partner that will work and scale
it for us," he said.
On Friday, Americans will receive their first broad-based update on non-farm employment in
the US since last month's report, which missed expectations by a wide margin, sparking
discussion about whether all these "enhanced" monetary benefits from federal stimulus programs
have kept workers from returning to the labor market.
Over-complexity describes a tangible or intangible entity that is more complex than it needs to be relative to its use and purpose.
Complexity can be measured as the amount of information that is required to fully document an entity. A technology that can be fully
described in 500 words is far less complex than a technology that requires at least 5 million words to fully specify. The following
are common types of over-complexity.
Accidental Complexity: Accidental complexity is any complexity beyond the minimum required to meet a need. This can be compared to essential complexity, which describes the simplest solution possible for a given need and level of quality. For example, the essential complexity for a bridge that is earthquake-resistant and inexpensive to maintain might be contained in an architectural design of 15 pages. If a competing design were 100 pages with the same level of quality and functionality, that design can be considered overly complex. (A minimal code sketch of this distinction follows these definitions.)
Overthinking: A decision-making process that is overly complex, such that it is an inefficient use of time and other resources. Overthinking can also result in missed opportunities. For example, a student who spends three years thinking about what afterschool activity they would like to join instead of just trying a few things to see how they work out. By the time the student finally makes a decision to join a soccer team, they find the other players are far more advanced than themselves.
Gold Plating: Adding additional functions, features and quality to something that adds little or no value. For example, a designer of an air conditioning unit who adds personalized settings for up to six individuals to the user interface. This requires people to install an app to use the air conditioner, such that users typically view the feature as an annoyance. The feature is seldom used and some customers actively avoid the product based on reviews that criticise the feature. The feature also adds to the development cost and unit cost of the product, making it less competitive in the market.
Big Ball of Mud: A big ball of mud is a design that is the product of many incremental changes that aren't coordinated within a common architecture and design. A common example is a city that emerges without any building regulations or urban planning. Big ball of mud is also common in software, where developers reinvent the same services such that code becomes extremely complex relative to its use.
Incomprehensible Communication: Communication complexity is measured by how long it takes you to achieve your communication objectives with an audience. It is common for communication to be overly indirect, with language that is unfamiliar to an audience such that little gets communicated. Communication complexity is also influenced by how interesting the audience finds your speech, text or visualization. For example, an academic who uses needlessly complex speech out of a sense of elitism or fear of being criticized may transfer little knowledge to students with a lecture, such that it can be viewed as overly complex.
Notes: Over-complexity can have value for quality of life and culture. If the world were nothing but minimized, plain functionality, it would be less interesting.
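Here is a deliberately exaggerated, made-up software analogue of the bridge example above: both snippets add two numbers, but the first buries the essential one-line solution under accidental layers of factories and strategy objects.

    # Accidental complexity: an "enterprise" adder with factories and strategies.
    class AdditionStrategy:
        def execute(self, a, b):
            return a + b

    class StrategyFactory:
        def create(self, name):
            if name == "addition":
                return AdditionStrategy()
            raise ValueError(f"unknown strategy: {name}")

    class Calculator:
        def __init__(self, factory):
            self._factory = factory

        def calculate(self, name, a, b):
            return self._factory.create(name).execute(a, b)

    print(Calculator(StrategyFactory()).calculate("addition", 2, 3))

    # Essential complexity: the same functionality, stated directly.
    def add(a, b):
        return a + b

    print(add(2, 3))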
"... Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to maintain code all in the name of "clarity". ..."
"... Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint. Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance. ..."
Anyone who claims to be even remotely versed in computer science knows what "spaghetti code" is. That type of code still sadly exists. But today we also have, for lack of a better term -- and sticking to the pasta metaphor -- "lasagna code".
Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard-to-maintain code, all in the name of "clarity". It drives me nuts to see how bad some code today is. And then you come across how small Turbo Pascal v3 was, and after comprehending that it was a full-blown Pascal compiler, one wonders why applications and compilers today are all so massive.
Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint.
Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles
which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance.
Back when I was starting out in computer science I thought by today we'd be writing a few lines of code to accomplish much. Instead,
we write hundreds of thousands of lines of code to accomplish little. It's so sad it's enough to make one cry, or just throw your
hands in the air in disgust and walk away.
There are bright spots. There are people out there that code small and beautifully. But they're becoming rarer, especially when
someone who seemed to have thrived on writing elegant, small, beautiful code recently passed away. Dennis Ritchie understood you
could write small programs that did a lot. He comprehended that the algorithm is at the core of what you're trying to accomplish.
Create something beautiful and well thought out and people will examine it forever, such as Thompson's version of Regular Expressions!
"... Stephen Hawking predicted this would be " the century of complexity ." He was talking about theoretical physics, but he was dead right about technology... ..."
"... Any human mind can only encompass so much complexity before it gives up and starts making slashing oversimplifications with an accompanying risk of terrible mistakes. ..."
...Stephen Hawking predicted this would be "
the century of complexity ." He was talking about theoretical physics, but he was dead
right about technology...
Let's try to define terms. How can we measure complexity? Seth Lloyd of MIT, in a paper which drily
begins "The world has grown more complex recently, and the number of ways of measuring
complexity has grown even faster," proposed three key categories: difficulty of description,
difficulty of creation, and degree of organization. Using those three criteria, it seems
apparent at a glance that both our societies and our technologies are far more complex than they ever have been, and are rapidly growing even more so.
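Of Lloyd's three categories, "difficulty of description" is the easiest to play with in code. The sketch below uses compressed size as a crude, informal proxy for it (this is my own illustration, not Lloyd's method): regular structures compress down to short descriptions, while disordered ones do not.

    # Crude proxy for "difficulty of description": the length of a compressed
    # representation. Highly regular data has a short description; random data
    # is roughly as long as itself.
    import os
    import zlib

    def description_length(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    regular = b"pipe-filter-" * 1000        # repetitive, easy to describe
    disordered = os.urandom(len(regular))   # same size, but no structure

    print("regular:   ", description_length(regular), "bytes")
    print("disordered:", description_length(disordered), "bytes")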
The thing is, complexity is the enemy. Ask any engineer ... especially a security engineer. Ask the ghost of Steve Jobs. Adding complexity to solve a problem may bring a short-term benefit, but it invariably comes with an ever-accumulating long-term cost. Any human mind can only encompass so much complexity before it gives up and starts making slashing oversimplifications with an accompanying risk of terrible mistakes.
You may have noted that those human minds empowered to make major decisions are often those
least suited to grappling with nuanced complexity. This itself is arguably a lingering effect
of growing complexity. Even the simple concept of democracy has grown highly complex (party
registration, primaries, fundraising, misinformation, gerrymandering, voter rolls, hanging
chads, voting machines), and mapping a single vote for a representative to dozens if not
hundreds of complex issues is impossible, even if you're willing to consider all those issues
in depth, which most people aren't.
Complexity
theory is a rich field, but it's unclear how it can help with ordinary people trying to
make sense of their world. In practice, people deal with complexity by coming up with
simplified models close enough to the complex reality to be workable. These models can be
dangerous ("everyone just needs to learn to code," "software does the same thing every time it
is run," "democracies are benevolent"), but they were useful enough to make fitful progress.
In software, we at least recognize this as a problem. We pay lip service to the glories of
erasing code, of simplifying functions, of eliminating side effects and state, of deprecating
complex APIs, of attempting to scythe back the growing thickets of complexity. We call
complexity "technical debt" and realize that at least in principle it needs to be paid down
someday.
"Globalization should be conceptualized as a series of adapting and co-evolving global
systems, each characterized by unpredictability, irreversibility and co-evolution. Such systems
lack finalized 'equilibrium' or 'order'; and the many pools of order heighten
overall disorder," to
quote the late John Urry. Interestingly, software could be viewed that way as well,
interpreting, say, "the Internet" and "browsers" and "operating systems" and "machine learning"
as global software systems.
Software is also something of a best possible case for making complex things simpler. It is
rapidly distributed worldwide. It is relatively devoid of emotional or political axegrinding.
(I know, I know. I said "relatively.") There are reasonably objective measures of performance
and simplicity. And we're all at least theoretically incentivized to simplify it.
So if we can make software simpler (both its tools and dependencies, and its actual end
products), then that suggests we have at least some hope of keeping the world simple enough such
that crude mental models will continue to be vaguely useful. Conversely, if we can't, then it
seems likely that our reality will just keep growing more complex and unpredictable, and we
will increasingly live in a world of whole flocks of black swans. I'm not sure whether to be
optimistic or not. My mental model, it seems, is failing me.
Since the dawn of time (before software, there was only darkness), there has been one
constant: businesses want to build software cheaper and faster.
It is certainly an understandable and laudable goal especially if you've spent any time
around software developers. It is a goal that every engineer should support wholeheartedly, and
we should always strive to create things as efficiently as possible, given the constraints of
our situation.
However, the truth is we often don't. It's not intentional, but over time, we get waylaid by
unforeseen complexities in building software and train ourselves to seek out edge cases,
analysis gaps, all of the hidden repercussions that can result from a single bullet point of
requirements.
We get enthralled by the maelstrom of complexity and the mental puzzle of engineering
elegant solutions: Another layer of abstraction! DRY it up! Separate the concerns! Composition
over inheritance! This too is understandable, but in the process, we often lose sight of the
business problems being solved and forget that managing complexity is the second most
important responsibility of software developers.
So how did we get here?
Software has become easier in certain ways.
Over the last few decades, our industry has been very successful at reducing the amount of
custom code it takes to write most software.
Much of this reduction has been accomplished by making programming languages more
expressive. Languages such as Python, Ruby, or JavaScript can take as little as one third as
much code as C in order to implement similar functionality. C gave us similar advantages
over writing in assembler. Looking forward to the future, it is unlikely that language design
will give us the same kinds of improvements we have seen over the last few decades.
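As a rough illustration of that expressiveness claim (this is only a sketch, not a measured comparison, and the file name below is a placeholder), the little word-frequency counter that follows takes about a dozen lines of Python; an equivalent C program would typically also need manual tokenizing, a hand-rolled hash table, and explicit memory management.

    # A rough sketch (not a benchmark): count word frequencies in a text file.
    # In Python this is a handful of lines; a comparable C program would also need
    # manual tokenizing, a hand-written hash table, and explicit memory management.
    from collections import Counter

    def word_frequencies(path):
        # Read the file, lowercase it, split on whitespace, and count occurrences.
        with open(path, encoding="utf-8") as f:
            return Counter(f.read().lower().split())

    if __name__ == "__main__":
        # "example.txt" is a placeholder file name used only for illustration.
        for word, count in word_frequencies("example.txt").most_common(10):
            print(f"{count:6d}  {word}")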
But reducing the amount of code it takes to build software involves many other avenues that
don't require making languages more expressive. By far the biggest gain we have made in this
over the last two decades is open source software (OSS). Without individuals and companies
pouring money into software that they give freely to the community, much of what we build today
would require an order of magnitude more cost and effort, if it were possible at all.
These projects have allowed us to tackle problems by standing on the shoulders of giants,
leveraging tools to allow us to focus more of our energy on actually solving business problems,
rather than spending time building infrastructure.
That said, businesses are complex. Ridiculously complex, and only getting more so. OSS is
great for producing frameworks and tools that we can use to build systems on top of, but for
the most part, OSS has to tackle problems shared by a large number of people in order to gain
traction. Because of that, most open source projects have to either be relatively generic or be
in a very popular niche. Therefore, most of these tools are great platforms on which to build
out systems, but at the end of the day, we are still left to build all of the business logic
and interfaces in our increasingly complex and demanding systems.
So what we are left with is a stack that looks something like this (for a web
application)...
That "Our Code" part ends up being enormously complex, since it mirrors the business and its
processes. If we have custom business logic, and custom processes, then we are left to build
the interfaces, workflow, and logic that make up our applications. Sure, we can try to find
different ways of recording that logic (remember business rules engines?), but at the end of
the day, no one else is going to write the business logic for your business. There really
doesn't seem to be a way around that... at least not until the robots come and save us
all from having to do any work.
Don't like code? Well, how about Low-Code?
So if we have to develop the interfaces, workflow, and logic that make up our applications,
then it sounds like we are stuck, right? To a certain extent, yes, but we have a few
options.
To most developers, software equals code, but that isn't reality. There are many ways to
build software, and one of those ways is through using visual tools. Before the web, visual
development and RAD tools had a much bigger place in the market. Tools like PowerBuilder,
Visual Foxpro, Delphi, VB, and Access all had visual design capabilities that allowed
developers to create interfaces without typing out any code.
These tools spanned the spectrum in terms of the amount of code you needed to write, but in
general, you designed your app visually and then ended up writing a ton of code to implement
the logic of your app. In many cases you still ended up programmatically manipulating the
interface, since interfaces built using these tools often ended up being very static. However,
for a huge class of applications, these tools allowed enormous productivity gains over the
alternatives, mostly at the cost of flexibility.
The prevalence of these tools might have waned since the web took over, but companies'
desire for them has not, especially since the inexorable march of software demand continues.
The latest trend that is blowing across the industry is "low code" systems. Low code
development tools are a modern term put on the latest generation of drag and drop software
development tools. The biggest difference between these tools and their brethren from years
past is that they are now mostly web (and mobile) based and are often hosted platforms in the
cloud.
And many companies are jumping all over these platforms. Vendors like Salesforce (App
Cloud), Outsystems, Mendix, or Kony are promising the ability to create applications many times
faster than "traditional" application development. While many of their claims are probably
hyperbole, there likely is a bit of truth to them as well. For all of the downsides of
depending on platforms like these, they probably do result in certain types of applications
being built faster than traditional enterprise projects using .NET or Java.
So, what is the problem?
Well, a few things. First is that experienced developers often hate these tools. Most
Serious Developers™ like to write Real Software™ with
Real Code™. I know that might sound like I'm pandering to a bunch of whiney
babies (and maybe I am a bit), but if the core value you deliver is technology, it is rarely a
good idea to adopt tools that your best developers don't want to work with.
Second is that folks like me look at these walled platforms and say "nope, not building my
application in there." That is a legitimate concern and the one that bothers me the most.
If you built an application a decade ago with PHP, then that application might be showing
its age, but it could still be humming along right now just fine. The language and ecosystem
are open source, and maintained by the community. You'll need to keep your application up to
date, but you won't have to worry about a vendor deciding it isn't worth their time to support
you anymore.
...folks like me look at these walled platforms and say "nope, not building my
application in there." That is a legitimate concern and the one that bothers me the most.
If you picked a vendor 10 years ago who had a locked down platform, then you might be forced
into a rewrite if they shut down or change their tooling too much (remember
Parse?). Or even worse, your system gets stuck on a platform that freezes and no longer
serves your needs.
There are many reasons to be wary of these types of platforms, but for many businesses, the
allure of creating software with less effort is just too much to pass up. The complexity of
software continues on, and software engineers unfortunately aren't doing ourselves any favors
here.
What needs to change?
There are productive platforms out there that allow us to build Real
Software™ with Real Code™, but unfortunately our industry
right now is far too preoccupied with following the lead of the big tech giants to realize that
sometimes their tools don't add a lot of value to our projects.
I can't tell you the number of times I've had a developer tell me that building something as
a single page application (SPA) adds no overhead versus just rendering HTML. I've heard
developers say that every application should be written on top of a NoSQL datastore, and that
relational databases are dead. I've heard developers question why every application isn't
written using CQRS and Event Sourcing.
It is that kind of thought process and default overhead that is leading companies to
conclude that software development is just too expensive. You might say, "But event sourcing is
so elegant! Having a SPA on top of microservices is so clean!" Sure, it can be, but not when
you're the person writing all ten microservices. It is that kind of additional complexity that
is often so unnecessary.
We, as an industry, need to find ways to simplify the process of building software, without
ignoring the legitimate complexities of businesses. We need to admit that not every application
out there needs the same level of interface sophistication and operational scalability as
Gmail. There is a whole world of apps out there that need well thought-out interfaces,
complicated logic, solid architectures, smooth workflows, etc., but don't need
microservices or AI or chatbots or NoSQL or Redux or Kafka or Containers or whatever the tool
du jour is.
A lot of developers right now seem to be so obsessed with the technical wizardry of it all
that they can't step back and ask themselves if any of this is really needed.
It is like the person on MasterChef who comes in and sells themselves as the molecular
gastronomist. They separate ingredients into their constituent parts, use scientific methods of
pairing flavors, and then apply copious amounts of CO2 and liquid nitrogen to produce the most
creative foods you've ever seen. And then they get kicked off after an episode or two because
they forget the core tenet of most cooking, that food needs to taste good. They seem genuinely
surprised that no one liked their fermented fennel and mango-essence pearls served over cod
with anchovy foam.
Our obsession with flexibility, composability, and cleverness is causing us a lot of pain
and pushing companies away from the platforms and tools that we love. I'm not saying those
tools I listed above don't add value somewhere; they arose in response to real pain points,
albeit typically problems encountered by large companies operating systems at enormous
scale.
What I'm saying is that we need to head back in the direction of simplicity and start
actually creating things in a simpler way, instead of just constantly talking about simplicity.
Maybe we can lean on more integrated tech stacks to provide out of the box patterns and
tools to allow software developers to create software more efficiently.
...we are going to push more and more businesses into the arms of "low code"
platforms and other tools that promise to reduce the cost of software by dumbing it down and
removing the parts that brought us to it in the first place.
We need to stop pretending that our 20th line-of-business application is some unique
tapestry that needs to be carefully hand-sewn.
Staying Focused on Simplicity
After writing that, I can already hear a million developers sharpening their pitchforks, but
I believe that if we keep pushing in the direction of wanting to write everything, configure
everything, compose everything, use the same stack for every scale of problem, then we are
going to push more and more businesses into the arms of "low code" platforms and other tools
that promise to reduce the cost of software by dumbing it down and removing the parts that
brought us to it in the first place.
Our answer to the growing complexity of doing business cannot be adding complexity to the
development process, no matter how elegant it may seem.
We must find ways to manage complexity by simplifying the development process. Because even
though managing complexity is our second most important responsibility, we must always remember
the most important responsibility of software developers: delivering value through working
software.
Common situations are:
- lack of control, leading to unbounded growth
- lack of predictability, leading to unbounded cost
- lack of long-term perspective, leading to ill-informed decisions
Complex software is the enemy of quality.
Complicated = many interrelated parts:
- linear: small change = small impact
- predictable: straight flow, local failure
- decomposable: manageable
Complex = unpredictable and hard to manage:
- emergent: the whole is more than the sum of its parts
- non-linear: small change = big impact?
- cascading failure
- hysteresis: you must understand its history
- indivisible
Refactoring is improving internal quality (reducing complexity) without changing functionality.
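As a minimal, hypothetical sketch of that last point (the shipping-cost function and its numbers are invented purely for illustration): the two versions below return the same results for the same inputs, so functionality is unchanged, while the internal structure becomes flatter and easier to follow.

    # Before: nested conditionals that are hard to follow at a glance.
    def shipping_cost_v1(weight_kg, express):
        if express:
            if weight_kg <= 1:
                cost = 10
            else:
                cost = 10 + (weight_kg - 1) * 4
        else:
            if weight_kg <= 1:
                cost = 5
            else:
                cost = 5 + (weight_kg - 1) * 2
        return cost

    # After: same inputs, same outputs, flatter structure.
    def shipping_cost_v2(weight_kg, express):
        base, per_kg = (10, 4) if express else (5, 2)
        return base + max(weight_kg - 1, 0) * per_kg

    # Quick check that the refactoring preserved behavior.
    assert all(shipping_cost_v1(w, e) == shipping_cost_v2(w, e)
               for w in (0.5, 1, 2, 7) for e in (True, False))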
In the pharmaceutical industry, accuracy and attention to detail are important. Focusing on these things is easier with simplicity,
yet overcomplexity is common in the industry, and it can lead to important details getting overlooked. However, many
companies are trying to address this issue.
In fact, 76% of pharmaceutical
execs believe that reducing complexity leads to sustainable cost reductions. Read on for some of the ways that overcomplexity
harms pharmaceutical companies and what is being done to remedy it.
1. What Students in Pharmaceutical Manufacturing Training Should Know About Overcomplexity's Origins
Overcomplexity is when a system, organization, structure or process is unnecessarily difficult to analyze, solve or make sense
of. In pharmaceutical companies, this is a major issue and hindrance to the industry as a whole. Often, overcomplexity is the
byproduct of innovation and progress, which, despite their obvious advantages, can lead to an organization developing too many moving
parts.
For example, new forms of collaboration as well as scientific innovation can cause overcomplexity because any time something is
added to a process, it becomes more complex. Increasing regulatory scrutiny can also add complexity, as this feedback can focus on
symptoms rather than the root of an issue.
2. Organizational Overhead Can Lead to Too Much Complexity
Organizational complexity occurs when too many personnel are added, in particular department heads. After
pharmaceutical manufacturing training
you will work on teams that can benefit from being lean. Increasing overhead is often done to improve data integrity. For example,
if a company notices an issue with data integrity, they often create new roles for overseeing data governance.
Any time personnel are added for oversight, there is a risk of increased complexity at shop floor level. Fortunately, some companies
are realizing that the best way to deal with issues of data integrity is by improving data handling within departments themselves,
rather than adding new layers of overhead and complexity.
3. Quality Systems Can Create a Backlog
A number of pharmaceutical sites suffer from a backlog of Corrective and Preventive Actions (CAPAs). CAPAs are in place to improve
conformities and quality and they follow the Good Manufacturing Practices you know about from
pharmaceutical manufacturing courses
. However, many of these sit open until there are too many of them to catch up on.
Backlog that is
close to 10 percent of the total number of investigations per year points to a serious issue with the company's system. Some
companies are dealing with this backlog by introducing a risk-based, triaged approach. Triaging allows companies to focus on the
most urgent deviations and CAPAs, thus reducing this key issue of overcomplexity in the pharmaceutical industry.
4. Pharmaceutical Manufacturing Diploma Grads Should Know What Can Help
Some strategies are being adopted to address the root problems of overcomplexity. Radical simplification, for example, is a way
to target what is fundamentally wrong with overly complex organizations and structures. This is a method of continuously improving
data and performance that focuses on improving processes.
Cognitive load reduction is another way to reduce complexity: it looks at forms and documents and attempts to reduce the effort
needed to work with them. In reducing the effort required to perform tasks and fill out forms, more can be accomplished by a team.
Finally, auditors can help reduce complexity by assessing the health of a company's quality systems, such as assessing how many
open CAPAs exist. Understanding these different solutions to overcomplexity could help you excel in your career after your courses.
4.0 out of 5 stars
Everyone is on a learning curve. Reviewed in the United States on February 3, 2009. The author was a programmer before, so in
writing this book he draws both on his personal experience and on his observations to depict the software world.
I think this is more of a practice-and-opinion book than a "philosophy" book; however, I have to agree with him in most
cases.
For example, here is Mike Gancarz's line of thinking:
1. It is hard to get the software design right in the first place, no matter who does it.
2. So it's better to write a short spec first, without considering all factors.
3. Build a prototype to test the assumptions.
4. Use an iterative test/rewrite process until you get it right.
5. Conclusion: Unix evolved from a prototype.
In case you are curious, here are the 9 tenets of Unix/Linux:
1. Small is beautiful.
2. Make each program do one thing well.
3. Build a prototype as soon as possible.
4. Choose portability over efficiency.
5. Store data in flat text files.
6. Use software leverage to your advantage.
7. Use shell scripts to increase leverage and portability.
8. Avoid captive user interfaces.
9. Make every program a filter.
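To make the last tenet concrete, here is a minimal sketch of a filter written in Python (the script name and the pipeline shown in the comment are hypothetical): it reads lines from stdin, drops blank ones, and writes the rest to stdout, so it can be dropped anywhere into a pipe.

    #!/usr/bin/env python3
    # A minimal Unix-style filter: read stdin, drop blank lines, write stdout.
    # Hypothetical usage in a pipeline:  cat notes.txt | python3 nonblank.py | sort | uniq -c
    import sys

    def main():
        for line in sys.stdin:
            if line.strip():              # keep only lines with visible content
                sys.stdout.write(line)

    if __name__ == "__main__":
        main()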
Mike Gancarz told a story like this when he argues "Good programmers write good code; great programmers borrow good code".
"I recall a less-than-top-notch software engineer who couldn't program his way out of a paper bag. He had a knack, however,
for knitting lots of little modules together. He hardly ever wrote any of them himself, though. He would just fish around in the
system's directories and source code repositories all day long, sniffing for routines he could string together to make a complete
program. Heaven forbid that he should have to write any code. Oddly enough, it wasn't long before management recognized him as
an outstanding software engineer, someone who could deliver projects on time and within budget. Most of his peers never realized
that he had difficulty writing even a rudimentary sort routine. Nevertheless, he became enormously successful by simply using
whatever resources were available to him."
If this is not clear enough, Mike also drew analogies between Mick Jagger and Keith Richards and Elvis. The book is full of
inspiring stories to reveal software engineers' tendencies and to correct their mindsets.
I've found a disturbing
trend in GNU/Linux, where largely unaccountable cliques of developers unilaterally decide to make fundamental changes to the way
it works, based on highly subjective and arrogant assumptions, then forge ahead with little regard to those who actually use the
software, much less the well-established principles upon which that OS was originally built. The long litany of examples includes
Ubuntu Unity ,
Gnome Shell ,
KDE 4 , the
/usr partition ,
SELinux ,
PolicyKit ,
Systemd ,
udev and
PulseAudio , to name a few.
The broken features, creeping bloat, and in particular the unhealthy tendency toward more monolithic, less modular code in certain
Free Software projects are a very serious problem, and I am very seriously opposed to it. I abandoned Windows to get away from
that sort of nonsense; I didn't expect to have to deal with it in GNU/Linux.
Clearly this situation is untenable.
The motivation for these arbitrary changes mostly seems to be rooted in the misguided concept of "popularity", which makes no
sense at all for something that's purely academic and non-commercial in nature. More users does not equal more developers. Indeed
more developers does not even necessarily equal more or faster progress. What's needed is more of the right sort of developers,
or at least more of the existing developers to adopt the right methods.
This is the problem with distros like Ubuntu, as the most archetypal example. Shuttleworth pushed hard to attract more users,
with heavy marketing and by making Ubuntu easy at all costs, but in so doing all he did was amass a huge burden, in the form of a
large influx of users who were, by and large, purely consumers, not contributors.
As a result, many of those now using GNU/Linux are really just typical Microsoft or Apple consumers, with all the baggage that
entails. They're certainly not assets of any kind. They have expectations forged in a world of proprietary licensing and commercially-motivated,
consumer-oriented, Hollywood-style indoctrination, not academia. This is clearly evidenced by their
belligerently hostile attitudes toward the GPL, FSF,
GNU and Stallman himself, along with their utter contempt for security and other well-established UNIX paradigms, and their unhealthy
predilection for proprietary software, meaningless aesthetics and hype.
Reading the Ubuntu forums is an exercise in courting abject despair, as one witnesses an ignorant horde demand GNU/Linux be mutated
into the bastard son of Windows and Mac OS X. And Shuttleworth, it seems, is
only too happy
to oblige , eagerly assisted by his counterparts on other distros and upstream projects, such as Lennart Poettering and Richard
Hughes, the former of whom has somehow convinced every distro to mutate the Linux startup process into a hideous
monolithic blob , and the latter of whom successfully managed
to undermine 40 years of UNIX security in a single stroke, by
obliterating the principle that unprivileged
users should not be allowed to install software system-wide.
GNU/Linux does not need such people, indeed it needs to get rid of them as a matter of extreme urgency. This is especially true
when those people are former (or even current) Windows programmers, because they not only bring with them their indoctrinated expectations,
misguided ideologies and flawed methods, but worse still they actually implement them , thus destroying GNU/Linux from within.
Perhaps the most startling example of this was the Mono and Moonlight projects, which not only burdened GNU/Linux with all sorts
of "IP" baggage, but instigated a sort of invasion of Microsoft "evangelists" and programmers, like a Trojan horse, who subsequently
set about stuffing GNU/Linux with as much bloated, patent
encumbered garbage as they could muster.
I was part of a group who campaigned relentlessly for years to oust these vermin and undermine support for Mono and Moonlight,
and we were largely successful. Some have even suggested that my
diatribes ,
articles and
debates (with Miguel
de Icaza and others) were instrumental in securing this victory, so clearly my efforts were not in vain.
Amassing a large user-base is a highly misguided aspiration for a purely academic field like Free Software. It really only makes
sense if you're a commercial enterprise trying to make as much money as possible. The concept of "market share" is meaningless for
something that's free (in the commercial sense).
Of course Canonical is also a commercial enterprise, but it has yet to break even, and all its income is derived through support
contracts and affiliate deals, none of which depends on having a large number of Ubuntu users (the Ubuntu One service is cross-platform,
for example).
"... The author's instincts on sysadmin-related issues are mostly right: he is suspicious of systemd and other perversions in modern Linuxes, he argues for simplicity in software, he warns us about the PHB problem in IT departments, and he points out the importance of documentation, etc. ..."
"... maybe the set of topics that the author discusses is the main value of the book. ..."
"... in many cases, the right solution is to avoid those subsystems or software packages like the plague and use something simpler. Recently, avoiding Linux flavors with systemd also can qualify as a solution ;-) ..."
"... For example, among others, the author references a rare and underappreciated but very important book, "Putt's Law and the Successful Technocrat: How to Win in the Information Age" by Archibald Putt (2006-04-28). This is the origin of the famous Putt's Law, "Technology is dominated by two types of people, those who understand what they do not manage and those who manage what they do not understand," and of Putt's Corollary, "Every technical hierarchy, in time, develops a competence inversion." This reference alone is probably worth half the price of the book for sysadmins who have never heard of Putt's Law. ..."
"... Linux (as of the monstrous RHEL 7 with systemd, network manager, and other perversions, which raised the complexity of the OS at least twofold) has become way too complex for a human brain. It is impossible to remember all the important details and lessons learned from Internet browsing, your SNAFUs, and important tickets. Unless converted into a private knowledge base, most of such valuable knowledge disappears in, say, six months or so. And the idea of using the corporate helpdesk as a knowledge database is in most cases a joke. ..."
Some valuable tips. Can serve as fuel for your own thoughts.
This book is most interesting probably for people who can definitely do well without it – seasoned sysadmins and educators.
Please ignore the word "philosophy" in the title. Most sysadmins do not want to deal with "philosophy" ;-). And this book does
not rise to the level of philosophy in any case. It is just a collection of valuable (and not so valuable) tips from the author's
career as the sysadmin of a small lab, thinly dispersed over 500 pages. Each chapter can serve as fuel for your own thoughts.
The author's instincts on sysadmin-related issues are mostly right: he is suspicious of systemd and other perversions in
modern Linuxes, he argues for simplicity in software, he warns us about the PHB problem in IT departments, and he points out the
importance of documentation, etc.
In some cases I disagreed with the author, or viewed his treatment of the topic as somewhat superficial, but still, his points
created the kind of "virtual discussion" that has a value of its own. And maybe the set of topics that the author discusses
is the main value of the book.
I would classify this book as a "tips" book, in which the author shares his approach to this or that problem (sometimes IMHO wrong,
but still interesting ;-), as distinct from the more numerous and often boring, but much better-selling, class of "how to" books.
The latter explain in gory detail how to deal with a particular complex Unix/Linux subsystem, or a particular role (for example,
system administrator of Linux servers). But in many cases, the right solution is to avoid those subsystems or software packages
like the plague and use something simpler. Recently, avoiding Linux flavors with systemd also can qualify as a solution ;-)
This book is different. It is mostly about how to approach some typical system tasks, which arise on the level of a small lab
(that the lab is small is clear from the coverage of backups). The author advances an important idea of experimentation as a way
of solving the problem and optimizing your existing setup and work habits.
The book contains an overview of good practices for using some essential sysadmin tools such as screen and sudo. In the last
chapter, the author even briefly mentions (just mentions) a very important social problem -- the problem of micromanagers. The latter
is a real cancer in Unix departments of large corporations (and not only in Unix departments).
All chapters contain a "webliography" at the end, adding to the value of the book. While the Kindle version of the book is badly formatted
for PC (it is OK on a Samsung 10" tablet, which I would recommend using for reading instead), the references in the Kindle version are
clickable. Reading them along with the book, including the author's articles at opensource.com, enhances the book's
value greatly.
For example, among others, the author references a rare and underappreciated but very important book, "Putt's Law and
the Successful Technocrat: How to Win in the Information Age" by Archibald Putt (2006-04-28). This is the origin of the famous Putt's Law, "Technology
is dominated by two types of people, those who understand what they do not manage and those who manage what they do not understand,"
and of Putt's Corollary, "Every technical hierarchy, in time, develops a competence inversion." This reference alone
is probably worth half the price of the book for sysadmins who have never heard of Putt's Law.
Seasoned sysadmins can probably just skim Parts I-III (IMHO those chapters are somewhat simplistic). For example, you
can skip the introduction to the author's Linux philosophy, his views on contributing to open source, and similar chapters that contain
trivial information. I would start reading the book from Part IV (Becoming Zen), which consists of almost a dozen interesting
topics. Each of them is covered very briefly (which is a drawback), but they can serve as starters for your own thought process
and your own research. The selection of topics is very good and IMHO constitutes the main value of the book.
For example, the author raises a very important issue in chapter 20, Document Everything, but unfortunately this chapter
is too brief, and he does not address the most important thing: a sysadmin should work out some way to organize his or her personal knowledge,
for example as a private website. Maintenance of such a private knowledge base is a crucial instrument for any Linux sysadmin worth
his/her salary, and is a part of daily tasks probably worth 10% of a sysadmin's time. The quote "Those who cannot learn from history are
doomed to repeat it" has a very menacing meaning in the sysadmin world.
Linux (as of the monstrous RHEL 7 with systemd, network manager, and other perversions, which raised the complexity of the OS
at least twofold) has become way too complex for a human brain. It is impossible to remember all the important details and lessons
learned from Internet browsing, your SNAFUs, and important tickets. Unless converted into a private knowledge base, most of such valuable
knowledge disappears in, say, six months or so. And the idea of using the corporate helpdesk as a knowledge database is in most cases
a joke.
The negative part of the book is that the author spreads himself too thin and tries to cover too much ground. That means that
the treatment of most topics becomes superficial. Also, the provided examples of shell scripts are written in a classic shell style, not Bash
4.x-style code. That helps portability (if you need it) but does not help you understand the new features of Bash 4.x. Bash is available
now on most Unixes, such as AIX, Solaris, and HP-UX, and that solves portability issues in a different, and more productive, way.
Portability was killed by systemd anyway, unless you want to write wrappers for systemctl-related functions ;-)
For an example of the author's writing, please search for his recent (Oct 30, 2018) article "Working with data streams on the Linux
command line". That might give you a better idea of what to expect.
In my view, the book contains enough wisdom to pay $32 for it (the Kindle edition price), especially if you can do it at company
expense :-). The book is also valuable for educators. Again, the most interesting part is Part IV:
Part IV: Becoming Zen 325
Chapter 17: Strive for Elegance 327
Hardware Elegance 327
The PC8 328
Motherboards 328
Computers 329
Data Centers 329
Power and Grounding 330
Software Elegance 331
Fixing My Web Site 336
Removing Cruft 338
Old or Unused Programs 338
Old Code In Scripts 342
Old Files 343
A Final Word 350
Chapter 18: Find the Simplicity 353
Complexity in Numbers 353
Simplicity In Basics 355
The Never-Ending Process of Simplification 356
Simple Programs Do One Thing 356
Simple Programs Are Small 359
Simplicity and the Philosophy 361
Simplifying My Own Programs 361
Simplifying Others' Programs 362
Uncommented Code 362
Hardware 367
Linux and Hardware 368
The Quandary 369
The Last Word
Chapter 19: Use Your Favorite Editor 371
More Than Editors 372
Linux Startup 372
Why I Prefer SystemV 373
Why I Prefer systemd 373
The Real Issue 374
Desktop 374
sudo or Not sudo 375
Bypass sudo 376
Valid Uses for sudo 378
A Few Closing Words 379
Chapter 20: Document Everything 381
The Red Baron 382
My Documentation Philosophy 383
The Help Option 383
Comment Code Liberally 384
My Code Documentation Process 387
Man Pages 388
Systems Documentation 388
System Documentation Template 389
Document Existing Code 392
Keep Docs Updated 393
File Compatibility 393
A Few Thoughts 394
Chapter 21: Back Up Everything - Frequently 395
Data Loss 395
Backups to the Rescue 397
The Problem 397
Recovery 404
Doing It My Way 405
Backup Options 405
Off-Site Backups 413
Disaster Recovery Services 414
Other Options 415
What About the "Frequently" Part? 415
Summary 415
Chapter 22: Follow Your Curiosity 417
Charlie 417
Curiosity Led Me to Linux 418
Curiosity Solves Problems 423
Securiosity 423
Follow Your Own Curiosity 440
Be an Author 441
Failure Is an Option 441
Just Do It 442
Summary 443
Chapter 23: There Is No Should 445
There Are Always Possibilities 445
Unleashing the Power 446
Problem Solving 447
Critical Thinking 449
Reasoning to Solve Problems 450
Integrated Reason 453
Self-Knowledge 455
Finding Your Center 455
The Implications of Diversity 456
Measurement Mania 457
The Good Manager 458
Working Together 458
Silo City 460
The Easy Way 461
Thoughts 462
Chapter 24: Mentor the Young SysAdmins 463
Hiring the Right People 464
Mentoring 465
Bruce the Mentor 466
The Art of Problem Solving 467
The Five Steps of Problem Solving 467
Knowledge 469
Observation 469
Reasoning 472
Action 473
Test 473
Example 474
Iteration 475
Concluding Thoughts 475
Chapter 25: Support Your Favorite Open Source Project 477
Project Selection 477
Code 478
Test 479
Submit Bug Reports 479
Documentation 480
Assist 481
Teach 482
Write 482
Donate 483
Thoughts 484
Chapter 26: Reality Bytes 485
People 485
The Micromanager 486
More Is Less 487
Tech Support Terror 488
You Should Do It My Way 489
It's OK to Say No 490
The Scientific Method 490
Understanding the Past 491
Final Thoughts 492
I've seen many infrastructures in my day. I work for a company with a very complicated infrastructure now. They've got a dev/stage/prod
environment for every product (and they've got many of them). Trust is not a word spoken lightly here. There is no 'trust' for even
sysadmins (I've been working here for 7 months now and still don't have production sudo access). Developers constantly complain about
not having the access that they need to do their jobs and there are multiple failures a week that can only be fixed by a small handful
of people that know the (very complex) systems in place. Not only that, but in order to save work, they've used every cutting-edge
piece of software that they can get their hands on (mainly to learn it so they can put it on their resume, I assume), and this causes
more complexity that only a handful of people can manage. As a result, the site uptime is (on a good month) 3 nines at best.
In my last position (pronto.com) I put together an infrastructure that any idiot could maintain. I used unmanaged switches behind
a load-balancer/firewall and a few VPNs around to the different sites. It was simple. It had very little complexity, and a new sysadmin
could take over in a very short time if I were to be hit by a bus. A single person could run the network and servers and if the documentation
was lost, a new sysadmin could figure it out without much trouble.
Over time, I handed off my ownership of many of the Infrastructure components to other people in the operations group and of course,
complexity took over. We ended up with a multi-tier network with bunches of VLANs and complexity that could only be understood with
charts, documentation and a CCNA. Now the team is 4+ people and if something happens, people run around like chickens with their
heads cut off not knowing what to do or who to contact when something goes wrong.
Complexity kills productivity. Security is inversely proportionate to usability. Keep it simple, stupid. These are all rules to
live by in my book.
Downtimes:
Beatport: not unlikely to have 1-2 hours of downtime for the main site per month.
Pronto: several 10-15 minute outages a year.
Pronto (under my supervision): a few seconds a month (mostly human error though, no mechanical failure).
John Waclawsky (from Cisco's mobile solutions group), coined the term S4 for "Systems
Standards Stockholm Syndrome" - like hostages becoming attached to their captors, systems
standard participants become wedded to the process of setting standards for the sake of
standards.
"... The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards Stockholm Syndrome" (S4) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking. ..."
"... What causes S4? Captives identify with their captors initially as a defensive mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors, such as granting a secretarial role (often called a "chair") to a captive in a working group are magnified, since finding perspective in a systems standards meeting, just like a hostage situation, is by definition impossible. Rescue attempts are problematic, since the captive could become mentally incapacitated by suddenly being removed from a codependent environment. ..."
This was sent to me by a colleague. From "S4 -- The System Standards Stockholm
Syndrome" by John G. Waclawsky, Ph.D.:
The "Stockholm Syndrome" describes the behavior of some hostages. The "System
Standards Stockholm Syndrome" (S4) describes the behavior of system standards
participants who, over time, become addicted to technology complexity and hostages of
group thinking.
12:45 PM -- While we flood you with IMS-related content this week, perhaps it's sensible to
share some airtime with a clever warning about being held "captive" to the hype.
This warning comes from John G. Waclawsky, PhD, senior technical staff, Wireless Group,
Cisco Systems Inc. (Nasdaq: CSCO).
Waclawsky, writing in the July issue of Business Communications Review , compares the fervor over
IMS to the " Stockholm Syndrome ," a term that
comes from a 1973 hostage event in which hostages became sympathetic to their captors.
Waclawsky says a form of the Stockholm Syndrome has taken root in technical standards
groups, which he calls "System Standards Stockholm Syndrome," or S4.
Here's a snippet from Waclawsky's column:
What causes S4? Captives identify with their captors initially as a defensive
mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors,
such as granting a secretarial role (often called a "chair") to a captive in a working
group are magnified, since finding perspective in a systems standards meeting, just like
a hostage situation, is by definition impossible. Rescue attempts are problematic, since
the captive could become mentally incapacitated by suddenly being removed from a
codependent environment.
The full article can be found here -- R. Scott Raynovich, US
Editor, Light Reading
Sunday, August 07, 2005: S4 - The Systems Standards Stockholm
Syndrome. John Waclawsky, part of the Mobile Wireless Group at Cisco Systems, features an
interesting article in the July 2005 issue of the Business Communications Review on The Systems Standards Stockholm Syndrome.
Since his responsibilities include standards activities (WiMAX, IETF, OMA, 3GPP and TISPAN),
identification of product requirements and the definition of mobile wireless and broadband
architectures, he seems to know very well what he is talking about, namely the IP Multimedia
Subsytem (IMS). See also his article in the June 2005 issue on IMS 101 - What You Need To Know Now .
See also the Wikedpedia glossary from Martin
below:
IMS. Internet Monetisation System . A minor adjustment to Internet Protocol to add a
"price" field to packet headers. Earlier versions referred to Innovation Minimisation
System . This usage is now deprecated. (Expected release Q2 2012, not available in all
markets, check with your service provider in case of sudden loss of unmediated
connectivity.)
It is so true that I have to cite it completely (bold emphasis added):
The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards
Stockholm Syndrome" (S4) describes the behavior of system standards participants
who, over time, become addicted to technology complexity and hostages of group thinking.
Although the original name derives from a 1973 hostage incident in Stockholm, Sweden, the
expanded name and its acronym, S4, applies specifically to systems standards
participants who suffer repeated exposure to cult dogma contained in working group documents
and plenary presentations. By the end of a week in captivity, Stockholm Syndrome victims may
resist rescue attempts, and afterwards refuse to testify against their captors. In system
standards settings, S4 victims have been known to resist innovation and even refuse to
compete against their competitors.
Recent incidents involving too much system standards attendance have resulted in people
being captured by radical ITU-like factions known as the 3GPP or 3GPP2.
I have to add of course ETSI TISPAN and it seems that the syndrome is also spreading into
IETF, especially to SIP and SIPPING.
The victims evolve to unwitting accomplices of the group as they become immune to the
frustration of slow plodding progress, thrive on complexity and slowly turn a blind eye to
innovative ideas. When released, they continue to support their captors in filtering out
disruptive innovation, and have been known to even assist in the creation and perpetuation of
bureaucracy.
Years after intervention and detoxification, they often regret their system standards
involvement. Today, I am afraid that S4 cases occur regularly at system standards
organizations.
What causes S4? Captives identify with their captors initially as a defensive
mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors,
such as granting a secretarial role (often called a "chair") to a captive in a working group
are magnified, since finding perspective in a systems standards meeting, just like a hostage
situation, is by definition impossible. Rescue attempts are problematic, since the captive
could become mentally incapacitated by suddenly being removed from a codependent
environment.
It's important to note that these symptoms occur under tremendous emotional and/or
physical duress due to lack of sleep and abusive travel schedules. Victims of S4
often report the application of other classic "cult programming" techniques, including:
The encouraged ingestion of mind-altering substances. Under the influence of alcohol,
complex systems standards can seem simpler and almost rational.
"Love-fests" in which victims are surrounded by cultists who feign an interest in them
and their ideas. For example, "We'd love you to tell us how the Internet would solve this
problem!"
Peer pressure. Professional, well-dressed individuals with standing in the systems
standards bureaucracy often become more attractive to the captive than the casual sorts
commonly seen at IETF meetings.
Back in their home environments, S4 victims may justify continuing their
bureaucratic behavior, often rationalizing and defending their system standard tormentors,
even to the extent of projecting undesirable system standard attributes onto component
standards bodies. For example, some have been heard murmuring, "The IETF is no picnic and
even more bureaucratic than 3GPP or the ITU," or, "The IEEE is hugely political." (For more
serious discussion of component and system standards models, see "Closed Architectures, Closed Systems And
Closed Minds," BCR, October 2004.)
On a serious note, the ITU's IMS (IP Multimedia Subsystem) shows every sign of becoming
the latest example of systems standards groupthink. Its concepts are more than seven years
old and still not deployed, while its release train lengthens with functional expansions and
change requests. Even a cursory inspection of the IMS architecture reveals the complexity
that results from:
decomposing every device into its most granular functions and linkages; and
tracking and controlling every user's behavior and related billing.
The proliferation of boxes and protocols, and the state management required for data
tracking and control, lead to cognitive overload but little end user value.
It is remarkable that engineers who attend system standards bodies and use modern
Internet- and Ethernet-based tools don't apply to their work some of the simplicity learned
from years of Internet and Ethernet success: to build only what is good enough, and as simply
as possible.
Now here I have to break in: I think the syndrome is also spreading to
the IETF, because the IETF is starting to leave these principles behind - especially in SIP
and SIPPING, not to mention Session Border Confuser (SBC).
The lengthy and detailed effort that characterizes systems standards sometimes produces a
bit of success, as the 18 years of GSM development (1980 to 1998) demonstrate. Yet such
successes are highly optimized, very complex and thus difficult to upgrade, modify and
extend.
Email is a great example. More than 15 years of popular email usage have passed, and today
email on wireless is just beginning to approach significant usage by ordinary people.
The IMS is being hyped as a way to reduce the difficulty of integrating new services, when
in fact it may do just the opposite. IMS could well inhibit new services integration due to
its complexity and related impacts on cost, scalability, reliability, OAM, etc.
Not to mention the sad S4 effects on all those engineers participating in
IMS-related standards efforts.
Make each program do one thing well. To do a new job, build afresh rather than
complicate old programs by adding new features.
By now, and to be frank in the last 30 years too, this is complete and utter bollocks.
Feature creep is everywhere, typical shell tools are chock-full of spurious additions, from
formatting to "side" features, all half-assed and barely, if at all, consistent.
By now, and to be frank in the last 30 years too, this is complete and utter
bollocks.
There is not one single other idea in computing that is as unbastardised as the unix
philosophy - given that it's been around fifty years. Heck, Microsoft only just developed
PowerShell - and if that's not Microsoft's take on the Unix philosophy, I don't know what
is.
In that same time, we've vacillated between thick and thin computing (mainframes, thin
clients, PCs, cloud). We've rebelled against at least four major schools of program design
thought (structured, procedural, symbolic, dynamic). We've had three different database
revolutions (RDBMS, NoSQL, NewSQL). We've gone from grassroots movements to corporate
dominance on countless occasions (notably - the internet, IBM PCs/Wintel, Linux/FOSS, video
gaming). In public perception, we've run the gamut from clerks ('60s-'70s) to boffins
('80s) to hackers ('90s) to professionals ('00s post-dotcom) to entrepreneurs/hipsters/bros
('10s "startup culture").
It's a small miracle that iproute2 only has formatting options and
grep only has --color. If they feature-crept anywhere near the same
pace as the rest of the computing world, they would probably be a RESTful SaaS microservice
with ML-powered autosuggestions.
This is because adding a new feature is actually easier than trying to figure out how
to do it the Unix way - often you already have the data structures in memory and the
functions to manipulate them at hand, so adding a --frob parameter that does
something special with that feels trivial.
GNU and their stance to ignore the Unix philosophy (AFAIK Stallman said at some point he
didn't care about it) while becoming the most available set of tools for Unix systems
didn't help either.
No, it certainly isn't. There are tons of well-designed, single-purpose tools
available for all sorts of purposes. If you live in the world of heavy, bloated GUI apps,
well, that's your prerogative, and I don't begrudge you it, but just because you're not
aware of alternatives doesn't mean they don't exist.
typical shell tools are chock-full of spurious additions,
What does "feature creep" even mean with respect to shell tools? If they have lots of
features, but each function is well-defined and invoked separately, and still conforms to
conventional syntax, uses stdio in the expected way, etc., does that make it un-Unixy? Is
BusyBox bloatware because it has lots of discrete shell tools bundled into a single
binary?
I have succumbed to the temptation you offered in your preface: I do write you off
as envious malcontents and romantic keepers of memories. The systems you remember so
fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out
to pasture, they are fertilizing it from below.
Your judgments are not keen, they are intoxicated by metaphor. In the Preface you
suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag.
In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and
addled by puffiness of the genome.
Yet your prison without coherent design continues to imprison you. How can this
be, if it has no strong places? The rational prisoner exploits the weak places,
creates order from chaos: instead, collectives like the FSF vindicate their jailers
by building cells almost compatible with the existing ones, albeit with more
features. The journalist with three undergraduate degrees from MIT, the researcher at
Microsoft, and the senior scientist at Apple might volunteer a few words about the
regulations of the prisons to which they have been transferred.
Your sense of the possible is in no sense pure: sometimes you want the same thing
you have, but wish you had done it yourselves; other times you want something
different, but can't seem to get people to use it; sometimes one wonders why you just
don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice,
just a future whose intellectual tone and interaction style is set by Sonic the
Hedgehog. You claim to seek progress, but you succeed mainly in whining.
Here is my metaphor: your book is a pudding stuffed with apposite observations,
many well-conceived. Like excrement, it contains enough undigested nuggets of
nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of
contempt and of envy.
"... There's still value in understanding the traditional UNIX "do one thing and do it well" model where many workflows can be done as a pipeline of simple tools each adding their own value, but let's face it, it's not how complex systems really work, and it's not how major applications have been working or been designed for a long time. It's a useful simplification, and it's still true at /some/ level, but I think it's also clear that it doesn't really describe most of reality. ..."
There's still value in understanding the traditional UNIX "do one thing and do it
well" model where many workflows can be done as a pipeline of simple tools each adding their
own value, but let's face it, it's not how complex systems really work, and it's not how
major applications have been working or been designed for a long time. It's a useful
simplification, and it's still true at /some/ level, but I think it's also clear that it
doesn't really describe most of reality.
http://www.itwire.com/business-it-news/open-source/65402-torvalds-says-he-has-no-strong-opinions-on-systemd
Almost nothing on the Desktop works as the original Unix inventors prescribed as the "Unix
way", and even editors like "Vim" are questionable since it has integrated syntax
highlighting and spell checker. According to dogmatic Unix Philosophy you should use "ed, the
standard editor" to compose the text and then pipe your text into "spell". Nobody really
wants to work that way.
But while "Unix Philosophy" in many ways have utterly failed as a way people actually work
with computers and software, it is still very good to understand, and in many respects still
very useful for certain things. Personally I love those standard Linux text tools like
"sort", "grep" "tee", "sed" "wc" etc, and they have occasionally been very useful even
outside Linux system administration.
Boston Dynamics, a robotics company known for its four-legged robot "dog," this week
announced a new product, a computer-vision enabled mobile warehouse robot named "Stretch."
Developed in response to growing demand for automation in warehouses, the robot can reach up
to 10 feet inside of a truck to pick up and unload boxes up to 50 pounds each. The robot has a
mobile base that can maneuver in any direction and navigate obstacles and ramps, as well as a
robotic arm and a gripper. The company estimates that there are more than 500 billion boxes
annually that get shipped around the world, and many of those are currently moved manually.
"It's a pretty arduous job, so the idea with Stretch is that it does the manual labor part
of that job," said Robert Playter, chief executive of the Waltham, Mass.-based company.
The pandemic has accelerated [automation of] e-commerce and logistics operations even more
over the past year, he said.
... ... ...
... the robot was made to recognize boxes of different sizes, textures and colors. For
example, it can recognize both shrink-wrapped cases and cardboard boxes.
Eventually, Stretch could move through an aisle of a warehouse, picking up different
products and placing them on a pallet, Mr. Playter said.
I keep happening on these mentions of manufacturing jobs succumbing to automation, and I
can't think of where these people are getting their information.
I work in manufacturing. Production manufacturing, in fact, involving hundreds, thousands,
tens of thousands of parts produced per week. Automation has come a long way, but it also
hasn't. A layman might marvel at the technologies while taking a tour of the factory, but
upon closer inspection, the returns have greatly diminished over the last two decades. Advances
have afforded greater precision and cheaper technologies, but the only reason China is a giant
of manufacturing is that labor is cheap. They automate less than Western factories, not
more, because humans cost next to nothing while machines are expensive.
"... I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution. ..."
I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of
abstraction. I wrote about this before in
the false abstraction antipattern
So what is your story of there being too many layers in the code? Or do you disagree with the quote, or with us?
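As a purely hypothetical illustration of the kind of layering the quote complains about (none of this is the code from that project; all names are invented), compare a three-class "lasagna" with the one function it boils down to:

# Hypothetical "lasagna": three classes to do what one function does.
# All names here are invented for the illustration.
from abc import ABC, abstractmethod

class GreeterStrategy(ABC):
    @abstractmethod
    def greet(self, name: str) -> str: ...

class DefaultGreeterStrategy(GreeterStrategy):
    def greet(self, name: str) -> str:
        return f"Hello, {name}!"

class GreeterFactory:
    @staticmethod
    def create() -> GreeterStrategy:
        return DefaultGreeterStrategy()

print(GreeterFactory.create().greet("world"))   # the layered call site

# The direct equivalent such a refactoring collapses it to:
def greet(name: str) -> str:
    return f"Hello, {name}!"

print(greet("world"))

The refactoring described in the quote is essentially this collapse, repeated a hundred classes at a time.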
Shrek: Object-oriented programs are like onions. Donkey: They stink? Shrek: Yes. No. Donkey: Oh, they make you cry. Shrek: No. Donkey: Oh, you leave em out in the sun, they get all brown, start sproutin’ little white hairs. Shrek: No. Layers. Onions have layers. Object-oriented programs have layers. Onions have layers. You get it? They both have
layers. Donkey: Oh, they both have layers. Oh. You know, not everybody like onions. 8 likes
Reply Dec 8 '18
Unrelated, but I love both spaghetti and lasagna 6 likes
Reply
I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no
longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality.
Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution.
I think there's a very pervasive mentality of "I must use these tools, design patterns, etc." instead of "I need to
solve a problem" and then only use the tools that are really necessary. I'm not sure where it comes from, but there's a kind of brainwashing
that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a fundamental
problem in software development... 4 likes
Reply
I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build
up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class
with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always
a better fit for re-using code. 2 likes
Reply
Inheritance is my preferred option for things that model type hierarchies. For example, widgets in a UI, or literal types in a
compiler.
One reason inheritance is over-used is because languages don't offer enough options to do composition correctly. It ends up becoming
a lot of boilerplate code. Proper support for mixins would go a long way to reducing bad inheritance. 2 likes
Reply
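A small sketch of the contrast these comments are drawing, with invented classes and assuming nothing about any particular codebase: a feature-per-subclass inheritance chain next to a class that composes the same behaviour from small parts:

# Invented example: static code sharing through inheritance vs. composition.
import zlib

# Inheritance: every new feature adds another link to the chain.
class Storage:
    def save(self, data: bytes) -> bytes:
        return data

class CompressedStorage(Storage):
    def save(self, data: bytes) -> bytes:
        return zlib.compress(super().save(data))

class EncryptedCompressedStorage(CompressedStorage):
    def save(self, data: bytes) -> bytes:
        return bytes(b ^ 0x5A for b in super().save(data))

# Composition: the same behaviour assembled from small, independent parts.
class ZlibCompressor:
    def apply(self, data: bytes) -> bytes:
        return zlib.compress(data)

class XorCipher:
    def __init__(self, key: int):
        self.key = key
    def apply(self, data: bytes) -> bytes:
        return bytes(b ^ self.key for b in data)

class Repository:
    """Holds its collaborators instead of inheriting from them."""
    def __init__(self, *steps):
        self.steps = steps
    def save(self, data: bytes) -> bytes:
        for step in self.steps:
            data = step.apply(data)
        return data

repo = Repository(ZlibCompressor(), XorCipher(0x5A))
assert repo.save(b"x" * 100) == EncryptedCompressedStorage().save(b"x" * 100)

Adding a feature to the composed version means adding a collaborator, not another link in a static chain.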
It is always up to the task. For small programs of course you don't need so many layers, interfaces and so on. For a bigger,
more complex one you need them to avoid a lot of issues: code duplication, unreadable code, constant merge conflicts, etc. 2 likes
Reply
I'm building a personal project as a means to get something from zero to production for learning purposes, and I am struggling with
wiring the front-end with the back. Either I dump all the code in the fetch callback or I use DTOs, two sets of interfaces to describe
API data structure and internal data structure... It's a mess really, but I haven't found a good level of compromise. 2 likes
Reply
It's interesting, because a project that gets burned by spaghetti can drift into lasagna code to overcompensate. Still bad, but lasagna
code is somewhat more manageable (just a huge headache to reason about).
But having an ungodly combination of those two... I dare not think about it. shudder 2 likes
Reply
Sidenote before I finish listening: I appreciate that I can minimize the browser on mobile and have this keep playing, unlike
with other apps (looking at you, YouTube). 2 likes
Reply
The pasta theory is a theory of programming. It is a common analogy for application development describing different programming
structures as popular pasta dishes. Pasta theory highlights the shortcomings of the code. These analogies include spaghetti, lasagna
and ravioli code.
Code smells or anti-patterns are a common classification of source code quality. There is also a classification based on food, which
you can find on Wikipedia.
Spaghetti code is a pejorative term for source code that has a complex and tangled control structure, especially one using many
GOTOs, exceptions, threads, or other "unstructured" branching constructs. It is named such because program flow tends to look
like a bowl of spaghetti, i.e. twisted and tangled. Spaghetti code can be caused by several factors, including inexperienced programmers
and a complex program which has been continuously modified over a long life cycle. Structured programming greatly decreased the incidence
of spaghetti code.
Ravioli code
Ravioli code is a type of computer program structure, characterized by a number of small and (ideally) loosely-coupled software
components. The term is in comparison with spaghetti code, comparing program structure to pasta; with ravioli (small pasta pouches
containing cheese, meat, or vegetables) being analogous to objects (which ideally are encapsulated modules consisting of both code
and data).
Lasagna code
Lasagna code is a type of program structure, characterized by several well-defined and separable layers, where each layer of code
accesses services in the layers below through well-defined interfaces. The term is in comparison with spaghetti code, comparing program
structure to pasta.
Spaghetti with meatballs
The term "spaghetti with meatballs" is a pejorative term used in computer science to describe loosely constructed object-oriented
programming (OOP) that remains dependent on procedural code. It may be the result of a system whose development has transitioned
over a long life-cycle, language constraints, micro-optimization theatre, or a lack of coherent coding standards.
Do you know about other interesting source code classification?
When introducing a new tool, programming language, or dependency into your environment, what
steps do you take to evaluate it? In this article, I will walk through a six-question framework
I use to make these determinations.
What problem am I trying to solve?
We all get caught up in the minutiae of the immediate problem at hand. An honest, critical
assessment helps divulge broader root causes and prevents micro-optimizations.
Let's say you are experiencing issues with your configuration management system. Day-to-day
operational tasks are taking longer than they should, and working with the language is
difficult. A new configuration management system might alleviate these concerns, but make sure
to take a broader look at this system's context. Maybe switching from virtual machines to
immutable containers eases these issues and more across your environment while being an
equivalent amount of work. At this point, you should explore the feasibility of more
comprehensive solutions as well. You may decide that this is not a feasible project for the
organization at this time due to a lack of organizational knowledge around containers, but
conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next
quarter.
This intellectual exercise helps you drill down to the root causes and solve core issues,
not the symptoms of larger problems. This is not always going to be possible, but be
intentional about making this decision.
Now that we have identified the problem, it is time for critical evaluation of both
ourselves and the selected tool.
A particular technology might seem appealing because it is new, because you read a cool blog
post about it, or because you want to be the one giving a conference talk. Bells and whistles can be
nice, but the tool must resolve the core issues you identified in the first
question.
What am I giving up?
The tool will, in fact, solve the problem, and we know we're solving the right
problem, but what are the tradeoffs?
These considerations can be purely technical. Will the lack of observability tooling prevent
efficient debugging in production? Does the closed-source nature of this tool make it more
difficult to track down subtle bugs? Is managing yet another dependency worth the operational
benefits of using this tool?
Additionally, include the larger organizational, business, and legal contexts that you
operate under.
Are you giving up control of a critical business workflow to a third-party vendor? If that
vendor doubles their API cost, is that something that your organization can afford and is
willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of
proprietary information? Does the software licensing make this difficult to use
commercially?
While not simple questions to answer, taking the time to evaluate this upfront will save you
a lot of pain later on.
Is the project or vendor healthy?
This question comes with the addendum "for the balance of your requirements." If you only
need a tool to get your team over a four to six-month hump until Project X is
complete, this question becomes less important. If this is a multi-year commitment and the tool
drives a critical business workflow, this is a concern.
When going through this step, make use of all available resources. If the solution is open
source, look through the commit history, mailing lists, and forum discussions about that
software. Does the community seem to communicate effectively and work well together, or are
there obvious rifts between community members? If part of what you are purchasing is a support
contract, use that support during the proof-of-concept phase. Does it live up to your
expectations? Is the quality of support worth the cost?
Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as
well. Something might hit the front page of a news aggregator and receive attention for a few
days, but a deeper look might reveal that only a couple of core developers are actually working
on a project, and they've had difficulty finding outside contributions. Maybe a tool is open
source, but a corporate-funded team drives core development, and support will likely cease if
that organization abandons the project. Perhaps the API has changed every six months, causing a
lot of pain for folks who have adopted earlier versions.
What are the risks?
As a technologist, you understand that nothing ever goes as planned. Networks go down,
drives fail, servers reboot, rows in the data center lose power, entire AWS regions become
inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.
Ask yourself how this tooling could fail and what the impact would be. If you are adding a
security vendor product to your CI/CD pipeline, what happens if the vendor goes
down?
This brings up both technical and business considerations. Do the CI/CD pipelines simply
time out because they can't reach the vendor, or do you have it "fail open" and allow the
pipeline to complete with a warning? This is a technical problem but ultimately a business
decision. Are you willing to go to production with a change that has bypassed the security
scanning in this scenario?
Obviously, this task becomes more difficult as we increase the complexity of the system.
Thankfully, sites like k8s.af consolidate example
outage scenarios. These public postmortems are very helpful for understanding how a piece of
software can fail and how to plan for that scenario.
What are the costs?
The primary considerations here are employee time and, if applicable, vendor cost. Is that
SaaS app cheaper than more headcount? If you save each developer on the team two hours a day
with that new CI/CD tool, does it pay for itself over the next fiscal year?
Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral
if you save the dev team a couple of hours a day, but you're removing a huge blocker in their
daily workflow, and they would be much happier for it. That happiness is likely worth the
financial cost. Onboarding new developers is costly, so don't underestimate the value of
increased retention when making these calculations.
I hope you've found this framework insightful, and I encourage you to incorporate it into
your own decision-making processes. There is no one-size-fits-all framework that works for
every decision. Don't forget that, sometimes, you might need to go with your gut and make a
judgment call. However, having a standardized process like this will help differentiate between
those times when you can critically analyze a decision and when you need to make that leap.
A colleague of mine today committed a class called ThreadLocalFormat , which basically moved instances of Java Format
classes into a thread local, since they are not thread safe and "relatively expensive" to create. I wrote a quick test and calculated
that I could create 200,000 instances a second, asked him was he creating that many, to which he answered "nowhere near that many".
He's a great programmer and everyone on the team is highly skilled so we have no problem understanding the resulting code, but it
was clearly a case of optimizing where there is no real need. He backed the code out at my request. What do you think? Is this a
case of "premature optimization" and how bad is it really?
design, architecture, optimization, quality-attributes
– Craig Day (community wiki, edited Dec 5 '19 at 3:54)
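For readers who have not seen the pattern being debated, here is a rough Python analogue (a sketch only; the original was Java's ThreadLocal around java.text.Format instances) of caching a non-thread-safe, "expensive" helper per thread; ExpensiveParser is an invented stand-in:

# Sketch of the thread-local caching pattern under discussion: each thread
# lazily builds and then reuses its own instance of a non-thread-safe object.
import threading

class ExpensiveParser:
    """Stand-in for an object that is costly to build and not thread safe."""
    def __init__(self):
        self.state = {}                 # mutable state shared by parse() calls
    def parse(self, text: str) -> list[str]:
        return text.split(",")

_local = threading.local()              # independent attributes per thread

def get_parser() -> ExpensiveParser:
    if not hasattr(_local, "parser"):
        _local.parser = ExpensiveParser()
    return _local.parser

print(get_parser().parse("a,b,c"))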
Alex ,
I think you need to distinguish between premature optimization, and unnecessary optimization. Premature to me suggests 'too early
in the life cycle' whereas unnecessary suggests 'does not add significant value'. IMO, requirement for late optimization implies
shoddy design. – Shane MacLaughlin
Oct 17 '08 at 8:53
2 revs, 2 users 92% , 2014-12-11 17:46:38
345
It's important to keep in mind the full quote:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet
we should not pass up our opportunities in that critical 3%.
What this means is that, in the absence of measured performance issues, you shouldn't optimize just because you think you will get
a performance gain. There are obvious optimizations (like not doing string concatenation inside a tight loop), but anything that
isn't a trivially clear optimization should be avoided until it can be measured.
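As a concrete instance of the "obvious" class of optimization mentioned above, and of measuring rather than guessing, here is a small sketch (not from the original answer) that times repeated string concatenation against a single join:

# Measure instead of guessing: repeated string concatenation vs. a single join.
import timeit

def concat(n: int) -> str:
    s = ""
    for i in range(n):
        s += str(i)            # may build a new string on each iteration
    return s

def join(n: int) -> str:
    return "".join(str(i) for i in range(n))

print("concat:", timeit.timeit(lambda: concat(10_000), number=100))
print("join:  ", timeit.timeit(lambda: join(10_000), number=100))

The exact numbers will vary by machine and interpreter version; the point is that the claim costs a few lines to verify before anyone "optimizes".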
Being from Donald Knuth, I wouldn't be surprised if he had some evidence to back it up. BTW, Src: Structured Programming with
go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p.268.
citeseerx.ist.psu.edu/viewdoc/
– mctylr Mar 1 '10 at 17:57
2 revs, 2 users 90% , 2015-10-06 13:07:11
120
Premature micro-optimizations are the root of all evil, because micro-optimizations leave out context. They almost never behave
the way they are expected to.
What are some good early optimizations in the order of importance:
Architectural optimizations (application structure, the way it is componentized and layered)
Data flow optimizations (inside and outside of application)
Some mid development cycle optimizations:
Data structures, introduce new data structures that have better performance or lower overhead if necessary
Algorithms (now it's a good time to start deciding between quicksort3 and heapsort ;-) )
Some end development cycle optimizations
Finding code hotspots (tight loops that should be optimized)
Profiling based optimizations of computational parts of the code
Micro optimizations can be done now as they are done in the context of the application and their impact can be measured
correctly.
Not all early optimizations are evil; micro-optimizations are evil if done at the wrong time in the development life cycle,
as they can negatively affect architecture, can negatively affect initial productivity, can be irrelevant performance-wise, or
can even have a detrimental effect at the end of development due to different environment conditions.
If performance is a concern (and it always should be), always think big. Performance is about the bigger picture, not about things
like "should I use int or long?". Go top-down when working on performance, instead of bottom-up.
– Pop Catalin (community wiki, edited Oct 6 '15 at 13:07)
Hear, hear! Unconsidered optimization makes code unmaintainable and is often the cause of performance problems. E.g., you multi-thread
a program because you imagine it might help performance, but the real solution would have been multiple processes, which are now
too complex to implement. –
James Anderson May 2 '12 at 5:01
John Mulder , 2008-10-17 08:42:58
45
Optimization is "evil" if it causes:
less clear code
significantly more code
less secure code
wasted programmer time
In your case, it seems like a little programmer time was already spent, the code was not too complex (a guess from your comment
that everyone on the team would be able to understand), and the code is a bit more future proof (being thread safe now, if I understood
your description). Sounds like only a little evil. :)
– John Mulder (community wiki), Oct 17 '08 at 8:42
mattnz ,
Only if the cost, in terms of your bullet points, is greater than the amortized value delivered. Often complexity introduces value,
and in these cases one can encapsulate it such that it passes your criteria. It also gets reused and continues to provide more
value. – Shane MacLaughlin
Oct 17 '08 at 10:36
Michael Shaw , 2020-06-16 10:01:49
42
I'm surprised that this question is 5 years old, and yet nobody has posted more of what Knuth had to say than a couple of sentences.
The couple of paragraphs surrounding the famous quote explain it quite well. The paper being quoted is called
"Structured Programming with go to Statements", and while it's nearly 40 years old, is about a controversy and a
software movement that both no longer exist, and has examples in programming languages that many people have never
heard of, a surprisingly large amount of what it says still applies.
Here's a larger quote (from page 8 of the pdf, page 268 in the original):
The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant.
The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe
this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't
debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is
never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother
making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict
myself to tools that deny me such efficiencies.
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about,
or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong
negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about
97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by
such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is
often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience
of programmers who have been using measurement tools has been that their intuitive guesses fail.
Another good bit from the previous page:
My own programming style has of course changed during the last decade, according to the trends of the times (e.g., I'm not
quite so tricky anymore, and I use fewer go to's), but the major change in my style has been due to this inner loop phenomenon.
I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data
structure (as in the change from Example 1 to Example 2) so that some of the operations can be eliminated. The reasons for
this approach are that: a) it doesn't take long, since the inner loop is short; b) the payoff is real; and c) I can then afford
to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.
I've often seen this quote used to justify obviously bad code or code that, while its performance has not been measured, could
probably be made faster quite easily, without increasing code size or compromising its readability.
In general, I do think early micro-optimizations may be a bad idea. However, macro-optimizations (things like choosing an O(log
N) algorithm instead of O(N^2)) are often worthwhile and should be done early, since it may be wasteful to write a O(N^2) algorithm
and then throw it away completely in favor of a O(log N) approach.
Note the words may be : if the O(N^2) algorithm is simple and easy to write, you can throw it away later without much guilt
if it turns out to be too slow. But if both algorithms are similarly complex, or if the expected workload is so large that you
already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload
in the long run.
Thus, in general, I think the right approach is to find out what your options are before you start writing code, and consciously
choose the best algorithm for your situation. Most importantly, the phrase "premature optimization is the root of all evil" is
no excuse for ignorance. Career developers should have a general idea of how much common operations cost; they should know, for
example,
that strings cost more than numbers
that dynamic languages are much slower than statically-typed languages
the advantages of array/vector lists over linked lists, and vice versa
when to use a hashtable, when to use a sorted map, and when to use a heap
that (if they work with mobile devices) "double" and "int" have similar performance on desktops (FP may even be faster)
but "double" may be a hundred times slower on low-end mobile devices without FPUs;
that transferring data over the internet is slower than HDD access, HDDs are vastly slower than RAM, RAM is much slower
than L1 cache and registers, and internet operations may block indefinitely (and fail at any time).
And developers should be familiar with a toolbox of data structures and algorithms so that they can easily use the right tools
for the job.
Having plenty of knowledge and a personal toolbox enables you to optimize almost effortlessly. Putting a lot of effort into
an optimization that might be unnecessary is evil (and I admit to falling into that trap more than once). But when optimization
is as easy as picking a set/hashtable instead of an array, or storing a list of numbers in double[] instead of string[], then
why not? I might be disagreeing with Knuth here, I'm not sure, but I think he was talking about low-level optimization whereas
I am talking about high-level optimization.
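A quick sketch (mine, not the answerer's) of the "nearly free" kind of choice being described, a membership test against a list versus a set:

# "As easy as picking a set instead of an array": same logic, different
# container, very different cost for membership tests.
import timeit

values_list = list(range(100_000))
values_set = set(values_list)

print("list:", timeit.timeit(lambda: 99_999 in values_list, number=200))  # O(n) scan
print("set: ", timeit.timeit(lambda: 99_999 in values_set, number=200))   # O(1) average hash lookup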
Remember, that quote is originally from 1974. In 1974 computers were slow and computing power was expensive, which gave some
developers a tendency to overoptimize, line-by-line. I think that's what Knuth was pushing against. He wasn't saying "don't worry
about performance at all", because in 1974 that would just be crazy talk. Knuth was explaining how to optimize; in short, one
should focus only on the bottlenecks, and before you do that you must perform measurements to find the bottlenecks.
Note that you can't find the bottlenecks until you have written a program to measure, which means that some performance decisions
must be made before anything exists to measure. Sometimes these decisions are difficult to change if you get them wrong. For this
reason, it's good to have a general idea of what things cost so you can make reasonable decisions when no hard data is available.
How early to optimize, and how much to worry about performance depend on the job. When writing scripts that you'll only run
a few times, worrying about performance at all is usually a complete waste of time. But if you work for Microsoft or Oracle and
you're working on a library that thousands of other developers are going to use in thousands of different ways, it may pay to
optimize the hell out of it, so that you can cover all the diverse use cases efficiently. Even so, the need for performance must
always be balanced against the need for readability, maintainability, elegance, extensibility, and so on.
Awesome video, I loved watching it. In my experience, there are many situations where,
like you pointed out, procedural style makes things easier and prevents you from overthinking
and overgeneralizing the problem you are trying to tackle. However, in some cases,
object-oriented programming removes unnecessary conditions and switches that make your code
harder to read, especially in complex game engines where you deal with a bunch of objects
which interact in diverse ways with the environment, other objects and the physics engine. In a
procedural style, a program like this would become an unmanageable clutter of flags,
variables and switch-statements. Therefore, the statement "Object-Oriented Programming is
Garbage" is an unnecessary generalization. Object-oriented programming is a tool programmers
can use - and just like you would not use pliers to get a nail into a wall, you should not
force yourself to use object-oriented programming to solve every problem at hand. Instead,
you use it when it is appropriate and necessary. Nevertheless, I would like to hear how you
would realize such a complex program. Maybe I'm wrong and procedural programming is the best
solution in any case - but right now, I think you need to differentiate situations which
require a procedural style from those that require an object-oriented style.
I have been brainwashed with C++ for 20 years. I have recently switched to ANSI C and my
mind is now free. Not only do I feel free to create designs that are more efficient and elegant,
but I feel in control of what I do.
You make a lot of very solid points. In your refactoring of the Mapper interface to a
type-switch though: what is the point of still using a declared interface here? If you are
disregarding extensibility (which would require adding to the internal type switch, rather
than conforming a possible new struct to an interface) anyway, why not just make Mapper of
type interface{} and add a (failing) default case to your switch?
I recommend installing the GoSublime extension, so your code gets formatted on save and
you can use autocompletion. But looks good enough. But I disagree with large functions. Small
ones are just easier to understand and test.
Being the lead designer of a larger app (2M lines of code as of 3 years ago), I like to
say we use "C+", because C++ breaks down in the real world. I'm happy to use encapsulation when
it fits well, but developers that use OO just for OO's sake get their hands slapped. So in
our app small classes like PhoneNumber and SIN make sense. Large classes like UserInterface
also work nicely (we talk to specialty hardware like forklifts and such). So, it may all be
coded in C++, but basic C developers wouldn't have too much of an issue with most of it. I
don't think OO is garbage; it's just that a lot of people use it in inappropriate ways. When all you
have is a hammer, everything looks like a nail. So if you use OO on everything then you
sometimes end up with garbage.
Loving the series. The hardest part of actually becoming an efficient programmer is
unlearning all the OOP brainwashing. It can be useful for high-level structuring so I've been
starting with C++ then reducing everything into procedural functions and tightly-packed data
structs. Just by doing that I reduced static memory use and compiled program size at least
10-15%+ (which is a lot when you only have 32kb.) And holy damn, nearly 20 years of C and I
never knew you could nest a function within a function, I had to try that right away.
I have a design for a networked audio platform that goes into large buildings (over 11
stories) and can have 250 networked nodes (it uses an E1 style robbed bit networking system)
and 65K addressable points (we implemented 1024 of them for individual control by grouping
them). This system ties to a fire panel at one end with a microphone and speakers at the
other end. You can manually select any combination of points to page to, or the fire panel
can select zones to send alarm messages to. It works in real time with 50mS built in delays
and has access to 12 audio channels. What really puts the frosting on this cake is, the CPU
is an i8051 running at 18MHz and the code is a bit over 200K bytes that took close to 800K
lines of code. In assembler. And it took less than a Year from concept to first installation.
By one designer/coder. The only OOP in this code was when an infinite loop happened or a bug
crept in - "OOPs!"
There's a way of declaring subfunctions in C++ (I don't know if it works in C). I saw it done by my
friend. General idea is to declare a struct inside which a function can be declared. Since
you can declare structs inside functions, you can safely use it as a wrapper for your
function-inside-function declaration. This has been done in MSVC but I believe it will
compile in gcc too.
"Is pixel an object or a group of objects? Is there a container? Do I have to ask a
factory to get me a color?" I literally died there... that's literally the best description
of my programming for the last 5 years.
It's really sad that we are only taught OOP and no other paradigms in our college. When I
discovered programming I had no idea about OOP and it was really easy to build programs, but
then I came across OOP: "how to deconstruct a problem statement into nouns for objects and
verbs for methods", and it really messed up my thinking. I have been struggling for a long
time with how to organize my code on the conceptual level, and only recently I realized that OOP is
the reason for this struggle. Handmade Hero helped a lot to bring me back to the roots of how
programming is done. Remember: never push OOP into areas where it is not needed. You don't have
to model your program as real-world entities, because it's not going to run in the real world; it's
going to run on a CPU!
I lost an entire decade to OOP, and agree with everything Casey said here. The code I
wrote in my first year as a programmer (before OOP) was better than the code I wrote in my
15th year (OOP expert). It's a shame that students are still indoctrinated into this
regressive model.
Unfortunately, when I first started programming, I encountered nothing but tutorials that
jumped right into OOP like it was the only way to program. And of course I didn't know any
better! So much friction has been removed from my process since I've broken free from that
state of mind. It's easier to judge when objects are appropriate when you don't think they're
always appropriate!
"It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the
fundamental particle of computing that some people want it to be. When blindly applied to
problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet
there's often an aesthetic insistence on objects for everything all the way down. That's too
bad, because it makes it harder to identify the cases where an object-oriented style truly
results in an overall simplicity and ease of understanding." -
https://prog21.dadgum.com/156.html
The first language I was taught was Java, so I was taught OOP from the get go. Removing
the OOP mindset was actually really easy, but what was left stuck in my head is the practice
of having small functions and making your code look artificially "clean". So I am in a constant
struggle of refactoring and not refactoring, knowing that over-refactoring will unnecessarily
complicate my codebase if it gets big. Even after removing my OOP mindset, my emphasis is
still on the code itself, and that is much harder to cure in comparison.
"I want to emphasize that the problem with object-oriented programming is not the concept
that there could be an object. The problem with it is the fact that you're orienting your
program, the thinking, around the object, not the function. So it's the orientation that's
bad about it, NOT whether you end up with an object. And it's a really important distinction
to understand."
Nicely stated, HH. On youtube, MPJ, Brian Will, and Jonathan Blow also address this
matter. OOP sucks and can be largely avoided. Even "reuse" is overdone. Straight-line code probably
results in faster execution but slightly greater memory use. But memory is cheap and the
resulting code is much easier to follow. Learn a little assembly language. x86 is fascinating
and you'll know what the computer is actually doing.
I think schools should teach at least 3 languages / paradigms, C for Procedural, Java for
OOP, and Scheme (or any Lisp-style languages) for Functional paradigms.
It sounds to me like you're describing the JavaScript framework programming that people start
out learning. It hasn't seemed to me like object-oriented programmers who aren't doing web
stuff have any problem directly describing an algorithm and then translating it into
imperative or functional or just direct instructions for a computer. it's quite possible to
use object-oriented languages or languages that support object-oriented stuff to directly
command a computer.
I dunno man. Object oriented programming can (sometimes badly) solve real problems -
notably polymorphism. For example, if you have a Dog and a Cat sprite and they both have a
move method. The "non-OO" way Casey does this is using tagged unions - and that was not an
obvious solution when I first saw it. Quite glad I watched that episode though, it's very
interesting! Also see this tweet thread from Casey -
https://twitter.com/cmuratori/status/1187262806313160704
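To make the contrast concrete, here is a sketch in Python rather than Casey's C, with invented Dog/Cat details: the same "move" behaviour written once with polymorphic classes and once with a tagged union plus explicit dispatch:

# Two ways to dispatch "move": classic polymorphism vs. a tagged union.
from dataclasses import dataclass
from enum import Enum, auto

# 1. OO polymorphism: the behaviour lives in the classes.
class Dog:
    def move(self, dt: float) -> str:
        return f"dog runs for {dt}s"

class Cat:
    def move(self, dt: float) -> str:
        return f"cat sneaks for {dt}s"

# 2. Tagged union: one plain data record plus explicit dispatch.
class Kind(Enum):
    DOG = auto()
    CAT = auto()

@dataclass
class Sprite:
    kind: Kind
    x: float = 0.0

def move(sprite: Sprite, dt: float) -> str:
    if sprite.kind is Kind.DOG:
        return f"dog runs for {dt}s"
    if sprite.kind is Kind.CAT:
        return f"cat sneaks for {dt}s"
    raise ValueError(sprite.kind)

print(Dog().move(0.016), "|", move(Sprite(Kind.CAT), 0.016))

Neither version is universally better; the tagged-union form keeps all the dispatch logic in one visible place, while the class form lets new types be added without touching existing code.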
My deepest feeling after crossing so many discussions and books about this is a sincere
YES.
Without entering into any technical details about it, because even after some years I
don't find myself qualified to talk about this (is there someone who really understands
it completely?), I would argue that the main problem is that every time I read something
about OOP, it is trying to justify why it is "so good".
Then, a huge amount of examples are shown, many arguments, and many expectations are
created.
It is not stated simply like this: "oh, this is another programming paradigm."
It is usually stated like this: "This is a fantastic paradigm, it is better, it is simpler,
it permits so many interesting things ... it is this, it is that ..." and so on.
What happens is that, based on the "good" arguments, it creates the
expectation that things produced with OOP should be very good. But no one really knows if
they are doing it right. They say: the problem is not the paradigm, it is you who are not
experienced yet. When will I be experienced enough?
Are you following me? My feeling is that being constantly told how good it is, while
never knowing how good you are actually being at it, makes all of us very frustrated and
confused.
Yes, it is a great paradigm, as long as you see it as just another paradigm and drop all the
expectations and the excessive claims that it is so good.
It seems to me that the great problem is the huge propaganda around it, not the paradigm
itself. Again, if it made more humble claims about its advantages and about how difficult they are to
achieve, people would be much less frustrated.
Sourav Datta, a programmer trying to find the ultimate source code of life.
Answered August 6, 2015 · Author has 145 answers and 292K answer views
In recent years, OOP has indeed come to be regarded as an overrated paradigm by many. If we look
at the most recent famous languages like Go and Rust, they do not have the traditional OO
approaches in language design. Instead, they choose to pack data into something akin to
structs in C and provide ways to specify "protocols" (similar to interfaces/abstract methods) which can work on those packed
data...
The last decade has seen object oriented programming (OOP) dominate the programming world.
While there is no doubt that there are benefits of OOP, some programmers question whether OOP
has been overrated and ponder whether alternate styles of coding are worth pursuing. To even
suggest that OOP has in some way failed to produce the quality software we all desire could in
some instances cost a programmer his job, so why even ask the question?
Quality software is the goal.
Likely all programmers can agree that we all want to produce quality software. We would like
to be able to produce software faster, make it more reliable and improve its performance. So
with such goals in mind, shouldn't we be willing to at least consider all possibilities? Also
it is reasonable to conclude that no single tool can match all situations. For example, while
few programmers today would even consider using assembler, there are times when low level
coding such as assembler could be warranted. The old adage applies "the right tool for the
job". So it is fair to pose the question, "Has OOP been over used to the point of trying to
make it some kind of universal tool, even when it may not fit a job very well ?"
Others are asking the same question.
I won't go into detail about what others have said about object oriented programming, but I
will simply post some links to some interesting comments by others about OOP.
I have watched a number of videos online and read a number of articles by programmers about
different concepts in programming. When OOP is discussed they talk about things like modeling
the real world, abstractions, etc. But two things are often missing in such discussions, which I
will discuss here. These two aspects greatly affect programming, but may not be discussed.
First is, what is programming really ? Programming is a method of using some kind of human
readable language to generate machine code (or scripts eventually read by machine code) so one
can make a computer do a task. Looking back at all the years I have been programming, the most
profound thing I have ever learned about programming was machine language. Seeing what a CPU is
actually doing with our programs provides a great deal of insight. It helps one understand why
integer arithmetic is so much faster than floating point. It helps one understand what graphics
is really all about (simply moving around a lot of pixels, or blocks of four bytes). It
helps one understand what a procedure really must do to have parameters passed. It helps one
understand why a string is simply a block of bytes (or double bytes for unicode). It helps one
understand why we use bytes so much and what bit flags are and what pointers are.
When one looks at OOP from the perspective of machine code and all the work a compiler must
do to convert things like classes and objects into something the machine can work with, then
one very quickly begins to see that OOP adds significant overhead to an application. Also if a
programmer comes from a background of working with assembler, where keeping things simple is
critical to writing maintainable code, one may wonder if OOP is improving coding or making it
more complicated.
Second, is the often said rule of "keep it simple". This applies to programming. Consider
classic Visual Basic. One of the reasons it was so popular was that it was so simple compared
to other languages, say C for example. I know what is involved in writing a pure old fashioned
WIN32 application using the Windows API and it is not simple, nor is it intuitive. Visual Basic
took much of that complexity and made it simple. Now Visual Basic was sort of OOP based, but
actually mostly in the GUI command set. One could actually write all the rest of the code using
purely procedural style code and likely many did just that. I would venture to say that when
Visual Basic went the way of dot.net, it left behind many programmers who simply wanted to keep
it simple. Not that they were poor programmers who didn't want to learn something new, but that
they knew the value of simple and taking that away took away a core aspect of their programming
mindset.
Another aspect of simple is also seen in the syntax of some programming languages. For
example, BASIC has stood the test of time and continues to be the language of choice for many
hobby programmers. If you don't think that BASIC is still alive and well, take a look at this
extensive list of different BASIC programming languages.
While some of these BASICs are object oriented, many of them are also procedural in nature.
But the key here is simplicity. Natural readable code.
Simple and low level can work together.
Now consider this. What happens when you combine a simple language with the power of machine
language ? You get something very powerful. For example, I write some very complex code using
purely procedural style coding, using BASIC, but you may be surprised that my appreciation for
machine language (or assembler) also comes to the fore. For example, I use the BASIC language
GOTO and GOSUB. How some would cringe to hear this. But these constructs are native to machine
language and very useful, so when used properly they are powerful even in a high level
language. Another example is that I like to use pointers a lot. Oh how powerful pointers are.
In BASIC I can create variable length strings (which are simply a block of bytes) and I can
embed complex structures into those strings by using pointers. In BASIC I use the DIM AT
command, which allows me to dimension an array of any fixed data type or structure within a
block of memory, which in this case happens to be a string.
Appreciating machine code also affects my view of performance. Every CPU cycle counts. This
is one reason I use BASICs GOSUB command. It allows me to write some reusable code within a
procedure, without the need to call an external routine and pass parameters. The performance
improvement is significant. Performance also affects how I tackle a problem. While I want code
to be simple, I also want it to run as fast as possible, so amazingly some of the best
performance tips have to do with keeping code simple, with minimal overhead and also
understanding what the machine code must accomplish to do with what I have written in a higher
level language. For example in BASIC I have a number of options for the SELECT CASE structure.
One option can optimize the code using jump tables (compiler handles this), one option can
optimize if the values are only Integers or DWords. But even then the compiler can only do so
much. What happens if a large SELECT CASE has to compare dozens and dozens of string constants
to a variable length string being tested ? If this code is part of a parser, then it really can
slow things down. I had this problem in a scripting language I created for an OpenGL based 3D
custom control. The 3D scripting language is text based and has to be interpreted to generate
3D OpenGL calls internally. I didn't want the scripting language to bog things down. So what
would I do ?
The solution was simple and appreciating how the compiled machine code would have to compare
so many bytes in so many string constants, one quickly realized that the compiler alone could
not solve this. I had to think like I was an assembler programmer, but still use a high level
language. The solution was so simple, it was surprising. I could use a pointer to read the
first byte of the string being parsed. Since the first character would always be a letter in
the scripting language, this meant there were 26 possible outcomes. The SELECT CASE simply
tested for the first character value (convert to a number) which would execute fast. Then for
each letter (A,B,C, ) I would only compare the parsed word to the scripting language keywords
which started with that letter. This in essence improved speed by 26 fold (or better).
The fastest solutions are often very simple to code. No complex classes needed here. Just a
simple procedure to read through a text string using the simplest logic I could find. The
procedure is a little more complex than what I describe, but this is the core logic of the
routine.
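The same first-character dispatch can be sketched outside BASIC; the following Python is illustrative only, with made-up scripting keywords, but it shows why one cheap character test cuts the full string comparisons roughly 26-fold:

# First-character dispatch for keyword lookup, as described above: group the
# keywords by their first letter once, then compare a parsed word only against
# the keywords sharing that letter.  The keyword list is invented.
from collections import defaultdict

KEYWORDS = ["BOX", "BALL", "CAMERA", "COLOR", "LIGHT", "LINE", "SPHERE", "SCALE"]

by_first = defaultdict(list)
for kw in KEYWORDS:
    by_first[kw[0]].append(kw)

def is_keyword(word: str) -> bool:
    if not word:
        return False
    word = word.upper()
    # One cheap character comparison narrows the candidate set before any
    # full string comparisons are done.
    return word in by_first.get(word[0], [])

print(is_keyword("sphere"), is_keyword("cube"))   # True False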
From experience, I have found that a purely procedural style of coding, using a language
which is natural and simple (BASIC), while using constructs of the language which are closer to
pure machine (or assembler) in the language produces smaller and faster applications which are
also easier to maintain.
Now I am not saying that all OOP is bad. Nor am I saying that OOP never has a place in
programming. What I am saying though is that it is worth considering the possibility that OOP is
not always the best solution and that there are other choices.
Here are some of my other blog articles which may interest you if this one interested
you:
Classic Visual Basic's end marked a key change in software development.
Yes it is. For application code at least, I'm pretty sure.
Not claiming any originality here, people smarter than me already noticed this fact ages
ago.
Also, don't misunderstand me, I'm not saying that OOP is bad. It probably is the best
variant of procedural programming.
Maybe the term OOP is overused to describe anything that ends up in OO systems.
Things like VMs, garbage collection, type safety, modules, generics or declarative queries
(LINQ) are a given, but they are not inherently object oriented.
I think these things (and others) are more relevant than the classic three principles.
Inheritance
Current advice is usually prefer composition over inheritance . I totally agree.
Polymorphism
This is very, very important. Polymorphism cannot be ignored, but you don't write lots of
polymorphic methods in application code. You implement the occasional interface, but not every
day. Mostly you just use them, because polymorphism is what you need to write reusable
components, and much less to merely use them.
Encapsulation
Encapsulation is tricky. Again, if you ship reusable components, then method-level access
modifiers make a lot of sense. But if you work on application code, such fine grained
encapsulation can be overkill. You don't want to struggle over the choice between internal and
public for that fantastic method that will only ever be called once. Except in test code maybe.
Hiding all implementation details in private members while retaining nice simple tests can be
very difficult and not worth the trouble (InternalsVisibleTo being the least trouble, abstruse
mock objects bigger trouble and Reflection-in-tests Armageddon).
Nice, simple unit tests are just more important than encapsulation for application code, so
hello public!
So, my point is, if most programmers work on applications, and application code is not very
OO, why do we always talk about inheritance at the job interview? 🙂
PS
If you think about it, C# hasn't been pure object oriented since the beginning (think
delegates) and its evolution is a trajectory from OOP to something else, something
multiparadigm.
For example, Microsoft's success was in large part determined by its alliance with IBM
in the creation of the PC, and then by exploiting IBM's ineptness to ride this via shrewd marketing
and alliances and the "natural monopoly" tendencies in IT. MS-DOS was a clone of CP/M that
was bought, extended and skillfully marketed. Zero innovation here.
Both Microsoft and Apple rely on research labs in other companies to produce
innovation, which they then productized and marketed. Even Steve Jobs' smartphone was not
an innovation per se: it was just a slick form factor that was the most successful in the
market. All the functionality existed in other products.
Facebook was a prelude to, and has given the world a glimpse into, the future.
From a purely technical POV Facebook is mostly junk. It is a tremendous database of user
information which users supply themselves due to cultivated exhibitionism; a kind of private
intelligence company. The mere fact that the software was written in PHP tells you something
about Zuckerberg's real level.
Amazon created a usable interface for shopping via the Internet (creating a comments
infrastructure and a usable user account database), but this is not innovation in any
sense of the word. It prospered by stealing a large part of Walmart's logistics software
(and people) and using Walmart's tricks with suppliers. So Bezos' model was a Walmart
clone on the Internet.
Unless something is done, Bezos will soon be the most powerful man in the world.
People like Bezos, the Google founders, and to a certain extent Zuckerberg are part of
the intelligence agencies' infrastructure. Remember PRISM. So implicitly we can assume that
they all report to the head of the CIA.
Artificial Intelligence, AI, is another consequence of this era of innovation that
demands our immediate attention.
There is very little intelligence in artificial intelligence :-). The intelligent behavior
of robots is mostly an illusion created by Clarke's first law:
If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are
global. You don't have to use it in all cases (as someone here incorrectly claims): if a name referenced in an expression cannot
be found in the local scope or in the scopes of the enclosing functions, it is looked up among global variables.
However, if you assign to a name not declared as global in the function, it is implicitly treated as local, and it can
shadow any existing global variable with the same name.
Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP
is overkill.
Absolutely re. zealots. Most Python users use it for scripting and create little functions to separate out small bits of code.
– Paul Uszak Sep 22 at 22:57
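A minimal sketch of the scoping behaviour described above (the variable and function names are invented):

# Reading a global works without any declaration; assigning creates a new
# local name unless it is declared global.
counter = 0

def read_only():
    return counter + 1          # falls back to the global "counter"

def shadowing():
    counter = 100               # implicitly local, shadows the global
    return counter

def increment():
    global counter              # required because we assign to it
    counter += 1
    return counter

print(read_only(), shadowing(), increment(), counter)   # 1 100 1 1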
The OOP paradigm has been criticised for a number of reasons, including not meeting its
stated goals of reusability and modularity, [36][37]
and for overemphasizing one aspect of software design and modeling (data/objects) at the
expense of other important aspects (computation/algorithms). [38][39]
Luca Cardelli has
claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take
longer to compile, and that OOP languages have "extremely poor modularity properties with
respect to class extension and modification", and tend to be extremely complex. [36]
The latter point is reiterated by Joe Armstrong , the principal
inventor of Erlang , who is quoted as
saying: [37]
The problem with object-oriented languages is they've got all this implicit environment
that they carry around with them. You wanted a banana but what you got was a gorilla holding
the banana and the entire jungle.
A study by Potok et al. has shown no significant difference in productivity between OOP and
procedural approaches. [40]
Christopher J.
Date stated that critical comparison of OOP to other technologies, relational in
particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP;
[41]
however, Date and Darwen have proposed a theoretical foundation on OOP that uses OOP as a kind
of customizable type
system to support RDBMS .
[42]
In an article Lawrence Krubner claimed that compared to other languages (LISP dialects,
functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden
of unneeded complexity. [43]
I find OOP technically unsound. It attempts to decompose the world in terms of interfaces
that vary on a single type. To deal with the real problems you need multisorted algebras --
families of interfaces that span multiple types. I find OOP philosophically unsound. It
claims that everything is an object. Even if it is true it is not very interesting -- saying
that everything is an object is saying nothing at all.
Paul Graham has suggested
that OOP's popularity within large companies is due to "large (and frequently changing) groups
of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one
programmer from "doing too much damage". [44]
Leo Brodie has suggested a connection between the standalone nature of objects and a
tendency to duplicate
code[45] in
violation of the don't repeat yourself principle
[46] of
software development.
Object Oriented Programming puts the Nouns first and foremost. Why would you go to such
lengths to put one part of speech on a pedestal? Why should one kind of concept take
precedence over another? It's not as if OOP has suddenly made verbs less important in the way
we actually think. It's a strangely skewed perspective.
Rich Hickey ,
creator of Clojure ,
described object systems as overly simplistic models of the real world. He emphasized the
inability of OOP to model time properly, which is getting increasingly problematic as software
systems become more concurrent. [39]
Eric S. Raymond
, a Unix programmer and
open-source
software advocate, has been critical of claims that present object-oriented programming as
the "One True Solution", and has written that object-oriented programming languages tend to
encourage thickly layered programs that destroy transparency. [48]
Raymond compares this unfavourably to the approach taken with Unix and the C programming language .
[48]
Rob Pike , a programmer
involved in the creation of UTF-8 and Go , has called object-oriented
programming "the Roman
numerals of computing" [49] and has
said that OOP languages frequently shift the focus from data structures and algorithms to types . [50]
Furthermore, he cites an instance of a Java professor whose
"idiomatic" solution to a problem was to create six new classes, rather than to simply use a
lookup table .
[51]
For efficiency's sake, Objects are passed to functions NOT by value but by
reference.
What that means is that the caller does not pass a copy of the Object, but instead passes a
reference or pointer to the Object.
If an Object is passed by reference to an Object Constructor, the constructor can put that
Object reference in a private variable which is protected by Encapsulation.
But the passed Object is NOT safe!
Why not? Because some other piece of code has a pointer to the Object, viz. the code that
called the Constructor. It MUST have a reference to the Object otherwise it couldn't pass it to
the Constructor?
The Reference Solution
The Constructor will have to Clone the passed in Object. And not a shallow clone but a deep
clone, i.e. every object that is contained in the passed in Object and every object in those
objects and so on and so on.
So much for efficiency.
And here's the kicker. Not all objects can be Cloned. Some have Operating System resources
associated with them making cloning useless at best or at worst impossible.
And EVERY single mainstream OO language has this problem.
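A small Python sketch of the leak being described, with invented class and field names: the caller keeps a reference to the object handed to the constructor, so "encapsulation" does not protect it unless the constructor makes a deep copy:

# The "passed object is not safe" problem, plus the deep-copy workaround.
import copy

class Account:
    def __init__(self, tags):
        self._tags = tags                      # keeps the caller's object

class SafeAccount:
    def __init__(self, tags):
        self._tags = copy.deepcopy(tags)       # defensive deep clone

shared = {"tier": "gold", "limits": {"daily": 100}}
a = Account(shared)
b = SafeAccount(shared)

shared["limits"]["daily"] = 0                  # caller still holds a reference...
print(a._tags["limits"]["daily"])              # 0   -> the "private" state changed
print(b._tags["limits"]["daily"])              # 100 -> the deep copy was unaffected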
The Washington Post
Simon Denyer; Akiko Kashiwagi; Min Joo Kim
July 8, 2020
In Japan, a country with a long fascination with robots, automated assistants have offered
their services as bartenders, security guards, deliverymen, and more, since the onset of the
coronavirus pandemic. Japan's Avatarin developed the "newme" robot to allow people to be
present while maintaining social distancing during the pandemic.
The telepresence robot is
essentially a tablet on a wheeled stand with the user's face on the screen, whose location and
direction can be controlled via laptop or tablet. Doctors have used the newme robot to
communicate with patients in a coronavirus ward, while university students in Tokyo used it to
remotely attend a graduation ceremony.
The company is working on prototypes that will allow
users to control the robot through virtual reality headsets, and gloves that would permit users
to lift, touch, and feel objects through a remote robotic hand.
A robot that neutralizes aerosolized forms of the coronavirus could soon be coming to a supermarket
near you. MIT's Computer Science and Artificial Intelligence Laboratory team partnered with Ava
Robotics to develop a device that can kill roughly 90% of COVID-19 on surfaces in a 4,000-square-foot
space in 30 minutes.
"This is such an exciting idea to use the solution as a hands-free, safe way to neutralize dorms,
hallways, hospitals, airports -- even airplanes," Daniela Rus, director of the Computer Science and
Artificial Intelligence Laboratory at MIT, told Yahoo Finance's
"The
Ticker."
The key to disinfecting large spaces in a short amount of time is the UV-C light fixture designed
at MIT. It uses short-wavelength ultraviolet light that eliminates microorganisms by breaking down
their DNA. The UV-C light beam is attached to Ava Robotics' mobile base and can navigate a warehouse
in a similar way to a self-driving car.
"The robot is controlled by some powerful algorithms that compute exactly where the robot has to go
and how long it has to stay in order to neutralize the germs that exist in that particular part of the
space," Rus said.
Currently, the robot is being tested at the Greater Boston Food Bank's shipping area and focuses on
sanitizing products leaving the stockroom to reduce any potential threat of spreading the coronavirus
into the community.
"Here, there was a unique opportunity to provide additional disinfecting power to their current
workflow, and help reduce the risks of COVID-19 exposure," said Alyssa Pierson, CSAIL research
scientist and technical lead of the UV-C lamp assembly.
But Rus explains implementing the robot in other locations does face some challenges. "The light
emitted by the robot is dangerous to humans, so the robot cannot be in the same space as humans. Or,
if people are around the robot, they have to wear protective gear," she added.
While Rus didn't provide a specific price tag, she said the cost of the robot is still high, which may
be a hurdle for broad distribution. In the future, "Maybe you don't need to buy an entire robot set,
you can book the robot for a few hours a day to take care of your space," she said.
During the pandemic, readers may recall several of our pieces describing what life would be
like in a post corona world.
From restaurants to
flying to gambling to hotels to
gyms to interacting with people to even
housing trends - we highlighted how social distancing would transform the economy.
As the transformation becomes more evident by the week, we want to focus on automation and
artificial intelligence - and how these two things are allowing hotels, well at least one in
California, to accommodate patrons with contactless room service.
Hotel Trio in Healdsburg, California, surrounded by wineries and restaurants in the
Healdsburg/Sonoma County region, recently hired a new worker named "Rosé the Robot" that
delivers food, water, wine, beer, and other necessities, reported Sonoma Magazine.
"As Rosé approaches a room with a delivery, she calls the phone to let the guest know
she's outside. A tablet-sized screen on Rosé's head greets the guest as they open the
door, and confirms the order. Next, she opens a lid on top of her head and reveals a storage
compartment containing the ordered items. Rosé then communicates a handful of questions
surrounding customer satisfaction via her screen. She bids farewell, turns around and as she
heads back toward her docking station near the front desk, she emits chirps that sound like a
mix between R2D2 and a little bird," said Sonoma Magazine.
Henry Harteveldt, a travel industry analyst at Atmospheric Research Group in San Francisco,
said robots would be integrated into the hotel experience.
"This is a part of travel that will see major growth in the years ahead," Harteveldt
said.
Rosé is manufactured by Savioke, a San Jose-based company that has dozens of robots
in hotels nationwide.
The tradeoff of a contactless environment where automation and artificial intelligence
replace humans to mitigate the spread of a virus is permanent
job loss .
Recently I read
Sapiens: A Brief History of Humankind
by Yuval Harari. The basic thesis of the book is that humans require 'collective fictions' so that we can collaborate in larger numbers
than the 150 or so our brains are big enough to cope with by default. Collective fictions are things that don't describe solid objects
in the real world we can see and touch. Things like religions, nationalism, liberal democracy, or Popperian falsifiability in science.
Things that don't exist, but when we act like they do, we easily forget that they don't.
Collective Fictions in IT – Waterfall
This got me thinking about some of the things that bother me today about the world of software engineering. When I started in
software 20 years ago, God was waterfall. I joined a consultancy (ca. 400 people) that wrote very long specs which were honed to
within an inch of their life, down to the individual Java classes and attributes. These specs were submitted to the customer (God
knows what they made of it), who signed it off. This was then built, delivered, and monies were received soon after. Life was simpler
then and everyone was happy.
Except there were gaps in the story – customers complained that the spec didn't match the delivery, and often the product delivered
would not match the spec, as 'things' changed while the project went on. In other words, the waterfall process was a 'collective
fiction' that gave us enough stability and coherence to collaborate, get something out of the door, and get paid.
This consultancy went out of business soon after I joined. No conclusions can be drawn from this.
Collective Fictions in IT – Startups ca. 2000
I got a job at another software development company that had a niche with lots of work in the pipe. I was employee #39. There
was no waterfall. In fact, there was nothing in the way of methodology I could see at all. Specs were agreed with a phone call. Design,
prototype and build were indistinguishable. In fact it felt like total chaos; it was against all of the precepts of my training.
There was more work than we could handle, and we got on with it.
The fact was, we were small enough not to need a collective fiction we had to name. Relationships and facts could be kept in our
heads, and if you needed help, you literally called out to the room. The tone was like this, basically:
Of course there were collective fictions, we just didn't name them:
We will never have a mission statement
We don't need HR or corporate communications, we have the pub (tough luck if you have a family)
We only hire the best
We got slightly bigger, and customers started asking us what our software methodology was. We guessed it wasn't acceptable to
say 'we just write the code' (legend had it our C-based application server – still in use and blazingly fast – was written before
my time in a fit of pique with a stash of amphetamines over a weekend).
Turns out there was this thing called 'Rapid Application Development' that emphasized prototyping. We told customers we did RAD,
and they seemed happy, as it was A Thing. It sounded to me like 'hacking', but to be honest I'm not sure anyone among us really properly
understood it or read up on it.
As a collective fiction it worked, because it kept customers off our backs while we wrote the software.
Soon we doubled in size, moved out of our cramped little office into a much bigger one with bigger desks, and multiple floors.
You couldn't shout out your question to the room anymore. Teams got bigger, and these things called 'project managers' started appearing
everywhere talking about 'specs' and 'requirements gathering'. We tried and failed to rewrite our entire platform from scratch.
Yes, we were back to waterfall again, but this time the working cycles were faster and smaller, and we had the same problems of
changing requirements and disputes with customers as before. So was it waterfall? We didn't really know.
Collective Fictions in IT – Agile
I started hearing the word 'Agile' about 2003. Again, I don't think I properly read up on it ever, actually. I got snippets here
and there from various websites I visited and occasionally from customers or evangelists that talked about it. When I quizzed people
who claimed to know about it their explanations almost invariably lost coherence quickly. The few that really had read up on it seemed
incapable of actually dealing with the very real pressures we faced when delivering software to non-sprint-friendly customers, timescales,
and blockers. So we carried on delivering software with our specs, and some sprinkling of agile terminology. Meetings were called
'scrums' now, but otherwise it felt very similar to what went on before.
As a collective fiction it worked, because it kept customers and project managers off our backs while we wrote the software.
Since then I've worked in a company that grew to 700 people, and now work in a corporation of 100K+ employees, but the pattern
is essentially the same: which incantation of the liturgy will satisfy this congregation before me?
Don't You Believe?
I'm not going to beat up on any of these paradigms, because what's the point? If software methodologies didn't exist we'd have
to invent them, because how else would we work together effectively? You need these fictions in order to function at scale. It's
no coincidence that the Agile paradigm has such a quasi-religious hold over a workforce that is immensely fluid and mobile. (If you
want to know what I really think about software development methodologies, read
this because it lays
it out much better than I ever could.)
One of many interesting arguments in Sapiens is that because these collective fictions can't adequately explain the world, and
often conflict with each other, the interesting parts of a culture are those where these tensions are felt. Often, humour derives
from these tensions.
'The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the
ability to function.' F. Scott Fitzgerald
I don't know about you, but I often feel this tension when discussion of Agile goes beyond a small team. When I'm told in a motivational
poster written by someone I've never met and who knows nothing about my job that I should 'obliterate my blockers', and those blockers
are both external and non-negotiable, what else can I do but laugh at it?
How can you be agile when there are blockers outside your control at every turn? Infrastructure, audit, security, financial planning,
financial structures all militate against the ability to quickly deliver meaningful iterations of products. And who is the customer
here, anyway? We're talking about the square of despair:
When I see diagrams like this representing Agile I can only respond with black humour shared with my colleagues, like kids giggling
at the back of a church.
When within a smaller and well-functioning team, the totems of Agile often fly out of the window and what you're left
with (when it's good) is a team that trusts each other, is open about its trials, and has a clear structure (formal or informal)
in which agreement and solutions can be found and co-operation is productive. Google recently articulated this (reported briefly
here , and more in-depth
here ).
So Why Not Tell It Like It Is?
You might think the answer is to come up with a new methodology that's better. It's not like we haven't tried:
It's just not that easy, like the book says:
'Telling effective stories is not easy. The difficulty lies not in telling the story, but in convincing everyone else to believe
it. Much of history revolves around this question: how does one convince millions of people to believe particular stories about gods,
or nations, or limited liability companies? Yet when it succeeds, it gives Sapiens immense power, because it enables millions of
strangers to cooperate and work towards common goals. Just try to imagine how difficult it would have been to create states, or churches,
or legal systems if we could speak only about things that really exist, such as rivers, trees and lions.'
Let's rephrase that:
'Coming up with useful software methodologies is not easy. The difficulty lies not in defining them, but in convincing others
to follow it. Much of the history of software development revolves around this question: how does one convince engineers to believe
particular stories about the effectiveness of requirements gathering, story points, burndown charts or backlog grooming? Yet when
adopted, it gives organisations immense power, because it enables distributed teams to cooperate and work towards delivery. Just
try to imagine how difficult it would have been to create Microsoft, Google, or IBM if we could only speak about specific technical
challenges.'
Anyway, does the world need more methodologies? It's not like some very smart people haven't already thought about this.
Acceptance
So I'm cool with it. Lean, Agile, Waterfall, whatever, the fact is we need some kind of common ideology to co-operate in large
numbers. None of them are evil, so it's not like you're picking racism over socialism or something. Whichever one you pick is not
going to reflect the reality, but if you expect perfection you will be disappointed. And watch yourself for unspoken or unarticulated
collective fictions. Your life is full of them. Like that your opinion is important. I can't resist quoting this passage from Sapiens
about our relationship with wheat:
'The body of Homo sapiens had not evolved for [farming wheat]. It was adapted to climbing apple trees and running after gazelles,
not to clearing rocks and carrying water buckets. Human spines, knees, necks and arches paid the price. Studies of ancient skeletons
indicate that the transition to agriculture brought about a plethora of ailments, such as slipped discs, arthritis and hernias. Moreover,
the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely
changed their way of life. We did not domesticate wheat. It domesticated us. The word 'domesticate' comes from the Latin domus, which
means 'house'. Who's the one living in a house? Not the wheat. It's the Sapiens.'
Maybe we're not here to direct the code, but the code is directing us. Who's the one compromising reason and logic to grow code?
Not the code. It's the Sapiens.
"And watch yourself for unspoken or unarticulated collective fictions. Your life is full of them."
Agree completely.
As for software development methodologies, I personally think that with a few tweaks the waterfall methodology could work quite
well. The key changes I'd suggest are to introduce developer guidance at the planning stage, including timeboxed explorations of
the feasibility of the proposals, and to aim for specs that outline business requirements rather than dictating how they should
be implemented.
Reply
A very entertaining article! I have a similar experience and outlook. I've not tried LEAN. I once heard a senior developer
say that methodologies were just a stick with which to beat developers. This was largely in the case of clients who agree to engage
in whatever process when amongst business people and are then absent at grooming, demos, releases, feedback meetings and so on.
When the software is delivered at progressively shorter notice, it's always the developer who has to carry the burden of ensuring
quality, feeling keenly responsible for the work they do (the conscientious ones anyway). Then non-technical management hides behind
the process, and the failure to have the client fully engaged is quickly forgotten.
It reminds me (I'm rambling now, sorry) of factory workers in the 80s complaining about working conditions and the management
nodding and smiling while doing nothing to rectify the situation and doomed to repeat the same error. Except now the workers are
intelligent and will walk, taking their business knowledge and skill set with them.
Reply
Very enjoyable. I had a stab at the small sub-trail of 'syntonicity' here:
http://www.scidata.ca/?p=895
Syntonicity is Stuart Watt's term which he probably got from Seymour Papert.
Of course, this may all become moot soon as our robot overlords take their place at the keyboard.
Reply
A great article! I was very much inspired by Yuval's book myself. So much so that I wrote a post about DevOps being a collective
fiction: http://otomato.link/devops-is-a-myth/
Basically same ideas as yours but from a different angle.
Reply
I think part of the "need" for methodology is the desire for a common terminology. However, if everyone has their own view
of what these terms mean, then it all starts to go horribly wrong. The focus quickly becomes adhering to the methodology rather
than getting the work done.
Reply
A very well-written article. I retired from corporate development in 2014 but am still developing my own projects. I have written
on this very subject and these pieces have been published as well.
The idea that the Waterfall technique for development was the only one in use as we go back towards the earlier years is a
myth that has been built up by the folks who have been promoting the Agile technique, which for seniors like me has been just
another word for what we used to call "guerrilla programming". In fact, if one were to review the standards of design in software
engineering, there are 13 types of design techniques, all of which have been used at one time or another by many different companies
successfully. Waterfall was just one of them and was only recommended for very large projects.
The author is correct to conclude by implication that the best technique for design and implementation is the RAD technique
promoted by Stephen McConnell of Construx and a team that can work well with each other. His book, still in its first edition since
1996, is considered the Bible for software development and describes every aspect of software engineering one could require.
However, his book is only offered as a guide from which engineers can pick what they really need for the development of their
projects, not as hard standards. Nonetheless, McConnell stresses the need for good specifications and risk management; neglecting
the latter always causes a project to fail or produces less than satisfactory results. His work is backed by over 35 years of
research.
Reply
Hilarious and oh so true. Remember the first time you were being taught Agile and they told you that the stakeholders would
take responsibility for their roles and decisions? What a hoot! Seriously, I guess they did use to write detailed specs, but in
my twenty-some years, I've just been thrilled if I had a business analyst who knew what they wanted.
Reply
OK, here's a collective fiction for you. "Methodologies don't work. They don't reflect reality. They are just something we
tell customers because they are appalled when we admit that our software is developed in a chaotic and unprofessional manner."
This fiction serves those people who already don't like process, and gives them excuses.
We do things the same way over and over for a reason. We have traffic lights because they reduce congestion and traffic
fatalities. We make cakes using a recipe because we like it when the result is consistently pleasing. So too with software methodologies.
Like cake recipes, not all software methodologies are equally good at producing a consistently good result. This fact alone should
tell you that there is something of value in the best ones. While there may be a very few software chefs who can whip up a perfect
result every time, the vast bulk of developers need a recipe to follow or the results are predictably bad.
Your diatribe against process does the community a disservice.
Reply
I have arrived at the conclusion that any and all methodologies would work – IF (and it's a big one), everyone managed to arrive
at a place where they considered the benefit of others before themselves. And, perhaps, they all used the same approach.
For me, it comes down to character rather than anything else. I can learn the skills or trade a chore with someone else.
Software developers, the ones who create "new stuff", by definition have no roadmap. They have experience, good judgment,
the ability to 'survive in the wild', are always wanting to "see what is over there", and trust, as was noted, is key. And there
are varying levels of developer. Some want to build the roads; others use the roads built for them and some want to survey for
the road yet to be built. None of these are wrong – or right.
The various methodology fights are like arguing over what side of the road to drive on, how to spell colour and color. Just
pick one, get over yourself and help your partner(s) become successful.
Ah, right. Where do the various methodologies resolve greed, envy, distrust, selfishness, stepping on others for personal gain,
and all of the other REAL killers of success, again?
I have seen great teams succeed and far too many fail. Those that have failed more often than not did so for character-related
issues rather than technical ones.
Reply
Before there exists any success, a methodology must freeze a definition for roles, as well as process. Unless there exist sufficient
numbers and specifications of roles, and appropriate numbers of sapiens to hold those roles, then the one on the end becomes overburdened
and triggers systemic failure.
There has never been a sufficiently-complex methodology that could encompass every field, duty, and responsibility in a software
development task. (This is one of the reasons "chaos" is successful. At least it accepts the natural order of things, and works
within the interstitial spaces of a thousand objects moving at once.)
We even lie to ourselves when we name what we're doing: Methodology. It sounds so official, so logical, so orderly. That's
a myth. It's just a way of pushing the responsibility down from the most powerful to the least powerful -- every time.
For every "methodology," who is the caboose on the end of this authority train? The "coder."
The tighter the role definitions become in any methodology, the more actual responsibilities cascade down to the "coder." If
the specs conflict, who raises his hand and asks the question? If a deadline is unreasonable, who complains? If a technique is
unusable in a situation, who brings that up?
The person is obviously the "coder." And what happens when the coder asks this question?
In one methodology the "coder" is told to stop production and raise the issue with the manager who will talk to the analyst
who will talk to the client who will complain that his instructions were clear and it all falls back to the "coder" who, obviously,
was too dim to understand the 1,200 pages of specifications the analyst handed him.
In another, the "coder" is told, "you just work it out." And the concomitant chaos renders the project unstable.
In another, the "coder" is told "just do what you're told." And the result is incompatible with the rest of the project.
I've stopped "coding" for these reasons and because everybody is happy with the myth of programming process because they aren't
the caboose.
Reply
I was going to make fun of this post for being whiny and defeatist. But the more I thought about it, the more I realized
it contained a big nugget of truth. A lot of methodologies, as practiced, have the purpose of putting off risk onto the developers,
of fixing responsibility on developers so the managers aren't responsible for any of the things that can go wrong with projects.
Reply
Great article! I have experienced the same regarding software methodologies. And at a greater level, thank you for introducing
me to the concept of collective fictions; it makes so much sense. I will be reading Sapiens.
Reply
Actually, come to think of it, there are two types of Software Engineers who take process very seriously. One is acutely
aware of software entropy and wants to proactively fight against it, because they want to engineer to a high standard and don't
like working the weekend; so they want things organised. Then there's another type who can come across as being a bit dogmatic.
Maybe your links with collective delusions help explain some of the human psychology here.
Reply
First of all, this is a great article, very well written. A couple of remarks. Early in waterfall, the large business requirements
documents didn't work for two reasons. First, there was no new business process; it was the same business process that had to be
applied within a new technology (from mainframes to open Unix systems, from ASCII to RAD tools and 4GL languages). Second, many
consultancy companies (mostly the big four) were using "copy & paste" methods to fill these documents, submit the time-and-materials
forms for the consultants, increase the revenue, and move on. Things have changed with the adoption of smartphones, etc.
To reflect the author's idea: in my humble opinion, the collective fiction is the embedding of quality of work into the whole
life cycle of development.
Thanks
Kostas
Reply
Sorry, did you forget to finish the article? I don't see the conclusion providing the one true programming methodology that
works in all occasions. What is the magic procedure? Thanks in advance.
Reply
"Restaurant Of The Future" - KFC Unveils Automated Store With Robots And Food Lockers
by Tyler Durden Fri,
06/26/2020 - 22:05 Fast-food chain Kentucky Fried Chicken (KFC) has debuted the "restaurant of
the future," one where automation dominates the storefront, and little to no interaction is
seen between customers and employees, reported NBC News .
After the chicken is fried and sides are prepped by humans, the order is placed on a
conveyor belt and travels to the front of the store. A robotic arm waits for the order to
arrive, then grabs it off the conveyor belt and places it into a secured food
locker.
A KFC representative told NBC News that the new store is located in Moscow and was built
months before the virus outbreak. The representative said the contactless store is the future
of frontend fast-food restaurants because it's more sanitary.
Disbanding human cashiers and order preppers at the front of a fast-food store will be the
next big trend in the industry through 2030. Making these restaurants contactless between
customers and employees will lower the probabilities of transmitting the virus.
Automating the frontend of a fast-food restaurant will come at a tremendous cost, that is,
significant job loss . Nationwide (as of 2018), there were around 3.8 million employed at
fast-food restaurants. Automation and artificial intelligence are set to displace millions of jobs
in the years ahead.
As for the new automated KFC restaurant in Moscow, well, it's a glimpse of what is coming to
America - this will lead to the widespread job loss that will force politicians to unveil
universal basic income .
Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and
habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster
chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than
the core innovations
their inventors claim
-- and some of the gains may not exist at all, says Davis Blalock, a computer science graduate
student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches
to improving neural networks -- software architectures that loosely mimic the brain. "Fifty papers in," he says, "it
became clear that it wasn't obvious what the state of the art even was."
The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming
unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly -- and
when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements
over a 10-year period.
The result, presented in March at the
Machine Learning and Systems conference, surprised Blalock's Ph.D. adviser, MIT computer scientist John Guttag, who
says the uneven comparisons themselves may explain the stagnation. "It's the old saw, right?" Guttag said. "If you
can't measure something, it's hard to make it better."
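For readers who haven't met the term, "pruning" here just means deleting the connections whose weights matter least and keeping the rest; the papers differ mainly in how they pick what to delete and how they fine-tune afterwards. A rough NumPy sketch of the simplest baseline, global magnitude pruning (an illustrative example, not any particular paper's method):

import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    # Zero out the smallest-magnitude fraction `sparsity` of the weights.
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.random.randn(256, 256)
pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(pruned) / pruned.size:.3f}")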
Researchers are waking up to the signs of shaky progress across many subfields of AI. A
2019 meta-analysis
of information retrieval
algorithms used in search engines concluded the "high-water mark was actually set in 2009."
Another study
in 2019 reproduced seven neural
network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform
much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing
"phantom progress" in the field. In
another paper
posted on arXiv in
March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an
algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task
involving image retrieval, and found that, contrary to their developers' claims, accuracy had not improved since
2006. "There's always been these waves of hype," Musgrave says.
Gains in machine-learning algorithms can come from fundamental changes in their architecture, loss function, or
optimization strategy -- how they use feedback to improve. But subtle tweaks to any of these can also boost performance,
says Zico Kolter, a computer scientist at Carnegie Mellon University who studies image-recognition models trained to
be immune to "
adversarial
attacks
" by a hacker. An early adversarial training method known as projected gradient descent (PGD), in which a
model is simply trained on both real and deceptive examples, seemed to have been surpassed by more complex methods.
But in a February arXiv paper, Kolter and his colleagues found that
all of the methods performed about the same when a simple trick was used to enhance them.
"That was very surprising, that this hadn't been discovered before," says Leslie Rice, Kolter's Ph.D. student.
Kolter says his findings suggest innovations such as PGD are hard to come by, and are rarely improved in a
substantial way. "It's pretty clear that PGD is actually just the right algorithm," he says. "It's the obvious thing,
and people want to find overly complex solutions."
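For context, the PGD method Rice and Kolter are discussing is conceptually simple: repeatedly nudge each training input in the direction that increases the loss, project it back into a small epsilon-ball around the original, and then train on the perturbed batch. A rough PyTorch sketch of the attack step (the model, data, and hyperparameter values are placeholders, not taken from the paper):

import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD: take `steps` signed-gradient ascent steps on the loss,
    # projecting back into the eps-ball around x after each step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Adversarial training then just means training on the perturbed batch, e.g.:
#   x_adv = pgd_perturb(model, x, y)
#   loss = F.cross_entropy(model(x_adv), y); loss.backward(); optimizer.step()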
Other major algorithmic advances also seem to have stood the test of time. A big breakthrough came in 1997 with an
architecture called long short-term memory (LSTM), used in language translation. When properly trained, LSTMs
matched
the performance of supposedly more advanced
architectures developed 2 decades later. Another machine-learning breakthrough came in 2014 with generative
adversarial networks (GANs), which pair networks in a create-and-critique cycle to sharpen their ability to produce
images, for example.
A 2018
paper
reported that with enough computation, the original GAN method matches the abilities of methods from later
years.
Kolter says researchers are more motivated to produce a new algorithm and tweak it until it's state-of-the-art
than to tune an existing one. The latter can appear less novel, he notes, making it "much harder to get a paper
from."
Guttag says there's also a disincentive for inventors of an algorithm to thoroughly compare its performance with
others -- only to find that their breakthrough is not what they thought it was. "There's a risk to comparing too
carefully."
It's also hard work: AI researchers use different data sets, tuning methods, performance metrics, and baselines.
"It's just not really feasible to do all the apples-to-apples comparisons."
Some of the overstated performance claims can be chalked up to the explosive growth of the field, where papers
outnumber experienced reviewers. "A lot of this seems to be growing pains," Blalock says. He urges reviewers to insist on better
comparisons to benchmarks and says better
tools will help. Earlier this year, Blalock's co-author, MIT researcher Jose Gonzalez Ortiz, released software called
ShrinkBench that makes it easier to compare pruning algorithms.
Researchers point out that even if new methods aren't fundamentally better than old ones, the tweaks they
implement can be applied to their forebears. And every once in a while, a new algorithm will be an actual
breakthrough. "It's almost like a venture capital portfolio," Blalock says, "where some of the businesses are not
really working, but some are working spectacularly well."
Microsoft's EEE tactics, which can be redefined as "Steal; Add complexity and bloat; trash the original," can be
used on open source and, as the success of systemd has shown, can be a pretty successful strategy.
Notable quotes:
"... Free software acts like proprietary software when it treats the existence of alternatives as a problem to be solved. I personally never trust a project with developers as arrogant as that. ..."
...it was developed along lines that are not entirely different from
Microsoft's EEE tactics -- which today I will offer a new acronym and description for:
1. Steal
2. Add Bloat
3. Original Trashed
It's difficult conceptually to "steal" Free software, because it (sort of, effectively)
belongs to everyone. It's not always Public Domain -- copyleft is meant to prevent that. The
only way you can "steal" free software is by taking it from everyone and restricting it again.
That's like "stealing" the ocean or the sky, and putting it somewhere that people can't get to
it. But this is what non-free software does. (You could also simply go against the license
terms, but I doubt Stallman would go for the word "stealing" or "theft" as a first choice to
describe non-compliance).
... ... ...
Again and again, Microsoft "Steals" or "Steers" the development process itself so it
can gain control (pronounced: "ownership") of the software. It is a gradual process, where
Microsoft has more and more influence until they dominate the project and with it, the user.
This is similar to the process where cults (or drug addiction) take over people's lives, and
similar to the process where narcissists interfere in the lives of others -- by staking a claim
and gradually dominating the person or project.
Then they Add Bloat -- more features. GitHub is friendly to use, you don't have to care
about how Git works to use it (this is true of many GitHub clones as well, as even I do not
really care how Git works very much. It took a long time for someone to even drag me towards
GitHub for code hosting, until they were acquired and I stopped using it) and due to its GLOBAL
size, nobody can or ought to reproduce its network effects.
I understand the draw of network effects. That's why larger federated instances of code
hosts are going to be more popular than smaller instances. We really need a mix -- smaller
instances to be easy to host and autonomous, larger instances to draw people away from even
more gigantic code silos. We can't get away from network effects (just like the War on Drugs
will never work) but we can make them easier and less troublesome (or safer) to deal with.
Finally, the Original is trashed, and the SABOTage is complete. This has happened with
Python against Python 2, despite protests from seasoned and professional developers, it was
deliberately attempted with Systemd against not just sysvinit but ALL alternatives -- Free
software acts like proprietary software when it treats the existence of alternatives as a
problem to be solved. I personally never trust a project with developers as arrogant as
that.
... ... ...
There's a meme about creepy vans with "FREE CANDY" painted on the side, which I took one of
the photos from and edited it so that it said "FEATURES" instead. This is more or less how I
feel about new features in general, given my experience with their abuse in development,
marketing and the takeover of formerly good software projects.
People then accuse me of being against features, of course. As with the Dijkstra article,
the real problem isn't Basic itself. The problem isn't features per se (though they do play a
very key role in this problem) and I'm not really against features -- or candy, for that
matter.
I'm against these things being used as bait, to entrap people in an unpleasant situation
that makes escape difficult. You know, "lock-in". Don't get in the van -- don't even go NEAR
the van.
Candy is nice, and some features are nice too. But we would all be better off if we could
get the candy safely, and delete the creepy horrible van that comes with it. That's true
whether the creepy van is GitHub, or surveillance by GIAFAM, or a Leviathan "init" system, or
just breaking decades of perfectly good Python code, to try to force people to develop
differently because Google or Microsoft (who both have had heavy influence over newer Python
development) want to try to force you to -- all while using "free" software.
If all that makes free software "free" is the license -- (yes, it's the primary and key
part, it's a necessary ingredient) then putting "free" software on GitHub shouldn't be a
problem, right? Not if you're running LibreJS, at least.
In practice, "Free in license only" ignores the fact that if software is effectively free,
the user is also effectively free. If free software development gets dragged into doing the
bidding of non-free software companies and starts creating lock-in for the user, even if it's
external or peripheral, then they simply found an effective way around the true goal of the
license. They did it with Tivoisation, so we know that it's possible. They've done this in a
number of ways, and they're doing it now.
If people are trying to make the user less free, and they're effectively making the user
less free, maybe the license isn't an effective monolithic solution. The cost of freedom is
eternal vigilance. They never said "The cost of freedom is slapping a free license on things",
as far as I know. (Of course it helps). This really isn't a straw man, so much as a rebuttal to
the extremely glib take on software freedom in general that permeates development communities
these days.
But the benefits of Free software, free candy and new features are all meaningless, if the
user isn't in control.
Don't get in the van.
"The freedom to NOT run the software, to be free to avoid vendor lock-in through
appropriate modularization/encapsulation and minimized dependencies; meaning any free software
can be replaced with a user's preferred alternatives (freedom 4)." – Peter
Boughton
Relatively simple automation often beats more complex systems. By far.
Notable quotes:
"... My guess is we're heading for something in-between, a place where artisanal bakers use locally grown wheat, made affordable thanks to machine milling. Where small family-owned bakeries rely on automation tech to do the undifferentiated grunt-work. The robots in my future are more likely to look more like cash registers and less like Terminators. ..."
"... I gave a guest lecture to a roomful of young roboticists (largely undergrad, some first year grad engineering students) a decade ago. After discussing the economics/finance of creating and selling a burgerbot, asked about those that would be unemployed by the contraption. One student immediately snorted out, "Not my problem!" Another replied, "But what if they cannot do anything else?". Again, "Not my problem!". And that is San Josie in a nutshell. ..."
"... One counter-argument might be that while hoping for the best it might be prudent to prepare for the worst. Currently, and for a couple of decades, the efficiency gains have been left to the market to allocate. Some might argue that for the common good then the government might need to be more active. ..."
"... "Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine. ..."
"... You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade and know many elderly French. The bread, from the right bakeries, remains great. ..."
"... I agree with others here who distinguish between labor saving automation and labor eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization. ..."
"... I was exposed to that conflict, in a small way, because my father was an investment manager. He told me they were considering investing in a smallish Swiss pasta (IIRC) factory. He was frustrated with the negotiations; the owners just weren't interested in getting a lot bigger – which would be the point of the investment, from the investors' POV. ..."
"... Incidentally, this is a possible approach to a better, more sustainable economy: substitute craft for capital and resources, on as large a scale as possible. More value with less consumption. But how we get there from here is another question. ..."
"... The Ten Commandments do not apply to corporations. ..."
"... But what happens when the bread machine is connected to the internet, can't function without an active internet connection, and requires an annual subscription to use? ..."
"... Until 100 petaflops costs less than a typical human worker total automation isn't going to happen. Developments in AI software can't overcome basic hardware limits. ..."
"... When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot". ..."
"... The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility. It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate. ..."
"... And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and some nefarious intent, then you get into trouble. ..."
Part I , "Automation Armageddon:
a Legitimate Worry?" reviewed the history of automation, focused on projections of gloom-and-doom.
"It smells like death," is how a friend of mine described a nearby chain grocery store. He tends to exaggerate and visiting France
admittedly brings about strong feelings of passion. Anyway, the only reason we go there is for things like foil or plastic bags that
aren't available at any of the smaller stores.
Before getting to why that matters – and, yes, it does matter – first a tasty digression.
I live in a French village. To the French, high-quality food is a vital component to good life.
My daughter counts eight independent bakeries on the short drive between home and school. Most are owned by a couple of people.
Counting high-quality bakeries embedded in grocery stores would add a few more. Going out of our way more than a minute or two would
more than double that number.
Typical Bakery: Bread is cooked at least twice daily
Despite so many, the bakeries seem to do well. In the half-decade I've been here, three new ones opened and none of the old ones
closed. They all seem to be busy. Bakeries are normally owner operated. The busiest might employ a few people but many are mom-and-pop
operations with him baking and her selling. To remain economically viable, they rely on a dance of people and robots. Flour arrives
in sacks with high-quality grains milled by machines. People measure ingredients, with each bakery using slightly different recipes.
A human-fed robot mixes and kneads the ingredients into the dough. Some kind of machine churns the lumps of dough into baguettes.
The baker places the formed baguettes onto baking trays then puts them in the oven. Big ovens maintain a steady temperature while
timers keep track of how long various loaves of bread have been baking. Despite the sensors, bakers make the final decision when
to pull the loaves out, with some preferring a bien cuit more cooked flavor and others a softer crust. Finally, a person uses
a robot in the form of a cash register to ring up transactions and process payments, either by cash or card.
Nobody -- not the owners, workers, or customers -- think twice about any of this. I doubt most people realize how much automation
technology is involved or even that much of the equipment is automation tech. There would be no improvement in quality mixing and
kneading the dough by hand. There would, however, be an enormous increase in cost. The baguette forming machines churn out exactly
what a person would do by hand, only faster and at a far lower cost. We take the thermostatically controlled ovens for granted. However,
for anybody who has tried to cook over wood controlling heat via air and fuel, thermostatically controlled ovens are clearly automation
technology.
Is the cash register really a robot? James Ritty, who invented
it, didn't think so; he sold the patent for cheap. The person who bought the patent built it into NCR, a seminal company laying the
groundwork of the modern computer revolution.
Would these bakeries be financially viable if forced to do all this by hand? Probably not. They'd be forced to produce less output
at higher cost; many would likely fail. Bread would cost more leaving less money for other purchases. Fewer jobs, less consumer spending
power, and hungry bellies to boot; that doesn't sound like good public policy.
Getting back to the grocery store my friend thinks smells like death; just a few weeks ago they started using robots in a new
and, to many, not especially welcome way.
As any tourist knows, most stores in France are closed on Sunday afternoons, including and especially grocery stores. That's part
of French labor law: grocery stores must close Sunday afternoons. Except that the chain grocery store near me announced they are
opening Sunday afternoon. How? Robots, and sleight-of-hand. Grocers may not work on Sunday afternoons but guards are allowed.
Not my store but similar.
Dimanche means Sunday. Aprés-midi means afternoon.
I stopped in to get a feel for how the system works. Instead of grocers, the store uses security guards and self-checkout kiosks.
When you step inside, a guard reminds you there are no grocers. Nobody restocks the shelves but, presumably for half a day, it
doesn't matter. On Sunday afternoons, in place of a bored-looking person wearing a store uniform and overseeing the robo-checkout
kiosks sits a bored-looking person wearing a security guard uniform doing the same. There are no human-assisted checkout lanes open
but this store seldom has more than one operating anyway.
I have no idea how long the French government will allow this loophole to continue. I thought it might attract yellow vest protestors
or at least a cranky store worker – maybe a few locals annoyed at an ancient tradition being buried – but there was nobody complaining.
There were hardly any customers, either.
The use of robots to sidestep labor law and replace people, in one of the most labor-friendly countries in the world, produced
a big yawn.
Paul Krugman and
Matt Stoller argue convincingly that
it's the bosses, not the robots, that crush the spirits and souls of workers. Krugman calls it "automation obsession" and Stoller
points out predictions of robo-Armageddon have existed for decades. The well over
100+ examples I have of major
automation-tech ultimately led to more jobs, not fewer.
Jerry Yang envisions some type of forthcoming automation-induced dystopia. Zuck and the tech-bros argue for a forthcoming Star
Trek style robo-utopia.
My guess is we're heading for something in-between, a place where artisanal bakers use locally grown wheat, made affordable thanks
to machine milling. Where small family-owned bakeries rely on automation tech to do the undifferentiated grunt-work. The robots in
my future are more likely to look more like cash registers and less like Terminators.
It's an admittedly blander vision of the future; neither utopian nor dystopian, at least not one fueled by automation tech. However,
it's a vision supported by the historic adoption of automation technology.
I have no real disagreement with a lot of automation. But how it is done is another matter altogether. Using the main example
in this article, Australia is probably like a lot of countries with bread in that most of the loaves that you get in a supermarket
are typically bland and come in plastic bags but which are cheap. You only really know what you grow up with.
When I first went to Germany I stepped into a Bakerie and it was a revelation. There were dozens of different sorts and types
of bread on display with flavours that I had never experienced. I didn't know whether to order a loaf or to go for my camera instead.
And that is the point. Too much automation is really all about narrowing the choices in your life and making it cheaper instead
of enabling a richer lifestyle.
We are all familiar with crapification and I contend that it is automation that enables this to become a thing.
"I contend that it is automation that enables this to become a thing."
As does electricity. And math. Automation doesn't necessarily narrow choices; economies of scale and the profit motive do.
What I find annoying (as in pollyannish) is the avoidance of the issue of those that cannot operate the machinery, those that
cannot open their own store, etc.
I gave a guest lecture to a roomful of young roboticists (largely undergrad, some first year grad engineering students) a decade
ago. After discussing the economics/finance of creating and selling a burgerbot, asked about those that would be unemployed by
the contraption. One student immediately snorted out, "Not my problem!" Another replied, "But what if they cannot do anything
else?". Again, "Not my problem!". And that is San Josie in a nutshell.
A capitalist market that fails to account for the cost of a product's negative externalities is underpricing (and incentivizing
more of the same). It's cheating (or sanctioned cheating due to ignorance and corruption). It is not capitalism (unless that is
the only reasonable outcome of capitalism).
The author's vision of "appropriate tech" local enterprise supported by relatively simple automation is also my answer to the
vexing question of "how do I cope with automation?"
In a recent posting here at NC, I said the way to cope with automation of your job(s) is to get good at automation. My remark
caused a howl of outrage: "most people can't do automation! Your solution is unrealistic for the masses. Dismissed with prejudice!".
Thank you for that outrage, as it provides a wonderful foil for this article. The article shows a small business which learned
to re-design business processes and acquire machines that reduce costs. It's a good example of someone that "got good at automation".
Instead of being the victim of automation, these people adapted. They bought automation, took control of it, and operated it
for their own benefit.
Key point: this entrepreneur is now harvesting the benefits of automation, rather than being systematically marginalized by
it. Another noteworthy aspect of this article is that local-scale "appropriate" automation serves to reduce the scale advantages
of the big players. The availability of small-scale machines that enable efficiencies comparable to the big guys is a huge problem.
Most of the machines made for small-scale operators like this are manufactured in China, India, Iran, Russia, or Italy, where
industrial consolidation (scale) hasn't squashed the little players yet.
Suppose you're a grain farmer, but only have 50 acres (not 100s or 1000s like the big guys). You need a combine – that's a
big machine that cuts grain stalk and separate grain from stalk (threshing). This cut/thresh function is terribly labor intensive,
the combine is a must-have. Right now, there is no small-size ($50K or less) combine manufactured in the U.S., to my knowledge.
They cost upwards of $200K, and sometimes a great deal more. The 50-acre farmer can't afford $200K (plus maint costs), and therefore
can't farm at that scale, and has to sell out.
So, the design, production, and sales of these sort of small-scale, high-productivity machines is what is needed to re-distribute
production (organically, not by revolution, thanks) back into the hands of the middle class.
If we make it possible for the middle class to capture the benefits of automation, then you solve 1) the social dilemmas of concentration
of wealth and 2) the declining standard of living of the middle and lower classes, and 3) you have a chance to re-design an economy (business
processes and collaborating suppliers delivering end-user products/services) that actually fixes the planet as we make our living,
instead of degrading it at every ka-ching of the cash register.
Point 3 is the most important, and this isn't the time or place to expand on that, but I hope others might consider it a bit.
Regarding the combine, I have seen them operating on small-sized lands for the last 50 years. Without exception, you have one
guy (sometimes a farmer, often not) who has this kind of harvester, works 24h a day for a week or something, harvesting for all
farmers in the neighborhood, and then moves to the next crop (eg corn). Wintertime is used for maintenance. So that one person/farm/company
specializes in these services, and everybody gets along well.
Marcel – great solution to the problem. Choosing the right supplier (using combine service instead of buying a dedicated combine)
is a great skill to develop. On the flip side, the fellow that provides that combine service probably makes a decent side-income
from it. Choosing the right service to provide is another good skill to develop.
One counter-argument might be that while hoping for the best it might be prudent to prepare for the worst. Currently, and for
a couple of decades, the efficiency gains have been left to the market to allocate. Some might argue that for the common good
then the government might need to be more active.
What would happen if efficiency gains continued to be distributed according to the market, that is, according to the relative bargaining power of the market participants, where one side – the public good as represented by government – asks for, and therefore gets, almost nothing?
As is, I do believe that people who are concerned do have reason to be concerned.
"Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a
richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine.
Some people make a living saying these sorts of things about automation. The quality of French bread is simply not what it used to be (or at least it is harder to find), though that is a complicated subject having to do with flour and wheat as well as human preparation and many other things; and the cost (in terms of purchasing power) has, in my opinion, gone up, not down, since the 70s.
As some might say, "It's complicated," but automation does (not sure about "has to") come with trade-offs in quality, while the price stays close to whatever an ever more sophisticated set of algorithms says can be "gotten away with."
This may be totally different for cars or other things, but the author chose French bread, and the only overall improvement, or even non-change, in quality there has come, if at all, from the dark art of marketing magicians.
...from the dark art of marketing magicians, AND people's innate ability to accept, or be unaware of, decreases in quality/quantity if they are implemented over time in small enough steps.
You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade
and know many elderly French. The bread, from the right bakeries, remains great. But you're unlikely to find it where tourists
might wander: the rent is too high.
As a general rule, if the bakers have a large staff or speak English, you're probably in the wrong bakery. Except for one of my favorites, where the baker learned her English by watching every episode of Friends multiple times and likes to practice with me, though that's more of a fluke.
It's a difficult subject to argue. I suspect that comparatively speaking, French bread remains good and there are still bakers
who make high quality bread (given what they have to work with). My experience when talking to family in France (not Paris) is
that indeed, they are in general quite happy with the quality of bread and each seems to know a bakery where they can still get
that "je ne sais quoi" that makes it so special.
I, on the other hand, have only been there once every few years since the 70s – kind of like seeing every so many frames of the movie – and I see a lowering of quality in general in France, and of flour and bread in particular, though I'll grant it's quite gradual.
The French love food and were among the best farmers in the world in the 1930s, and they have made a point of resisting radical change when it comes to the things they love (wine, cheese, bread, etc.), so they have a long way to fall, and they are doing so slowly; but gradually, it's happening.
I agree with others here who distinguish between labor-saving automation and labor-eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization.
I was exposed to that conflict, in a small way, because my father was an investment manager. He told me they were considering
investing in a smallish Swiss pasta (IIRC) factory. He was frustrated with the negotiations; the owners just weren't interested
in getting a lot bigger – which would be the point of the investment, from the investors' POV.
I thought, but I don't think I said very articulately, that of course, they thought of themselves as craftspeople – making
people's food, after all. It was a fundamental culture clash. All that was 50 years ago; looks like the European attitude has
been receding.
Incidentally, this is a possible approach to a better, more sustainable economy: substitute craft for capital and resources,
on as large a scale as possible. More value with less consumption. But how we get there from here is another question.
I have been touring around by car and was surprised to see that all Oregon gas stations are full-serve, with no self-serve allowed (I vaguely remember Oregon Charles talking about this). It applies to every station, including the ones with a couple of dozen pumps like we see back east. I have since been told that this system has been in place for years.
It's hard to see how this is more efficient – in fact it seems just the opposite, as there are fewer attendants than waiting customers, and at a couple of stations the action seemed chaotic. Gas is also more expensive, although nothing could be more expensive than California gas (over $5/gal occasionally spotted). It's also unclear how this system was preserved – perhaps out of fire safety concerns – but it seems unlikely that any other state will want to imitate it, just as those bakeries aren't going to bring back their wood-fired ovens.
I think NJ still requires all gas stations to be full-serve. Most in MA are self-serve only, but there are a few towns with by-laws requiring full-serve.
In the 1980s when self-serve gas started being implemented, NIOSH scientists said oh no, now 'everyone' will be increasingly
exposed to benzene while filling up. Benzene is close to various radioactive elements in causing damage and cancer.
It was preserved by a series of referenda; turns out it's a 3rd rail here, like the sales tax. The motive was explicitly to
preserve entry-level jobs while allowing drivers to keep the gas off their hands. And we like the more personal quality.
Also, we go to states that allow self-serve and observe that the gas isn't any cheaper. It's mainly the tax that sets the price,
and location.
There are several bakeries in this area with wood-fired ovens. They charge a premium, of course. One we love is way out in
the country, in Falls City. It's a reason to go there.
Unless I misunderstood, the author of this article seems to equate mechanization/automation of nearly any type with robotics.
"Is the cash register really a robot? James Ritty, who invented it, didn't think so;" – Nor do I.
To me, "robot" implies a machine with a high degree of autonomy. Would the author consider an old fashioned manual typewriter
or adding machine (remember those?) to be robotic? How about when those machines became electrified?
I think the author uses the term "robot" too broadly.
Agree. Those are just electrified extensions of the lever or sand timer.
It's the "thinking" that is A.I.
Refuse to allow A.I. to destroy jobs and cheapen our standard of living.
Never interact with a robo call, just hang up.
Never log into a website when there is a human alternative.
Refuse to do business with companies that have no human alternative.
Never join a medical "portal" of any kind, demand to talk to medical personnel. Etc.
Sabotage A.I. whenever possible. The Ten Commandments do not apply to corporations.
During a Chicago hotel stay my wife ordered an extra bath towel from the front desk. About 5 minutes later, a mini version
of R2D2 rolled up to her door with towel in tow. It was really cute and interacted with her in a human-like way. Cute but really
scary in the way that you indicate in your comment.
It seems many low wage activities would be in immediate risk of replacement.
But sabotage? I would never encourage sabotage; in fact, when it comes to true robots like this one, I would highly discourage any of the following: yanking its recharge cord in the middle of the night, zapping it with a car battery, lifting its payload and replacing it with something else, giving it a hip high-five to help it calibrate its balance, and of course, the good old kick'em in the bolts.
Stop and Shop supermarket chain now has robots in the store. According to Stop and Shop they are oh so innocent! and friendly!
why don't you just go up and say hello?
All the robots do, they say, is go around scanning the shelves looking for shelf price tags that don't match the current price and merchandise in the wrong place (that cereal box you picked up in the breakfast aisle and decided, in the laundry aisle, that you didn't want, so you put it on a shelf with the detergent). All the robots do is notify management of wrong prices and misplaced merchandise.
The damn robot is cute, with perky lit-up eyes and a smile – so why does it remind me of the Stepford Wives?
S&S is the closest supermarket near me, so I go there when I need something in a hurry, but the bulk of my shopping is now
done elsewhere. Thank goodness there are some stores that are not doing this: the area Shoprites and FoodTowns don't – and they are all run by family businesses. Shoprite succeeds by having a large assortment of brands in every grocery category and keeping prices really competitive. FoodTown operates at a higher price and quality level, with real butcher and seafood counters, prepackaged assortments in open cases, and a cooked-food counter of the most excellent quality, with the store's cooks behind the counter to serve you and answer questions. You never have to come home from work tired and hungry, know that you just don't want to cook, and settle for a power bar.
A robot is a machine -- especially one programmable by a computer -- capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device, or the control may be embedded within.
Those early cash registers were perhaps an early form of analog computer. But Wikipedia reminds us that the origin of the term is a work of fiction.
The term comes from a Czech word, robota, meaning "forced labor"; the word 'robot' was first used to denote a fictional humanoid in a 1920 play, R.U.R. (Rossumovi Univerzální Roboti – Rossum's Universal Robots), by the Czech writer Karel Čapek.
Perhaps I didn't qualify "autonomous" properly. I didn't mean to imply a 'Rosie the Robot' level of autonomy but the ability
of a machine to perform its programmed task without human intervention (other than switching on/off or maintenance & adjustments).
If viewed this way, an adding machine or a typewriter is not a robot because it requires constant manual input in order to function – if you don't push the keys, nothing happens. A computer printer might be considered robotic because it can be programmed to function somewhat autonomously (as in: print 'x' number of copies of this document).
"Robotics" is a subset of mechanized/automated functions.
When I first got out of grad school I worked at United Technologies Research Center where I worked in the robotics lab. In
general, at least in those days, we made a distinction between robotics and hard automation. A robot is programmable to do multiple
tasks and hard automation is limited to a single task unless retooled. The machines the author is talking about are hard automation.
We had ASEA robots that could be programmed to do various things. One of ours drilled, riveted and sealed the skin on the horizontal
stabilators (the wing on the tail of a helicopter that controls pitch) of a Sikorsky Sea Hawk.
The same robot with just a change
of the fixture on the end could be programmed to paint a car or weld a seam on equipment. The drilling and riveting robot was
capable of modifying where the rivets were placed (in the robot's frame of reference) based on the location of precisely milled blocks built into the fixture that held the stabilator.
There was always some variation and it was important to precisely place
the rivets because the spars were very narrow (weight at the tail is bad because of the lever arm). It was considered state of
the art back in the day but now auto companies have far more sophisticated robotics.
But what happens when the bread machine is connected to the internet, can't function without an active internet connection,
and requires an annual subscription to use?
That is the issue to me: however we define the tools, who will own them?
You know, that is quite a good point that. It is not so much the automation that is the threat as the rent-seeking that anything
connected to the internet allows to be implemented.
Until 100 petaflops costs less than a typical human worker, total automation isn't going to happen. Developments in AI software can't overcome basic hardware limits.
The story about automation not worsening the quality of bread is not exactly true. Bakers had to develop and incorporate a
new method called autolyze (
https://www.kingarthurflour.com/blog/2017/09/29/using-the-autolyse-method
) in the mid-20th century to bring back some of the flavor lost with modern baking. There is also a trend of a new generation of bakeries that use natural yeast, hand-shaping, and kneading to get better flavors and quality bread.
But it is certainly true that much of the automation gives almost as good quality for much lower labor costs.
When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals
and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under
that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot".
How you implement a, b, and c above can have more or less sophistication, depending upon the complexity, variability, etc.
of the environment, or the solutions, or the means used to affect the environment.
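As a rough sketch of that (a)/(b)/(c) definition – not anything from the original comment, just a minimal illustration in Java with invented names – a household thermostat qualifies: it senses temperature, holds a goal, and acts on the environment to close the gap.

    // Minimal sense/decide/act sketch (hypothetical names, illustration only):
    // (a) senses its environment, (b) has a goal and goal-seeking logic,
    // (c) has means to affect the environment so reality converges on the goal.
    public class Thermostat {
        public interface Sensor { double readTemperatureC(); }   // (a)
        public interface Heater { void on(); void off(); }       // (c)

        private final double goalTempC;                          // (b) the goal
        private final Sensor sensor;
        private final Heater heater;

        public Thermostat(double goalTempC, Sensor sensor, Heater heater) {
            this.goalTempC = goalTempC;
            this.sensor = sensor;
            this.heater = heater;
        }

        /** One pass of goal-seeking logic: drive the room toward the goal temperature. */
        public void step() {
            double current = sensor.readTemperatureC();
            if (current < goalTempC - 0.5) heater.on();
            else if (current > goalTempC + 0.5) heater.off();
        }
    }

Under this reading, the household HVAC system mentioned above is a "robot" in exactly this minimal sense, while a manual typewriter is not: nothing in it senses or seeks anything.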
A machine, like a typewriter, or a lawn-mower engine has the logic expressed in metal; it's static.
The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's
computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility.
It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate.
And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and
some nefarious intent, then you get into trouble.
Sometimes automation is necessary to eliminate the risks of manual processes. There are parenteral (injectable) drugs that cannot be sterilized except by filtration. Most of the work of filling, post-filling processing, and sealing is done using automation, in areas that make surgical suites seem filthy by comparison, and people are kept away from these operations.
Manual operations are only undertaken to correct issues with the automation, and the procedures are tested to ensure that they do not introduce contamination, microbial or otherwise. Because even one non-sterile unit is a failure and testing is a destructive process, a full lot of product can of course never be tested to prove that all units are sterile. Instead, the automated process and the manual interventions are tested periodically, and it is expensive and time-consuming to test to a level of confidence that there is far less than a one-in-a-million chance of any unit in a lot being non-sterile.
In that respect, automation and the skills necessary to interface with it are fundamental to the safety of drugs frequently
used on already compromised patients.
Agree. Good example. Digital technology and miniaturization seem particularly well suited to many aspects of the medical world. But I doubt they will eliminate the doctor or the nurse very soon. Insurance companies, on the other hand...
"There would be no improvement in quality mixing and kneading the dough by hand. There would, however, be an enormous increase
in cost." WRONG! If you had an unlimited supply of 50-cents-an-hour disposable labor, mixing and kneading the dough by hand would
be cheaper. It is only because labor is expensive in France that the machine saves money.
In Japan there is a lot of automation, and wages and living standards are high. In Bangladesh there is very little automation,
and wages and living standards are very low.
Are we done with the 'automation is destroying jobs' meme yet? Excessive population growth is the problem, not robots. And
the root cause of excessive population growth is the corporate-sponsored virtual taboo of talking about it seriously.
Posted by: juliania | Feb 12 2020 5:15 utc | 39
(Artificial Intelligence)
The trouble with Artificial Intelligence is that it's not intelligent.
And it's not intelligent because it's got no experience, no imagination and no
self-control.
vk @38: "...the reality on the field is that capitalism is 0 for 5..."
True, but it is worse than that! Even when we get AI to the level you describe, capitalism
will continue its decline.
Henry Ford actually understood Marxist analysis. Despite what many people in the present
imagine, Ford had access to sufficient engineering talent to make his automobile
manufacturing processes much more automated than he did. Ford understood that improving the
efficiency of the manufacturing process was less important than creating a population with
sufficient income to purchase his products.
AI is just a tool, unless it is developed to the point of attaining sentience in which
case it becomes slavery, but let's ignore that possibility for now. Capitalists cannot make profits from the tools they own all by themselves; profits come from unpaid labor. You cannot underpay a tool, and the tool cannot labor by itself.
The AI can be a product that is sold, but compared with cars, for example, the quantity of
labor invested in AI is minuscule. The smaller the proportion of labor that is in the cost of
a product, the smaller the percent of the price that can be realized as profit. To re-boost
real capitalist profits you need labor-intensive products. This also ties in with Henry
Ford's understanding of economics in that a larger labor force also means a larger market for
the capitalist's products.
There are some very obvious products that I can think of involving AI that are also
massively labor-intensive that would match the scale of the automotive industry and
rejuvenate capitalism, but they would require many $millions in R&D to make them
market-ready. Since I want capitalism to die already and get out
Re: AI --
Always wondered how pseudo-AI, or enhanced automation, might be constrained by diminishing
EROEI.
Unless an actual AI were able to crack the water molecule to release hydrogen in an
energy-efficient way, or unless we learn to love nuclear (by cracking the nuclear waste
issue), then it seems to me hyper-automated workplaces will be at least as subject to
plummeting EROEI as are current workplaces, if not more so. Is there any reason to think that,
including embedded energy in their manufacture, these machines and their workplaces will be
less energy intensive than current ones?
@William Gruff #40
The real world usage of AI, to date, is primarily replacing the rank and file of human
experience.
Where before you would have individuals who have attained expertise in an area, and who would
be paid to exercise it, now AI can learn from the extant work and repeat it.
The problem, though, is that AI is eminently vulnerable to attack. In particular - if the
area involves change, which most do, then the AI must be periodically retrained to take into
account the differences. Being fundamentally stupid, AI literally cannot integrate new data
on top of old but must start from scratch.
I don't have the link, but I did see an excellent example: a cat vs. AI.
While a cat can't play chess, the cat can walk, can recognize objects visually, can
communicate even without a vocal cord, can interact with its environment and even learn new
behaviors.
In this example, you can see one of the fundamental differences between functional organisms
and AI: AI can be trained to perform extremely well, but it requires very narrow focus.
IBM spent years and literally tens of thousands of engineering hours to create the AI that could beat Jeopardy champions - but that particular creation is still largely useless for
anything else. IBM is desperately attempting to monetize that investment through its Think
Build Grow program - think AWS for AI. I saw a demo - it was particularly interesting because
this AI program ingests some 3 million English language web articles; IBM showed its contents
via a very cool looking wrap around display room in its Think Build Grow promotion
campaign.
What was really amusing was a couple of things:
1) the fact that the data was already corrupt: this demo was about 2 months ago - and there
were spikes of "data" coming from Ecuador and the tip of South America. Ecuador doesn't speak
English. I don't even know if there are any English web or print publications there. But I'd
bet large sums of money that the (English) Twitter campaign being run on behalf of the coup
was responsible for this spike.
2) Among the top 30 topics was Donald Trump. Given the type of audience you would expect
for this subject, it was enormously embarrassing that Trump coverage was assessed as net
positive - so much so that the IBM representative dived into the data to ascertain why the AI
had a net positive rating (the program also does sentiment analysis). It turns out that a
couple of articles which were clearly extremely peripheral to Trump, but which did mention
his name, were the cause. The net positive rating was from this handful of articles even
though the relationship was very weak and there were far fewer net "positive" vs. negative
articles shown in the first couple passes of source articles (again, IBM's sentiment analysis
- not a human's).
I have other examples: SF is home to a host of self-driving testing initiatives. For months, Uber had a lot about 4 blocks from where I live, out of which they based their self-driving cars (they have since moved). The self-driving (sidewalk) delivery robots – I've seen them tested here as well.
Some examples of how they fail: I was riding a bus which was stopped at an intersection behind a Drive test vehicle at a red light (Drive is Nvidia's self-driving platform). This intersection is somewhat unusual: there are 5 entrances/exits to it, so the traffic light sequence and the driving action are definitely atypical.
The light turns green, the Drive car wants to turn immediately left (as opposed to 2nd
left, as opposed to straight or right). It accelerates into the intersection and starts
turning; literally halfway into the intersection, it slams on its brakes. The bus, which was
accelerating behind it in order to go straight, is forced to also slam on its brakes. There was no oncoming car: because of the complex left-turn setup, the street that the Drive car and the bus were on is the only one allowed to move when that light first turns green (after a pause of perhaps 30 seconds, the opposite "straight" street gets its turn).
Why did the Drive car slam on its brakes in the middle of the intersection? No way to know
for sure, but I would bet money that the sensors saw the cars waiting at the 2nd left street
and thought it was going the wrong way. Note this is just a few months ago.
There are many other examples of AI being fundamentally brittle: Google's first version of
human recognition via machine vision classified black people as gorillas:
Google Photos fail
A project at MIT inserted code into AI machine vision programs to show what these were
actually seeing when recognizing objects; it turns out that what the AIs were recognizing
were radically different from reality. For example, while the algo could recognize a
dumbbell, it turns out that the reference image that the algo used was a dumbbell plus an
arm. Because all of the training photos for a dumbbell included an arm...
This fundamental lack of basic concepts, a coherent worldview or any other type of rooting
in reality is why AI is also pathetically easy to fool. This research showed that the top of
the line machine vision for self-driving could be tricked into recognizing stop signs as speed limit signs: Confusing self driving cars
To be clear, fundamentally it doesn't matter for most applications if the AI is "close
enough". If a company can replace 90% of its expensive, older workers or first world, English
speaking workers with an AI - even if the AI is working only 75% of the time, it is still a
huge win. For example: I was told by a person selling chatbots to Sprint that 90% of Sprint's
customer inquiries were one of 10 questions...
And lastly: are robots/AI taking jobs? Certainly it is true anecdotally, but the overall
economic statistics aren't showing this. In particular, if AI was really taking jobs - then
we should be seeing productivity numbers increase more than in the past. But this isn't
happening:
Productivity for the past 30 years
Note in the graph that productivity was increasing much more up until 2010 - when it leveled
off.
Dean Baker has written about this extensively - it is absolutely clear that it is the outsourcing of manufacturing jobs that explains why US incomes have been stagnant for decades.
The world is filled with conformism and groupthink. Most people do not wish to think for
themselves. Thinking for oneself is dangerous, requires effort and often leads to rejection by
the herd of one's peers.
The profession of arms, the intelligence business, the civil service bureaucracy, the
wondrous world of groups like the League of Women Voters, Rotary Club as well as the empire of
the thinktanks are all rotten with this sickness, an illness which leads inevitably to
stereotyped and unrealistic thinking, thinking that does not reflect reality.
The worst locus of this mentally crippling phenomenon is the world of the academics. I have
served on a number of boards that awarded Ph.D and post doctoral grants. I was on the Fulbright
Fellowship federal board. I was on the HF Guggenheim program and executive boards for a long
time. Those are two examples of my exposure to the individual and collective academic
minds.
As a class of people I find them unimpressive. The credentialing exercise in acquiring a
doctorate is basically a nepotistic process of sucking up to elders and a crutch for ego
support as well as an entrance ticket for various hierarchies, among them the world of the
academy. The process of degree acquisition itself requires sponsorship by esteemed academics
who recommend candidates who do not stray very far from the corpus of known work in whichever
narrow field is involved. The endorsements from RESPECTED academics are often decisive in the
award of grants.
This process is continued throughout a career in academic research. PEER REVIEW is the
sine qua non for acceptance of a "paper," invitation to career making conferences, or
to the Holy of Holies, TENURE.
This life experience forms and creates CONFORMISTS, people who instinctively boot-lick their
fellows in a search for the "Good Doggy" moments that make up their lives. These people are for
sale. Their price may not be money, but they are still for sale. They want to be accepted as
members of their group. Dissent leads to expulsion or effective rejection from the group.
This mentality renders doubtful any assertion that a large group of academics supports any
stated conclusion. As a species academics will say or do anything to be included in their
caste.
This makes them inherently dangerous. They will support any party or parties, of any
political inclination if that group has the money, and the potential or actual power to
maintain the academics as a tribe. pl
That is the nature of tribes and humans are very tribal. At least most of them.
Fortunately, there are outliers. I was recently reading "Political Tribes", written by a couple who are both law professors, which examines this.
Take global warming (aka the rebranded climate change). Good luck getting grants to do any skeptical research. This highly complex subject, which posits human impact, is a perfect example of tribal bias.
My success in the private sector comes from consistently questioning what I wanted to be true, to prevent suboptimal design decisions.
I also instinctively dislike groups that have some idealized view of "What is to be
done?"
As Groucho said: "I refuse to join any club that would have me as a member"
The 'isms' have it: be it Nazism, Fascism, Communism, Totalitarianism, or Elitism, all demand conformity and adherence to groupthink. If one does not kowtow to whichever 'ism' is at play, those outside its groupthink are persecuted, ostracized, jailed, and executed, all because they defy its demands for conformity and allegiance.
One world, one religion, one government, one Borg: all lead down the same road to Orwell's 1984.
David Halberstam: The Best and the Brightest. (Reminder how the heck we got into Vietnam,
when the best and the brightest were serving as presidential advisors.)
Also good Halberstam re-read: The Powers that Be - when the conservative media controlled
the levers of power; not the uber-liberal one we experience today.
"... In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm. ..."
"... In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules ..."
"... I get tired of the purists who think that OO is the only possible answer. The world is not a nail. ..."
OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago. Ruby on Rails was the convention-over-configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that Java was/is. Every half-assed PHP toy project is kicking Java's ass on the web, and it's because WordPress gets the job done, fast, despite having a DB model that was built by non-programmers on crack.
Most critical processes are procedural, even today.
There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They
know OOP, so they think that every problem must be solved in an OOP way.
In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows
naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best
paradigm.
In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules.
For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What
"object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative,
static "generate answer" method makes at least as much sense.
There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO
is the only possible answer. The world is not a nail.
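As a minimal sketch of the contrast the commenter describes (invented names; nothing here is the commenter's actual natural-language system), both versions below do the same trivial lookup. The point is only that the static, imperative spelling is no less natural than the self-generating object:

    import java.util.Map;

    public class AnswerStyles {
        // OOP spelling: an "Answer" object that generates itself from the inquiry.
        static class Answer {
            private final String text;
            Answer(String inquiry, Map<String, String> facts) {
                this.text = facts.getOrDefault(inquiry, "I don't know.");
            }
            String text() { return text; }
        }

        // Imperative spelling: a static "generate answer" method doing the same work.
        static String generateAnswer(String inquiry, Map<String, String> facts) {
            return facts.getOrDefault(inquiry, "I don't know.");
        }

        public static void main(String[] args) {
            Map<String, String> facts = Map.of("capital of France", "Paris");
            System.out.println(new Answer("capital of France", facts).text());  // Paris
            System.out.println(generateAnswer("capital of France", facts));     // Paris
        }
    }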
Object-oriented programming generates a lot of what looks like work. Back in the days of
fanfold, there was a type of programmer who would only put five or ten lines of code on a
page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming
is like crack for these people: it lets you incorporate all this scaffolding right into your
source code. Something that a Lisp hacker might handle by pushing a symbol onto a list
becomes a whole file of classes and methods. So it is a good tool if you want to convince
yourself, or someone else, that you are doing a lot of work.
Eric Lippert observed a similar occupational hazard among developers. It's something he calls object happiness.
What I sometimes see when I interview people and review code is symptoms of a disease I call
Object Happiness. Object Happy people feel the need to apply principles of OO design to
small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual
abstract base classes -- writing programs where IFoos talk to IBars but there is only one
implementation of each interface! I suspect that early exposure to OO design principles
divorced from any practical context that motivates those principles leads to object
happiness. People come away as OO True Believers rather than OO pragmatists.
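A hedged caricature of the "object happiness" Lippert describes, with invented names (IGreeter, GreeterFactory, not from his post): an interface, a sole implementation, and a factory wrapping what is, in substance, one line of logic.

    // Object-happy version: an interface with exactly one implementation, plus a
    // factory, to wrap a single line of logic. Names are invented for illustration.
    interface IGreeter {
        String greet(String name);
    }

    class DefaultGreeter implements IGreeter {
        @Override
        public String greet(String name) { return "Hello, " + name + "!"; }
    }

    class GreeterFactory {
        static IGreeter create() { return new DefaultGreeter(); }
    }

    public class ObjectHappiness {
        public static void main(String[] args) {
            // All of the scaffolding above could have been a single println call.
            System.out.println(GreeterFactory.create().greet("World"));
        }
    }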
I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented programming is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity, and I tend to favor the approach that results in less code, not more.
Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page. Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in French than it is in English: POO.
That's exactly what I've imagined when I had to work on code that abused objects.
But POO code can have another, more constructive, meaning. This blog author argues that OOP pales in importance to POO: Programming fOr Others, that is.
The problem is that programmers are taught all about how to write OO code, and how doing so will improve the maintainability of their code. And by "taught", I don't just mean "taken a class or two". I mean: have it pounded into your head in school, spend years as a professional being mentored by senior OO "architects", and only then finally kind of understand how to use it properly, some of the time. Most engineers wouldn't consider using a non-OO language, even if it had amazing features. The hype is that major.
So what, then, about all that code programmers write before their 10 years OO
apprenticeship is complete? Is it just doomed to suck? Of course not, as long as they apply
other techniques than OO. These techniques are out there but aren't as widely discussed.
The improvement [I propose] has little to do with any specific programming technique. It's
more a matter of empathy; in this case, empathy for the programmer who might have to use your
code. The author of this code actually thought through what kinds of mistakes another
programmer might make, and strove to make the computer tell the programmer what they did
wrong.
In my experience the best code, like the best user interfaces, seems to magically
anticipate what you want or need to do next. Yet it's discussed infrequently relative to OO.
Maybe what's missing is a buzzword. So let's make one up, Programming fOr Others, or POO for
short.
The principles of object oriented programming are far more important than mindlessly,
robotically instantiating objects everywhere:
Stop worrying so much about the objects. Concentrate on satisfying the principles of object orientation rather than object-izing everything. And most of all, consider the poor sap who will have to read and support this code after you're done with it. That's why POO trumps OOP: programming as if people mattered will always be a more effective strategy than satisfying the architecture astronauts.
Daniel Korenblum, works at Bayes Impact (updated May 25, 2015): There are many reasons why non-OOP languages and paradigms/practices are on the rise, contributing to the relative decline of OOP.
are on the rise, contributing to the relative decline of OOP.
First off, there are a few things about OOP that many people don't like, which makes them
interested in learning and using other approaches. Below are some references from the OOP wiki
article:
One of the comments therein linked a few other good wikipedia articles which also provide
relevant discussion on increasingly-popular alternatives to OOP:
Modularity and design-by-contract are better implemented by module systems (Standard ML).
Personally, I sometimes think that OOP is a bit like an antique car. Sure, it has a bigger
engine and fins and lots of chrome etc., it's fun to drive around, and it does look pretty. It
is good for some applications, all kidding aside. The real question is not whether it's useful
or not, but for how many projects?
When I'm done building an OOP application, it's like a large and elaborate structure.
Changing the way objects are connected and organized can be hard, and the design choices of the
past tend to become "frozen" or locked in place for all future times. Is this the best choice
for every application? Probably not.
If you want to drive 500-5000 miles a week in a car that you can fix yourself without
special ordering any parts, it's probably better to go with a Honda or something more easily
adaptable than an antique vehicle-with-fins.
Finally, the best example is the growth of JavaScript as a language (officially called
EcmaScript now?). Although JavaScript/EcmaScript (JS/ES) is not a pure functional programming
language, it is much more "functional" than "OOP" in its design. JS/ES was the first mainstream
language to promote the use of functional programming concepts such as higher-order functions,
currying, and monads.
The recent growth of the JS/ES open-source community has not only been impressive in its
extent but also unexpected from the standpoint of many established programmers. This is partly
evidenced by the overwhelming number of active repositories on Github using
JavaScript/EcmaScript:
Because JS/ES treats both functions and objects as structs/hashes, it encourages us to blur
the line dividing them in our minds. This is a division that many other languages impose -
"there are functions and there are objects/variables, and they are different".
This seemingly minor (and often confusing) design choice enables a lot of flexibility and power. In part, this seemingly tiny detail has enabled JS/ES to achieve its meteoric growth between 2005 and 2015.
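The snippet below is written in Java only to keep this page's examples in one language; it is a hedged illustration of the "higher-order functions and currying" the answer mentions, showing how those ideas look once they reach a mainstream, originally OOP-centric language (it says nothing about JS/ES semantics themselves).

    import java.util.List;
    import java.util.function.Function;

    public class HigherOrder {
        // Currying: two-argument addition expressed as a function returning a function.
        static final Function<Integer, Function<Integer, Integer>> add = a -> b -> a + b;

        public static void main(String[] args) {
            Function<Integer, Integer> addTen = add.apply(10);   // partial application
            System.out.println(addTen.apply(5));                 // 15

            // Higher-order use: a function passed around as an ordinary value.
            List<Integer> doubled = List.of(1, 2, 3).stream()
                    .map(n -> n * 2)
                    .toList();                                   // Stream.toList() needs Java 16+
            System.out.println(doubled);                         // [2, 4, 6]
        }
    }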
This partially explains the rise of JS/ES and the corresponding relative decline of OOP. OOP
had become a "standard" or "fixed" way of doing things for a while, and there will probably
always be a time and place for OOP. But as programmers we should avoid getting too stuck in one
way of thinking / doing things, because different applications may require different
approaches.
Above and beyond the OOP-vs-non-OOP debate, one of our main goals as engineers should be
custom-tailoring our designs by skillfully choosing the most appropriate programming
paradigm(s) for each distinct type of application, in order to maximize the "bang for the buck"
that our software provides.
Although this is something most engineers can agree on, we still have a long way to go until
we reach some sort of consensus about how best to teach and hone these skills. This is not only
a challenge for us as programmers today, but also a huge opportunity for the next generation of
educators to create better guidelines and best practices than the current OOP-centric
pedagogical system.
Here are a couple of good books that elaborate on these ideas and techniques in more detail. They are free to read online:
Mike MacHenry, software engineer, improv comedian, maker (answered Feb 14, 2015): Because the phrase itself was overhyped to an extraordinary degree. Then, as is common with overhyped things, many other things took on that phrase as a name. Then people got confused and stopped calling what they are doing OOP.
Yes, I think OOP (the phrase) is on the decline because people are becoming more educated about the topic.
It's like artificial intelligence, now that I think about it. There aren't many people these days who say they do AI to anyone but laymen. They would say they do machine learning or natural language processing or something else. These are fields that the vastly overhyped and really nebulous term AI used to describe, but then AI (the term) experienced a sharp decline while these very concrete fields continued to flourish.
There is nothing inherently wrong with some of the functionality it offers; it's the way OOP is abused as a substitute for basic good programming practices.
I was helping interns - students from a local CC - deal with idiotic assignments like
making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES.
Seriously, WTF?
A room full of career programmers could not even figure out how you were supposed to do
that, much less why.
What was worse was a lack of understanding of basic programming skills or even the use of variables, as the kids were being taught that EVERY program was to be assembled solely by sticking together bits of libraries.
There was no coding, just hunting for snippets of preexisting code to glue together. Zero idea that they could add their own, much less how to do it. OOP isn't the problem; it's the idea that it replaces basic programming skills and best practice.
That, and the obsession with absofrackinglutely EVERYTHING just having to be a formally declared object, including the whole program being an object with a run() method.
Some things actually cry out to be objects, some not so much. Generally, I find that my
most readable and maintainable code turns out to be a procedural program that manipulates
objects.
Even there, some things just naturally want to be a struct or just an array of values.
The same is true of most ingenious ideas in programming. It's one thing if code is
demonstrating a particular idea, but production code is supposed to be there to do work, not
grind an academic ax.
For example, slavish adherence to "patterns". They're quite useful for thinking about code
and talking about code, but they shouldn't be the end of the discussion. They work better as
a starting point. Some programs seem to want patterns to be mixed and matched.
In reality those problems are just cargo cult programming one level higher.
I suspect a lot of that is because too many developers barely grasp programming and never
learned to go beyond the patterns they were explicitly taught.
When all you have is a hammer, the whole world looks like a nail.
Inheritance, while not "inherently" bad, is often the wrong solution. See: Why
extends is evil [javaworld.com]
Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny
little anecdote in Cocoa
Programming for Mac OS X [google.com]:
"Once upon a time, there was a company called Taligent. Taligent was created by IBM and
Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the
peak of its mindshare, I met one of its engineers at a trade show.
I asked him to create a simple application for me: A window would appear with a button,
and when the button was clicked, the words 'Hello, World!' would appear in a text field. The
engineer created a project and started subclassing madly: subclassing the window and the
button and the event handler.
Then he started generating code: dozens of lines to get the button and the text field onto
the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew
that the company was doomed. A couple of years later, Taligent quietly closed its doors
forever."
Almost every programming methodology can be abused by people who really don't know how to
program well, or who don't want to. They'll happily create frameworks, implement new
development processes, and chart tons of metrics, all while avoiding the work of getting the
job done. In some cases the person who writes the most code is the same one who gets the
least amount of useful work done.
So, OOP can be misused the same way. Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. I.e., files in Unix are essentially an object-oriented system. It's just data encapsulation and separating work into manageable modules. That's how it was before anyone ever came up with the dumb name "full-stack developer".
Posted by EditorDavid on Monday July 22, 2019 @12:04AM from the OOPs dept. (medium.com)
Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling object-oriented programming "a trillion dollar disaster."
Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming... Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns. OOP makes common development practices, like refactoring and testing, needlessly hard...
As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the
rise of OOP with some curiosity. I think there's a general consensus that abstraction and
re-usability are good things - they're the reason subroutines exist - the issue is whether
they are ends in themselves.
I struggle with the whole concept of "design patterns". There are clearly common themes in
software, but there seems to be a great deal of pressure these days to make your
implementation fit some pre-defined template rather than thinking about the application's
specific needs for state and concurrency. I have seen some rather eccentric consequences of
"patternism".
Correctly written, OOP code allows you to encapsulate just the logic you need for a
specific task and to make that specific task available in a wide variety of contexts by
judicious use of templating and virtual functions that obviate the need for
"refactoring".
Badly written, OOP code can have as many dangerous side effects and as much opacity as any
other kind of code. However, I think the key factor is not the choice of programming
paradigm, but the design process.
You need to think first about what your code is intended to do and in what circumstances
it might be reused. In the context of a larger project, it means identifying commonalities
and deciding how best to implement them once. You need to document that design and review it
with other interested parties. You need to document the code with clear information about its
valid and invalid use. If you've done that, testing should not be a problem.
Some people seem to believe that OOP removes the need for some of that design and
documentation. It doesn't and indeed code that you intend to be reused needs *more* design
and documentation than the glue that binds it together in any one specific use case. I'm
still a firm believer that coding begins with a pencil, not with a keyboard. That's
particularly true if you intend to design abstract interfaces that will serve many purposes.
In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the
costs - and that usually means you not only know your code will be genuinely reusable but
will also genuinely be reused.
I struggle with the whole concept of "design patterns".
Because design patterns are stupid.
A reasonable programmer can understand reasonable code so long as the data is documented
even when the code isn't documented, but will struggle immensely if it were the other way
around.
Bad programmers create objects for objects' sake, and because of that they have to follow so-called "design patterns", because no amount of code commenting makes the code easily understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't make the code easier to read, just easier to write.
Those OOP fanatics, if they do "document" their code, add comments like "// increment the index", which is useless shit.
The big win of OOP is only in the encapsulation of the data with the code. Great code treats objects like data structures with attached subroutines, not as "objects", and documents the fuck out of the contained data, while more or less letting the code document itself.
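Read charitably, the comment above is arguing for something like the following hedged sketch (hypothetical domain, invented names): the fields are documented like a struct's, and the one attached subroutine is small enough to document itself.

    /** Plain data with one attached subroutine; the data carries the documentation. */
    public class Invoice {
        /** Net amount in cents; never negative. */
        public final long netCents;
        /** Tax rate as a fraction, e.g. 0.20 for 20%. */
        public final double taxRate;

        public Invoice(long netCents, double taxRate) {
            this.netCents = netCents;
            this.taxRate = taxRate;
        }

        /** Gross amount in cents, rounded to the nearest cent. */
        public long grossCents() {
            return Math.round(netCents * (1.0 + taxRate));
        }

        public static void main(String[] args) {
            System.out.println(new Invoice(10_000, 0.20).grossCents()); // 12000
        }
    }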
680,303 lines of Java code in the main project in my system.
Probably would've been more like 100,000 lines if you had used a language whose ecosystem
doesn't goad people into writing so many superfluous layers of indirection, abstraction and
boilerplate.
Posted on 2017-12-18 by esr
In recent discussion on this blog of the GCC repository transition and reposurgeon, I observed "If I'd been restricted to C, forget it – reposurgeon wouldn't have happened at all."
I should be more specific about this, since I think the underlying problem is general to a
great deal more that the implementation of reposurgeon. It ties back to a lot of recent
discussion here of C, Python, Go, and the transition to a post-C world that I think I see
happening in systems programming.
I shall start by urging that you must take me seriously when I speak of C's limitations.
I've been programming in C for 35 years. Some of my oldest C code is still in wide
production use. Speaking from that experience, I say there are some things only a damn fool
tries to do in C, or in any other language without automatic memory management (AMM, for the
rest of this article).
This is another angle on Greenspun's Law: "Any sufficiently complicated C or Fortran program
contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common
Lisp." Anyone who's been in the trenches long enough gets that Greenspun's real point is not
about C or Fortran or Common Lisp. His maxim could be generalized in a Henry-Spencer-does-Santayana style as this:
"At any sufficient scale, those who do not have automatic memory management in their
language are condemned to reinvent it, poorly."
In other words, there's a complexity threshold above which lack of AMM becomes intolerable.
Lack of it either makes expressive programming in your application domain impossible or sends
your defect rate skyrocketing, or both. Usually both.
When you hit that point in a language like C (or C++), your way out is usually to write an
ad-hoc layer or a bunch of semi-disconnected little facilities that implement parts of an AMM
layer, poorly. Hello, Greenspun's Law!
It's not particularly the line count of your source code driving this, but rather the
complexity of the data structures it uses internally; I'll call this its "greenspunity". Large
programs that process data in simple, linear, straight-through ways may evade needing an ad-hoc
AMM layer. Smaller ones with gnarlier data management (higher greenspunity) won't. Anything
that has to do – for example – graph theory is doomed to need one (why, hello,
there, reposurgeon!)
There's a trap waiting here. As the greenspunity rises, you are likely to find that more and
more of your effort and defect chasing is related to the AMM layer, and proportionally less
goes to the application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated by
issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of
even low greenspunity.
Sometimes you really have no alternative but to be stuck with an ad-hoc AMM layer. Usually
you get pinned to this situation because real AMM would impose latency costs you can't afford.
The major case of this is operating-system kernels. I could say a lot more about the costs and
contortions this forces you to assume, and perhaps I will in a future post, but it's out of
scope for this one.
On the other hand, reposurgeon is representative of a very large class of "systems" programs
that don't have these tight latency constraints. Before I get to back to the implications of
not being latency constrained, one last thing – the most important thing – about
escalating AMM-layer complexity.
At high enough levels of greenspunity, the effort required to build and maintain your ad-hoc
AMM layer becomes a black hole. You can't actually make any progress on the application domain
at all – when you try it's like being nibbled to death by ducks.
Now consider this prospectively, from the point of view of someone like me who has architect
skill. A lot of that skill is being pretty good at visualizing the data flows and structures
– and thus estimating the greenspunity – implied by a problem domain. Before you've
written any code, that is.
If you see the world that way, possible projects will be divided into "Yes, can be done in a
language without AMM." versus "Nope. Nope. Nope. Not a damn fool, it's a black hole, ain't
nohow going there without AMM."
This is why I said that if I were restricted to C, reposurgeon would never have happened at
all. I wasn't being hyperbolic – that evaluation comes from a cool and exact sense of how
far reposurgeon's problem domain floats above the greenspunity level where an ad-hoc AMM layer
becomes a black hole. I shudder just thinking about it.
Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.
But, though software is sometimes written by people who are exceptionally good at managing that
kind of hair, it then generally has to be maintained by people who are less so.
The really smart people in my audience have already figured out that this is why Ken
Thompson, the co-designer of C, put AMM in Go, in spite of the latency issues.
Ken understands something large and simple. Software expands, not just in line count but in
greenspunity, to meet hardware capacity and user demand. In languages like C and C++ we are
approaching a point of singularity at which typical – not just worst-case –
greenspunity is so high that the ad-hoc AMM becomes a black hole, or at best a trap
nigh-indistinguishable from one.
Thus, Go. It didn't have to be Go; I'm not actually being a partisan for that language here.
It could have been (say) Ocaml, or any of half a dozen other languages I can think of. The
point is the combination of AMM with compiled-code speed is ceasing to be a luxury option;
increasingly it will be baseline for getting most kinds of systems work done at all.
Sociologically, this implies an interesting split. Historically the boundary between systems
work under hard latency constraints and systems work without it has been blurry and permeable.
People on both sides of it coded in C and skillsets were similar. People like me who mostly do
out-of-kernel systems work but have code in several different kernels were, if not common, at
least not odd outliers.
Increasingly, I think, this will cease being true. Out-of-kernel work will move to Go, or
languages in its class. C – or non-AMM languages intended as C successors, like Rust
– will keep kernels and real-time firmware, at least for the foreseeable future.
Skillsets will diverge.
It'll be a more fragmented systems-programming world. Oh well; one does what one must, and
the tide of rising software complexity is not about to be turned.
144 thoughts on "C, Python, Go, and the Generalized Greenspun Law"
David Collier-Brown on 2017-12-18 at 17:38:05 said: Andrew Forber quasi-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory.
esr on 2017-12-18 at 17:40:45 said: >Andrew Forber quasi-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory.
Oh, neat. I think that's a closer approximation to the most general statement than Greenspun's, actually.
Alex K. on 2017-12-20 at 09:50:37 said:
For today, maybe -- but the first time I had Greenspun's Tenth quoted at me was in
the late '90s. [I know this was around/just before the first C++ standard, maybe
contrasting it to this new upstart Java thing?] This was definitely during the era
where big computers still did your serious work, and pretty much all of it was in
either C, COBOL, or FORTRAN. [Yeah, yeah, I know– COBOL is all caps for being
an acronym, while Fortran ain't–but since I'm talking about an earlier epoch
of computing, I'm going to use the conventions of that era.]
Now the Object-Oriented paradigm has really mitigated this to an enormous
degree, but I seem to recall at that time the argument was that multimethod
dispatch (a benefit so great you happily accept the flaw of memory
management) was the Killer Feature of LISP.
Given the way the other advantage I would have given Lisp over the past two
decades–anonymous functions [lambdas] and treating them as first-class
values–are creeping into a more mainstream usage, I think automated memory
management is the last visible "Lispy" feature people will associate with
Greenspun. [What, are you now visualizing lisp macros? Perish the
thought–anytime I see a foot cannon that big, I stop calling it a feature ]
Reply
↓
Mycroft Jones on 2017-12-18 at 17:41:04 said: After
looking at the Linear Lisp paper, I think that is where Lutz Mueller got One Reference Only
memory management from. For automatic memory management, I'm a big fan of ORO. Not sure how
to apply it to a statically typed language though. Wish it was available for Go. ORO is
extremely predictable and repeatable, not stuttery. Reply ↓
lliamander on 2017-12-18 at 19:28:04 said: >
Not sure how to apply it to a statically typed language though.
Jeff Read on 2017-12-19 at 00:38:57 said: If
Lutz was inspired by Linear Lisp, he didn't cite it. Actually ORO is more like
region-based memory allocation with a single region: values which leave the current
scope are copied which can be slow if you're passing large lists or vectors
around.
Linear Lisp is something quite a bit different, and allows for arbitrary data
structures with arbitrarily deep linking within, so long as there are no cycles in the
data structures. You can even pass references into and out of functions if you like;
what you can't do is alias them. As for statically typed programming languages well,
there are linear
type systems , which as lliamander mentioned are implemented in Clean.
Newlisp in general is smack in the middle between Rust and Urbit in terms of
cultishness of its community, and that scares me right off it. That and it doesn't
really bring anything to the table that couldn't be had by "old" lisps (and Lutz
frequently doubles down on mistakes in the design that had been discovered and
corrected decades ago by "old" Lisp implementers). Reply ↓
Gary E. Miller on 2017-12-18 at 18:02:10 said: For a
long time I've been holding out hope for a 'standard' garbage collector library for C. But
not gonna hold my breath. One probable reason Ken Thompson had to invent Go is to get around
the tremendous difficulty of getting new stuff into C. Reply ↓
esr on
2017-12-18 at
18:40:53 said: >For a long time I've been holding out hope for a 'standard'
garbage collector library for C. But not gonna hold my breath.
Yeah, good idea not to. People as smart/skilled as you and me have been poking at
this problem since the 1980s and it's pretty easy to show that you can't do better than
Boehm–Demers–Weiser, which has limitations that make it impractical. Sigh.
Reply ↓
John
Cowan on 2018-04-15 at 00:11:56 said:
What's impractical about it? I replaced the native GC in the standard
implementation of the Joy interpreter with BDW, and it worked very well. Reply
↓
esr on 2018-04-15 at 08:30:12
said: >What's impractical about it? I replaced the native GC in the standard
implementation of the Joy interpreter with BDW, and it worked very well.
GCing data on the stack is a crapshoot. Pointers can get mistaken for data
and vice-versa. Reply
↓
Konstantin Khomoutov on 2017-12-20 at 06:30:05 said: I
think it's not about C. Let me cite a little bit from "The Go Programming Language"
(A. Donovan, B. Kernighan) --
in the section about Go influences, it states:
"Rob Pike and others began to experiment with CSP implementations as actual
languages. The first was called Squeak which provided a language with statically
created channels. This was followed by Newsqueak, which offered C-like statement and
expression syntax and Pascal-like type notation. It was a purely functional language
with garbage collection, again aimed at managing keyboard, mouse, and window events.
Channels became first-class values, dynamically created and storable in variables.
The Plan 9 operating system carried these ideas forward in a language called Alef.
Alef tried to make Newsqueak a viable system programming language, but its omission of
garbage collection made concurrency too painful."
So my takeaway was that AMM was key to get proper concurrency.
Before Go, I dabbled with Erlang (which I enjoy, too), and I'd say there the AMM is
also a key to have concurrency made easy.
(Update: the ellipses I put into the citation were eaten by the engine and won't
appear when I tried to re-edit my comment; sorry.) Reply ↓
tz on 2017-12-18 at 18:29:20 said: I think
this is the key insight.
There are programs with zero MM.
There are programs with orderly MM, e.g. unzip does mallocs and frees in a stacklike
formation: malloc a, b, c; free c, b, a (as of 1.1.4). This is laminar, not chaotic, flow.
Then there is the complex, nonlinear, turbulent flow, chaos. You can't do that in basic
C, you need AMM. But it is easier in a language that includes it (and does it well).
Virtual Memory is related to AMM – too often the memory leaks were hidden (think
of your O(n**2) for small values of n) – small leaks that weren't visible under
ordinary circumstances.
Still, you aren't going to get AMM on the current Arduino variants. At least not
easily.
That is where the line is, how much resources. Because you require a medium to large OS,
or the equivalent resources to do AMM.
Yet this is similar to using FPGAs, or GPUs for blockchain coin mining instead of the
CPU. Sometimes you have to go big. Your Mini Cooper might be great most of the time, but
sometimes you need a big diesel pickup. I think a Mini would fit in the bed of my F250.
As tasks get bigger they need bigger machines. Reply ↓
Zygo on 2017-12-18 at 18:31:34 said: > Of
course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.
I was about to say something about writing an AMM layer before breakfast on the way to
writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that.
Reply ↓
esr on
2017-12-18 at
18:34:59 said: >I was about to say something about writing an AMM layer before
breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but
I guess you covered that.
Well, yeah. I have days like that occasionally, but it would be unwise to plan a
project based on the assumption that I will. And deeply foolish to assume that
J. Random Programmer will. Reply ↓
tz on 2017-12-18 at 18:32:37 said: C
displaced assembler because it had the speed and flexibility while being portable.
Go, or something like it, will displace C where they can get just the right features,
including AMM/GC, into the standard library.
Maybe we need Garbage Collecting C. GCC?
One problem is you can't do the pointer aliasing if you have a GC (unless you also do
some auxiliary bits which would be hard to maintain). void *x = y; might be decodable but
there are deeper and more complex things a compiler can't detect. If the compiler gets it
wrong, you get a memory leak, or have to constrain the language to prevent things which
manipulate pointers when that is required or clearer. Reply ↓
Zygo on 2017-12-18 at 20:52:40 said: C++11
shared_ptr does handle the aliasing case. Each pointer object has two fields, one for
the thing being pointed to, and one for the thing's containing object (or its
associated GC metadata). A pointer alias assignment alters the former during the
assignment and copies the latter verbatim. The syntax is (as far as a C programmer
knows, after a few typedefs) identical to C.
The trouble with applying that idea to C is that the standard pointers don't have
space or time for the second field, and heap management isn't standardized at all
(free() is provided, but programs are not required to use it or any other function
exclusively for this purpose). Change either of those two things and the resulting
language becomes very different from C. Reply ↓
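A minimal C++ sketch of the two-field behavior Zygo describes, using std::shared_ptr's aliasing constructor; the Container type and its values are illustrative only, not something from the thread:
#include <iostream>
#include <memory>

struct Container {
    int member = 42;
};

int main() {
    auto owner = std::make_shared<Container>();
    // Aliasing constructor: shares ownership of the Container (the "second field"),
    // while the stored pointer is &owner->member (the "first field").
    std::shared_ptr<int> alias(owner, &owner->member);
    owner.reset();                 // the Container stays alive; alias still owns it
    std::cout << *alias << "\n";   // prints 42; freed when alias goes away
}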
IGnatius T
Foobar on 2017-12-18 at 18:39:28 said: Eric, I
love you, you're a pepper, but you have a bad habit of painting a portrait of J. Random
Hacker that is actually a portrait of Eric S. Raymond. The world is getting along with C
just fine. 95% of the use cases you describe for needing garbage collection are eliminated
with the simple addition of a string class which nearly everyone has in their toolkit.
Reply ↓
esr on
2017-12-18 at
18:55:46 said: >The world is getting along with C just fine. 95% of the use
cases you describe for needing garbage collection are eliminated with the simple
addition of a string class which nearly everyone has in their toolkit.
Even if you're right, the escalation of complexity means that what I'm facing now,
J. Random Hacker will face in a couple of years. Yes, not everybody writes reposurgeon
but a string class won't suffice for much longer even if it does today. Reply
↓
I don't solve complex problems.
I simplify complex problems and solve them.
Complexity does escalate, at least in the sense that we could cross oceans a
few centuries ago, and can go to the planets and beyond today.
We shouldn't use a rocket ship to get groceries from the local market.
J Random H-1B will face some easily decomposed apparently complex problem and
write a pile of spaghetti.
The true nature of a hacker is not so much being able to handle the deepest
and most complex situations, but being able to recognize which situations are truly
complex, and preferring to work hard at simplifying and reducing complexity
rather than writing something to handle the complexity. Dealing with a slain
dragon's corpse is easier than one that is live, annoyed, and immolating anything
within a few hundred yards. Some are capable of handling the latter. The wise
knight prefers to reduce the problem to the former. Reply
↓
William O. B'Livion on 2017-12-20 at 02:02:40
said: > J Random H-1B will face some easily decomposed
> apparently complex problem and write a pile of spaghetti.
J Random H-1B will do it with Informatica and Java. Reply
↓
One of the epic fails of C++ is it being sold as C but where anyone could program
because of all the safeties. Instead it created bloatware and the very memory leaks because
the lesser programmers didn't KNOW (grok, understand) what they were doing. It was all
"automatic".
This is the opportunity and danger of AMM/GC. It is a tool, and one with hot areas and
sharp edges. Wendy (formerly Walter) Carlos had a law that said "Whatever parameter you can
control, you must control". Having a really good AMM/GC requires you to respect what it can
and cannot do. OK, form a huge linked list – big enough to spill into VM. Won't it just handle
everything? No! You have to think about reference counts, at least in the back of your mind. It
simplifies the problem but doesn't eliminate it. It turns the black hole into a pulsar, but
you still can be hit.
Many will gloss over and either superficially learn (but can't apply) or ignore the "how
to use automatic memory management" in their CS course. Like they didn't bother with
pointers, recursion, or multithreading subtleties. Reply ↓
lliamander on 2017-12-18 at 19:36:35 said: I would
say that there is a parallel between concurrency models and memory management approaches.
Beyond a certain level of complexity, it's simply infeasible for J. Random Hacker to
implement a locks-based solution just as it is infeasible for Mr. Hacker to write a
solution with manual memory management.
My worry is that by allowing the unsafe sharing of mutable state between goroutines, Go
will never be able to achieve the per-process (i.e. language-level process, not OS-level)
GC that would allow for the really low latencies necessary for an AMM language to move closer
into the kernel space. But certainly insofar as many "systems" level applications don't
require extremely low latencies, Go will probably be a viable solution going forward. Reply
↓
Jeff Read on 2017-12-18 at 20:14:18 said: Putting
aside the hard deadlines found in real-time systems programming, it has been empirically
determined that a GC'd program requires five times as much memory as the
equivalent program with explicit memory management. Applications which are both CPU- and
RAM-intensive, where you need to have your performance cake and eat it in as little memory
as possible, are thus severely constrained in terms of viable languages they could be
implemented in. And by "severely constrained" I mean you get your choice of C++ or Rust.
(C, Pascal, and Ada are on the table, but none offer quite the same metaprogramming
flexibility as those two.)
I think your problems with reposurgeon stem from the fact that you're just running up
against the hard upper bound on the vector sum of CPU and RAM efficiency that a dynamic
language like Python (even sped up with PyPy) can feasibly deliver on a hardware
configuration you can order from Amazon. For applications like that, you need to forgo GC
entirely and rely on smart pointers, automatic reference counting, value semantics, and
RAII. Reply ↓
esr on
2017-12-18 at
20:27:20 said: > For applications like that, you need to forgo GC entirely and
rely on smart pointers, automatic reference counting, value semantics, and RAII.
How many times do I have to repeat "reposurgeon would never have been written
under that constraint" before somebody who claims LISP experience gets it? Reply
↓
Jeff Read on 2017-12-18 at 20:48:24 said:
You mentioned that reposurgeon wouldn't have been written under the constraints of
C. But C++ is not C, and has an entirely different set of constraints. In practice,
it's not that far off from Lisp, especially if you avail yourself of those
wonderful features in C++1x. C++ programmers talk about "zero-cost abstractions"
for a reason.
Semantically, programming in a GC'd language and programming in a language that
uses smart pointers and RAII are very similar: you create the objects you need, and
they are automatically disposed of when no longer needed. But instead of delegating
to a GC which cleans them up whenever, both you and the compiler have compile-time
knowledge of when those cleanups will take place, allowing you finer-grained
control over how memory -- or any other resource -- is used.
Oh, that's another thing: GC only has something to say about memory --
not file handles, sockets, or any other resource. In C++, with appropriate types
value semantics can be made to apply to those too and they will immediately be
destructed after their last use. There is no special with construct in
C++; you simply construct the objects you need and they're destructed when they go
out of scope.
This is how the big boys do systems programming. Again, Go has barely
displaced C++ at all inside Google despite being intended for just that
purpose. Their entire critical path in search is still C++ code. And it always will
be until Rust gains traction.
As for my Lisp experience, I know enough to know that Lisp has utterly
failed and this is one of the major reasons why. It's not even a decent AI
language, because the scruffies won, AI is basically large-scale statistics, and
most practitioners these days use C++. Reply
↓
esr on 2017-12-18 at 20:54:08
said: >C++ is not C, and has an entirely different set of constraints. In
practice, it's not that far off from Lisp,
Oh, bullshit. I think you're just trolling, now.
I've been a C++ programmer and know better than this.
But don't argue with me. Argue with Ken Thompson, who designed Go because
he knows better than this. Reply
↓
Anthony Williams on
2017-12-19 at 06:02:03
said: Modern C++ is a long way from C++ when it was first standardized in
1998. You should *never* be manually managing memory in modern C++. You
want a dynamically sized array? Use std::vector. You want an ad-hoc graph?
Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free will fail code
review.
Destructors and the RAII idiom mean that this covers *any* resource, not
just memory.
See the C++ Core Guidelines on resource and memory management: http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource
Reply ↓
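A minimal sketch of the RAII point above, assuming nothing beyond the standard library: the vector, the strings, and the file handle are all released when they go out of scope, with no explicit cleanup call and no GC. The file name is made up for illustration.
#include <fstream>
#include <string>
#include <vector>

std::vector<std::string> read_lines(const std::string& path) {
    std::ifstream in(path);          // file handle acquired here (RAII)
    std::vector<std::string> lines;  // dynamically sized array, no new/delete
    for (std::string line; std::getline(in, line); )
        lines.push_back(std::move(line));
    return lines;                    // `in` is closed automatically here
}

int main() {
    auto lines = read_lines("example.txt");   // hypothetical input file
    // `lines` and all its strings are freed automatically at end of scope.
}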
esr on 2017-12-19 at
07:53:58 said: >Modern C++ is a long way from C++ when it was
first standardized in 1998.
That's correct. Modern C++ is a disaster area of compounded
complexity and fragile kludges piled on in a failed attempt to fix
leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at
least it was drastically simpler. Clue: complexification when you don't
even fix the problems is bad .
My experience dates from 2009 and included Boost – I was a
senior dev on Battle For Wesnoth. Don't try to tell me I don't know
what "modern C++" is like. Reply
↓
Anthony Williams on
2017-12-19 at
08:17:58 said: > My experience dates from 2009 and included
Boost – I was a senior dev on Battle For Wesnoth. Don't try
to tell me I don't know what "modern C++" is like.
C++ in 2009 with boost was C++ from 1998 with a few extra
libraries. I mean that quite literally -- the standard was
unchanged apart from minor fixes in 2003.
C++ has changed a lot since then. There have been 3 standards
issued, in 2011, 2014, and just now in 2017. Between them, there is
a huge list of changes to the language and the standard library,
and these are readily available -- both clang and gcc have kept
up-to-date with the changes, and even MSVC isn't far behind. Even
more changes are coming with C++20.
So, with all due respect, C++ from 2009 is not "modern C++",
though there certainly were parts of boost that were leaning that
way.
esr on 2017-12-19 at
08:37:11 said: >So, with all due respect, C++ from 2009
is not "modern C++", though there certainly were parts of boost
that were leaning that way.
But the foundational abstractions are still leaky. So when
you tell me "it's all better now", I don't believe you. I just
plain do not.
I've been hearing this soothing song ever since around 1989.
"Trust us, it's all fixed." Then I look at the "fixes" and
they're horrifying monstrosities like templates – all the
dangers of preprocessor macros and a whole new class of
Turing-complete nightmares, too! In thirty years I'm certain
I'll be hearing that C++2047 solves all the problems this
time for sure , and I won't believe a word of it then,
either.
Reply ↓
If you would elaborate on this, I would be grateful.
What are the problematic leaky abstractions you are
concerned about?
Reply ↓
esr on 2017-12-19
at 09:26:24 said: >If you would elaborate on
this, I would be grateful. What are the problematic
leaky abstractions you are concerned about?
Are array accesses bounds-checked? Don't yammer
about iterators; what happens if I say foo[3] and foo
is dimension 2? Never mind, I know the answer.
Are bare, untyped pointers still in the language?
Never mind, I know the answer.
Can I get a core dump from code that the compiler
has statically checked and contains no casts? Never
mind, I know the answer.
Yes, C has these problems too. But it doesn't
pretend not to, and in C I'm never afflicted by
masochistic cultists denying that they're
problems.
> Are array accesses bounds-checked? Don't yammer
about iterators; what happens if I say foo[3] and foo
is dimension 2? Never mind, I know the answer.
You are right, bare arrays are not bounds-checked,
but std::array provides an at() member function, so
arr.at(3) will throw if the array is too small.
Also, ranged-for loops can avoid the need for
explicit indexing lots of the time anyway.
> Are bare, untyped pointers still in the
language? Never mind, I know the answer.
Yes, void* is still in the language. You need to
cast it to use it, which is something that is easy to
spot in a code review.
> Can I get a core dump from code that the
compiler has statically checked and contains no casts?
Never mind, I know the answer.
Probably. Is it possible to write code in any
language that dies horribly in an unintended
fashion?
> Yes, C has these problems too. But it doesn't
pretend not to, and in C I'm never afflicted by
masochistic cultists denying that they're problems.
Did I say C++ was perfect? This blog post was about
the problems inherent in the lack of automatic memory
management in C and C++, and thus why you wouldn't have
written reposurgeon if that's all you had. My point is
that it is easy to write C++ in a way that doesn't
suffer from those problems.
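A minimal sketch of the bounds-checking answer above: std::array::at() throws std::out_of_range instead of silently reading past the end, and a ranged-for loop removes the index entirely. The array contents are illustrative.
#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<int, 2> foo{10, 20};
    try {
        std::cout << foo.at(3) << "\n";   // index 3 of a 2-element array: throws
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << "\n";
    }
    for (int x : foo)                      // no explicit index to get wrong
        std::cout << x << "\n";
}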
esr on 2017-12-19
at 10:10:11 said: > My point is that it is easy
to write C++ in a way that doesn't suffer from those
problems.
No, it is not. The error statistics of large C++
programs refute you.
My personal experience on Battle for Wesnoth refutes
you.
The persistent self-deception I hear from C++
advocates on this score does nothing to endear the
language to me.
Ian Bruene on 2017-12-19
at 11:05:22 said: So what I am hearing in this is:
"Use these new standards built on top of the language,
and make sure every single one of your dependencies
holds to them just as religiously are you are. And if
anyone fails at any point in the chain you are
doomed.".
Cool.
Casey Barker on 2017-12-19
at 11:12:16 said: Using Go has been a revelation, so I
mostly agree with Eric here. My only objection is to
equating C++03/Boost with "modern" C++. I used both
heavily, and given a green field, I would consider C++14
for some of these thorny designs that I'd never have used
C++03/Boost for. It's a qualitatively different experience.
Just browse a copy of Scott Meyers's _Effective Modern C++_
for a few minutes, and I think you'll at least understand
why C++14 users object to the comparison. Modern C++
enables better designs.
Alas, C++ is a multi-layered tool chest. If you stick to
the top two shelves, you can build large-scale, complex
designs with pretty good safety and nigh unmatched
performance. Everything below the third shelf has rusty
tools with exposed wires and no blade guards, and on
large-scale projects, it's impossible to keep J. Random
Programmer from reaching for those tools.
So no, if they keep adding features, C++ 2047 won't
materially improve this situation. But there is a
contingent (including Meyers) pushing for the *removal* of
features. I think that's the only way C++ will stay
relevant in the long-term.
http://scottmeyers.blogspot.com/2015/11/breaking-all-eggs-in-c.html
Reply ↓
Zygo on 2017-12-19
at 11:52:17 said: My personal experience is that C++11
code (in particular, code that uses closures, deleted
methods, auto (a feature you yourself recommended for C
with different syntax), and the automatic memory and
resource management classes) has fewer defects per
developer-year than the equivalent C++03-and-earlier code.
This is especially so if you turn on compiler flags that
disable the legacy features (e.g. -Werror=old-style-cast),
and treat any legacy C or C++03 code like foreign language
code that needs to be buried under a FFI to make it safe to
use.
Qualitatively, the defects that do occur are easier to
debug in C++11 vs C++03. There are fewer opportunities for
the compiler to interpolate in surprising ways because the
automatic rules are tighter, the library has better utility
classes that make overloads and premature optimization less
necessary, the core language has features that make
templates less necessary, and it's now possible to
explicitly select or rule out invalid candidates for
automatic code generation.
I can design in Lisp, but write C++11 without much
effort of mental translation. Contrast with C++03, where
people usually just write all the Lispy bits in some
completely separate language (or create shambling horrors
like Boost to try to band-aid over the missing limbs:
boost::lambda, anyone?
Oh, look, since C++11 they've doubled down on something
called boost::phoenix).
Does C++11 solve all the problems? Absolutely not, that
would break compatibility. But C++11 is noticeably better
than its predecessors. I would say the defect rates are now
comparable to Perl with a bunch of custom C modules (i.e.
exact defect rate depends on how much you wrote in each
language).
Reply ↓
NHO on 2017-12-19
at 11:55:11 said: C++ happily turned into a complexity
meta-tarpit with "everything that could be implemented in the STL
with templates should be, instead of in the core language". And instead of
deprecating or removing features, it leaves them there.
Reply ↓
Michael on 2017-12-19 at
08:59:41 said: For the curious, can you point to a C++
tutorial/intro that shows how to do it the right way?
Reply ↓
Michael on 2017-12-19
at 12:09:45 said: Thank you. Not sure this is what
I was looking for.
Was thinking more along the lines of "Learning
Python" equivalent.
Anthony Williams on
2017-12-19 at
08:26:13 said: > That's correct. Modern C++ is a disaster
area of compounded complexity and fragile kludges piled on in a
failed attempt to fix leaky abstractions. 1998 C++ had the
leaky-abstractions problem, but at least it was drastically
simpler. Clue: complexification when you don't even fix the
problems is bad.
I agree that there is a lot of complexity in C++. That doesn't
mean you have to use all of it. Yes, it makes maintaining legacy
code harder, because the older code might use dangerous or complex
parts, but for new code we can avoid the danger, and just stick to
the simple, safe parts.
The complexity isn't all bad, though. Part of the complexity
arises by providing the ability to express more complex things in
the language. This can then be used to provide something simple to
the user.
Take std::variant as an example. This is a new facility from
C++17 that provides a type-safe discriminated variant. If you have
a variant that could hold an int or a string and you store an int
in it, then attempting to access it as a string will cause an
exception rather than a silent error. The code that *implements*
std::variant is complex. The code that uses it is simple.
Reply
↓
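A minimal C++17 sketch of the std::variant behavior described above; the int/string choice mirrors the example in the comment, and the values are illustrative.
#include <iostream>
#include <string>
#include <variant>

int main() {
    std::variant<int, std::string> v = 42;
    std::cout << std::get<int>(v) << "\n";       // currently holds an int: fine
    try {
        std::cout << std::get<std::string>(v);   // wrong alternative: throws
    } catch (const std::bad_variant_access&) {
        std::cout << "not a string\n";           // detected error, not silent corruption
    }
}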
Jeff Read on 2017-12-20 at 09:07:06
said: I won't argue with you. C++ is error-prone (albeit less so than C)
and horrid to work in. But for certain classes of algorithmically complex,
CPU- and RAM-intensive problems it is literally the only viable
choice. And it looks like performing surgery on GCC-scale repos falls into
that class of problem.
I'm not even saying it was a bad idea to initially write reposurgeon in
Python. Python and even Ruby are great languages to write prototypes or
even small-scale production versions of things because of how rapidly they
may be changed while you're hammering out the details. But scale comes
around to bite you in the ass sooner than most people think and when it
does, your choice of language hobbles you in a way that can't be
compensated for by throwing more silicon at the problem. And it's in that
niche where C++ and Rust dominate, absolutely uncontested. Reply
↓
jim on 2017-12-22 at 06:41:27
said: If you found rust hard going, you are not a C++ programmer who knows
better than this.
Anthony Williams on 2017-12-19 at
06:15:12 said: > How many times do I have to repeat "reposurgeon would never
have been
> written under that constraint" before somebody who claims LISP
> experience gets it?
That speaks to your lack of experience with modern C++, rather than an inherent
limitation. *You* might not have written reposurgeon under that constraint, because
*you* don't feel comfortable that you wouldn't have ended up with a black-hole of
AMM. That does not mean that others wouldn't or couldn't have, or that their
code would necessarily be an unmaintainable black hole.
In well-written modern C++, memory management errors are a solved problem. You
can just write code, and know that the compiler and library will take care of
cleaning up for you, just like with a GC-based system, but with the added benefit
that it's deterministic, and can handle non-memory resources such as file handles
and sockets too. Reply
↓
esr on 2017-12-19 at 07:59:30
said: >In well-written modern C++, memory management errors are a solved
problem
In well-written assembler memory management errors are a solved
problem. I hate this idiotic cant repetition about how if you're just good
enough for the language it won't hurt you – it sweeps the actual
problem under the rug while pretending to virtue. Reply
↓
Anthony Williams on
2017-12-19 at 08:08:53
said: > I hate this idiotic repetition about how if you're just good
enough for the language it won't hurt you – it sweeps the actual
problem under the rug while pretending to virtue.
It's not about being "just good enough". It's about *not* using the
dangerous parts. If you never use manual memory management, then you can't
forget to free, for example, and automatic memory management is *easy* to
use. std::string is a darn sight easier to use than the C string functions,
for example, and std::vector is a darn sight easier to use than dynamic
arrays with new. In both cases, the runtime manages the memory, and it is
*easier* to use than the dangerous version.
Every language has "dangerous" features that allow you to cause
problems. Well-written programs in a given language don't use the dangerous
features when there are equivalent ones without the problems. The same is
true with C++.
The fact that historically there are areas where C++ didn't provide a
good solution, and thus there are programs that don't use the modern
solution, and experience the consequential problems is not an inherent
problem with the language, but it does make it harder to educate people.
Reply
↓
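A minimal sketch of the std::string/std::vector comparison above: there is no buffer-size arithmetic, no strcpy, and no free to forget; everything is released when the objects go out of scope. The names and values are illustrative.
#include <string>
#include <vector>

std::string greet(const std::string& name) {
    std::string s = "hello, ";
    s += name;                    // grows as needed; the runtime manages the memory
    return s;
}

int main() {
    std::vector<std::string> greetings;
    for (const char* n : {"alice", "bob"})
        greetings.push_back(greet(n));
    // greetings and its strings are freed automatically here
}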
John D. Bell on 2017-12-19 at
10:48:09 said: > It's about *not* using the dangerous parts.
Every language has "dangerous" features that allow you to cause
problems. Well-written programs in a given language don't use the
dangerous features when there are equivalent ones without the problems.
Why not use a language that doesn't have "'dangerous'
features"?
NOTES: [1] I am not saying that Go is necessarily that language
– I am not even saying that any existing language is
necessarily that language.
[2] /me is being someplace between naive and trolling here. Reply
↓
esr on 2017-12-19 at
11:10:15 said: >Why not use a language that doesn't have
"'dangerous' features"?
Historically, it was because hardware was weak and expensive
– you couldn't afford the overhead imposed by those
languages. Now it's because the culture of software engineering has
bad habits formed in those days and reflexively flinches from using
higher-overhead safe languages, though it should not. Reply
↓
Paul R on 2017-12-19 at
12:30:42 said: Runtime efficiency still matters. That and the
ability to innovate are the reasons I think C++ is in such wide
use.
To be provocative, I think there are two types of programmer,
the ones who watch Eric Niebler on Ranges https://www.youtube.com/watch?v=mFUXNMfaciE&t=4230s
and think 'Wow, I want to find out more!' and the rest. The rest
can have Go and Rust.
D of course is the baby elephant in the room, worth much more
attention than it gets. Reply
↓
Michael on 2017-12-19 at
12:53:33 said: Runtime efficiency still matters. That
and the ability to innovate are the reasons I think C++ is in
such wide use.
Because you can't get runtime efficiency in any other
language?
Because you can't innovate in any other language?
Reply ↓
Our three main advantages, runtime efficiency,
innovation opportunity, building on a base of millions of
lines of code that run the internet and an international
standard.
Our four main advantages
More seriously, C++ enabled the STL, the STL transforms
the approach of its users, with much increased reliability
and readability, but no loss of performance. And at the
same time your old code still runs. Now that is old stuff,
and STL2 is on the way. Evolution.
Zygo on 2017-12-19
at 14:14:42 said: > Because you can't innovate in
any other language?
That claim sounded odd to me too. C++ looks like the
place that well-proven features of younger languages go to
die and become fossilized. The standardization process
would seem to require it.
Reply ↓
My thought was the language is flexible enough to
enable new stuff, and has sufficient weight behind it
to get that new stuff actually used.
Generic programming being a prime example.
Michael on 2017-12-20
at 08:19:41 said: My thought was the language is
flexible enough to enable new stuff, and has sufficient
weight behind it to get that new stuff actually
used.
Are you sure it's that, or is it more the fact that
the standards committee has forever had a me-too
kitchen-sink no-feature-left-behind obsession?
(Makes me wonder if it doesn't share some DNA with
the featuritis that has been Microsoft's calling card
for so long – they grew up together.)
Paul R on 2017-12-20
at 11:13:20 said: No, because people come to the
standards committee with ideas, and you cannot have too
many libraries. You don't pay for what you don't use.
Prime directive C++.
Michael on 2017-12-20
at 11:35:06 said: and you cannot have too many
libraries. You don't pay for what you don't use.
And this, I suspect, is the primary weakness in your
perspective.
Is the defect rate of C++ code better or worse
because of that?
Paul R on 2017-12-20
at 15:49:29 said: The rate is obviously lower
because I've written less code and library code only
survives if it is sound. Are you suggesting that
reusing code is a bad idea? Or that an indeterminate
number of reimplementations of the same functionality
is a good thing?
You're not on the most productive path to effective
criticism of C++ here.
Michael on 2017-12-20
at 17:40:45 said: The rate is obviously lower
because I've written less code
Please reconsider that statement in light of how
defect rates are measured.
Are you suggesting..
Arguing strawmen and words you put in someone's
mouth is not the most productive path to effective
defense of C++.
But thank you for the discussion.
Paul R on 2017-12-20
at 18:46:53 said: This column is too narrow to have
a decent discussion. WordPress should rewrite in C++ or
I should dig out my Latin dictionary.
Seriously, extending the reach of libraries that
become standardised is hard to criticise, extending the
reach of the core language is.
It used to be a thing that C didn't have built in
functionality for I/O (for example) rather it was
supplied by libraries written in C interfacing to a
lower level system interface. This principle seems to
have been thrown out of the window for Go and the
others. I'm not sure that's a long term win. YMMV.
But use what you like or what your cannot talk your
employer out of using, or what you can get a job using.
As long as it's not Rust.
Zygo on 2017-12-19 at
12:24:25 said: > Well-written programs in a given language don't
use the dangerous features
Some languages have dangerous features that are disabled by default
and must be explicitly enabled prior to use. C++ should become one of
those languages.
I am very fond of the 'override' keyword in C++11, which allows me
to say "I think this virtual method overrides something, and don't
compile the code if I'm wrong about that." Making that assertion
incorrectly was a huge source of C++ errors for me back in the days
when I still used C++ virtual methods instead of lambdas. C++11 solved
that problem two completely different ways: one informs me when I make
a mistake, and the other makes it impossible to be wrong.
Arguably, one should be able to annotate any C++ block and say
"there shall be no manipulation of bare pointers here" or "all array
access shall be bounds-checked here" or even " and that's the default
for the entire compilation unit." GCC can already emit warnings for
these without human help in some cases. Reply
↓
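A minimal sketch of the 'override' check described above: the annotation makes a mismatched signature a compile-time error instead of a silent non-override. The types are illustrative.
#include <iostream>

struct Base {
    virtual void frob(int) { std::cout << "Base\n"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    void frob(int) override { std::cout << "Derived\n"; }  // OK: really overrides Base::frob(int)
    // void frob(long) override {}   // would not compile: overrides nothing in Base
};

int main() {
    Derived d;
    Base& b = d;
    b.frob(1);   // prints "Derived"
}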
1. Circular references. C++ has smart pointer classes that work when your data
structures are acyclic, but it doesn't have a good solution for circular
references. I'm guessing that reposurgeon's graphs are almost never DAGs.
2. Subversion of AMM. Bare new and delete are still available, so some later
maintenance programmer could still introduce memory leaks. You could forbid the use
of bare new and delete in your project, and write a check-in hook to look for
violations of the policy, but that's one more complication to worry about and it
would be difficult to impossible to implement reliably due to macros and the
general difficulty of parsing C++.
3. Memory corruption. It's too easy to overrun the end of arrays, treat a
pointer to a single object as an array pointer, or otherwise corrupt memory.
Reply
↓
esr on 2017-12-20 at 15:51:55
said: >Is this a good summary of your objections to C++ smart pointers as a
solution to AMM?
That is at least a large subset of my objections, and probably the most
important ones. Reply
↓
jim on 2017-12-22 at 07:15:20
said: It is uncommon to find a cyclic graph that cannot be rendered acyclic
by weak pointers.
C++17 cheerfully breaks backward compatibility by removing some
dangerous idioms, refusing to compile code that should never have been
written. Reply
↓
guest on 2017-12-20 at 19:12:01
said: > Circular references. C++ has smart pointer classes that work when
your data structures are acyclic, but it doesn't have a good solution for
circular references. I'm guessing that reposurgeon's graphs are almost never
DAGs.
General graphs with possibly-cyclical references are precisely the workload
GC was created to deal with optimally, so ESR is right in a sense that
reposurgeon _requires_ a GC-capable language to work. In most other programs,
you'd still want to make sure that the extent of the resources that are under
GC-control is properly contained (which a Rust-like language would help a lot
with) but it's possible that even this is not quite worthwhile for
reposurgeon. Still, I'd want to make sure that my program is optimized in
_other_ possible ways, especially wrt. using memory bandwidth efficiently
– and Go looks like it doesn't really allow that. Reply
↓
esr on 2017-12-20 at 20:12:49
said: >Still, I'd want to make sure that my program is optimized in
_other_ possible ways, especially wrt. using memory bandwidth efficiently
– and Go looks like it doesn't really allow that.
Er, there's any language that does allow it? Reply
↓
Jeff Read on 2017-12-27 at
20:58:43 said: Yes -- ahem -- C++. That's why it's pretty much the
only language taken seriously by game developers. Reply
↓
Zygo on 2017-12-21 at 12:56:20
said: > I'm guessing that reposurgeon's graphs are almost never DAGs
Why would reposurgeon's graphs not be DAGs? Some exotic case that comes up
with e.g. CVS imports that never arises in a SVN->Git conversion (admittedly
the only case I've really looked deeply at)?
Git repos, at least, are cannot-be-cyclic-without-astronomical-effort graphs
(assuming no significant advances in SHA1 cracking and no grafts–and even
then, all you have to do is detect the cycle and error out). I don't know how a
generic revision history data structure could contain a cycle anywhere even if
I wanted to force one in somehow. Reply
↓
The repo graph is, but a lot of the structures have reference loops for
fast lookup. For example, a blob instance has a pointer back to the
containing repo, as well as being part of the repo through a pointer chain
that goes from the repo object to a list of commits to a blob.
Without those loops, navigation in the repo structure would get very
expensive. Reply
↓
guest on 2017-12-21 at
15:22:32 said: Aren't these inherently "weak" pointers though? In
that they don't imply ownership/live data, whereas the "true" DAG
references do? In that case, and assuming you can be sufficiently sure
that only DAGs will happen, refcounting (ideally using something like
Rust) would very likely be the most efficient choice. No need for a
fully-general GC here. Reply
↓
esr on 2017-12-21 at
15:34:40 said: >Aren't these inherently "weak" pointers
though? In that they don't imply ownership/live data
I think they do. Unless you're using "ownership" in some sense I
don't understand. Reply
↓
jim on 2017-12-22 at
07:31:39 said: A weak pointer does not own the object it
points to. A shared pointer does.
When there are are zero shared pointers pointing to an
object, it gets freed, regardless of how many weak pointers are
pointing to it.
Shared pointers and unique pointers own, weak pointers do
not own.
Reply ↓
jim on 2017-12-22 at
07:23:35 said: In C++11, one would implement a pointer back to the
owning object as a weak pointer. Reply
↓
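A minimal C++11 sketch of the weak back-pointer idea, with hypothetical Repo/Commit types standing in for reposurgeon's structures: ownership runs down the graph through shared_ptr, while the pointer back to the owning repo is a weak_ptr, so the loop does not keep anything alive.
#include <memory>
#include <vector>

struct Repo;

struct Commit {
    std::weak_ptr<Repo> repo;   // back-pointer for fast lookup: non-owning
};

struct Repo : std::enable_shared_from_this<Repo> {
    std::vector<std::shared_ptr<Commit>> commits;   // owning chain
    void add_commit() {
        auto c = std::make_shared<Commit>();
        c->repo = shared_from_this();   // stored as weak_ptr, so no cycle of owners
        commits.push_back(std::move(c));
    }
};

int main() {
    auto repo = std::make_shared<Repo>();
    repo->add_commit();
    // When repo goes out of scope, the Repo and its Commits are all freed;
    // the weak back-pointers do not prevent that.
}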
> How many times do I have to repeat "reposurgeon would never have been
written under that constraint" before somebody who claims LISP experience gets
it?
Maybe it is true, but since you do not understand, or particularly wish to
understand, Rust scoping, ownership, and zero cost abstractions, or C++ weak
pointers, we hear you say that you would never have written reposurgeon
under that constraint.
Which, since no one else is writing reposurgeon, is an argument, but not an
argument that those who do get weak pointers and rust scopes find all that
convincing.
I am inclined to think that those who write C++98 (which is the gcc default)
could not write reposurgeon under that constraint, but those who write C++11 could
write reposurgeon under that constraint, and except for some rather unintelligible,
complicated, and twisted class constructors invoking and enforcing the C++11
automatic memory management system, it would look very similar to your existing
python code. Reply
↓
esr on 2017-12-23 at 02:49:13
said: >since you do not understand, or particularly wish to understand, Rust
scoping, ownership, and zero cost abstractions, or C++ weak pointers
Thank you, I understand those concepts quite well. I simply prefer to apply
them in languages not made of barbed wire and landmines. Reply
↓
guest on 2017-12-23 at 07:11:48
said: I'm sure that you understand the _gist_ of all of these notions quite
accurately, and this alone is of course quite impressive for any developer
– but this is not quite the same as being comprehensively aware of
their subtler implications. For instance, both James and I have suggested
to you that backpointers implemented as an optimization of an overall DAG
structure should be considered "weak" pointers, which can work well
alongside reference counting.
For that matter, I'm sure that Rustlang developers share your aversion
to "barbed wire and landmines" in a programming language. You've criticized
Rust before (not without some justification!) for having half-baked
async-IO facilities, but I would think that reposurgeon does not depend
significantly on async-IO. Reply
↓
esr on 2017-12-23 at
08:14:25 said: >For instance, both James and I have suggested to
you that backpointers implemented as an optimization of an overall DAG
structure should be considered "weak" pointers, which can work well
alongside reference counting.
Yes, I got that between the time I wrote my first reply and JAD
brought it up. I've used Python weakrefs in similar situations. I would
have seemed less dense if I'd had more sleep at the time.
>For that matter, I'm sure that Rustlang developers share your
aversion to "barbed wire and landmines" in a programming language.
That acidulousness was mainly aimed at C++. Rust, if it implements
its theory correctly (a point on which I am willing to be optimistic)
doesn't have C++'s fatal structural flaws. It has problems of its own
which I won't rehash as I've already anatomized them in detail.
Reply
↓
Garrett on 2017-12-21 at 11:16:25 said:
There's also development cost. I suspect that using eg. Python drastically reduces the
cost for developing the code. And since most repositories are small enough that Eric
hasn't noticed accidental O(n**2) or O(n**3) algorithms until recently, it's pretty
obvious that execution time just plainly doesn't matter. Migration is going to involve
a temporary interruption to service and is going to be performed roughly once per repo.
The amount of time involved in just stopping the eg. SVN service and bringing up the
eg. GIT hosting service is likely to be longer than the conversion time for the median
conversion operation.
So in these cases, most users don't care about the run-time, and outside of a
handful of examples, wouldn't brush up against the CPU or memory limitations of a
whitebox PC.
This is in contrast to some other cases in which I've worked such as file-serving
(where latency is measured in microseconds and is actually counted), or large data
processing (where wasting resources reduces the total amount of stuff everybody can
do). Reply ↓
David Collier-Brown on
2017-12-18 at
20:20:59 said: Hmmn, I wonder if the virtual memory of Linux (and Unix, and Multics) is
really the OS equivalent of the automatic memory management of application programs? One
works in pages, admittedly, not bytes or groups of bytes, but one could argue that the
sub-page stuff is just expensive anti-internal-fragmentation plumbing
–dave
[In polite Canajan, "I wonder" is the equivalent of saying "Hey everybody, look at this" in
the US. And yes, that's also the redneck's famous last words.] Reply ↓
John Moore on 2017-12-18 at 22:20:21 said: In my
experience, with most of my C systems programming in protocol stacks and transaction
processing infrastructure, the MM problem has been one of code, not data structure
complexity. The memory is often allocated by code which first encounters the need, and it
is then passed on through layers and at some point, encounters code which determines the
memory is no longer needed. All of this creates an implicit contract that he who is handed
a pointer to something (say, a buffer) becomes responsible for disposing of it. But, there
may be many places where that is needed – most of them in exception handling.
That creates many, many opportunities for some to simply forget to release it. Also,
when the code is handed off to someone unfamiliar, they may not even know about the
contract. Crises (or bad habits) lead to failures to document this stuff (or create
variable names or clear conventions that suggest one should look for the contract).
I've also done a bunch of stuff in Java, both applications level (such as a very complex
Android app with concurrency) and some infrastructural stuff that wasn't as performance
constrained. Of course, none of this was hard real-time although it usually at least needed
to provide response within human limits, which GC sometimes caused trouble with. But, the
GC was worth it, as it substantially reduced bugs which showed up only at runtime, and it
simplified things.
On the side, I write hard real time stuff on tiny, RAM constrained embedded systems
– PIC18F series stuff (with the most horrible machine model imaginable for such a
simple little beast). In that world, there is no malloc used, and shouldn't be. It's
compile time created buffers and structures for the most part. Fortunately, the
applications don't require advanced dynamic structures (like symbol tables) where you need
memory allocation. In that world, AMM isn't an issue. Reply ↓
Michael on 2017-12-18 at 22:47:26 said:
PIC18F series stuff (with the most horrible machine model imaginable for such a simple
little beast)
LOL. Glad I'm not the only one who thought that. Most of my work was on the 16F –
after I found out what it took to do a simple table lookup, I was ready for a stiff
drink. Reply ↓
esr on
2017-12-18 at
23:45:03 said: >In my experience, with most of my C systems programming in
protocol stacks and transaction processing infrastructure, the MM problem has been one
of code, not data structure complexity.
I believe you. I think I gravitate to problems with data-structure complexity
because, well, that's just the way my brain works.
But it's also true that I have never forgotten one of the earliest lessons I learned
from Lisp. When you can turn code complexity into data structure complexity, that's
usually a win. Or to put it slightly differently, dumb code munching smart data beats
smart code munching dumb data. It's easier to debug and reason about. Reply
↓
Jeremy on 2017-12-19 at 01:36:47 said:
Perhaps its because my coding experience has mostly been short python scripts of
varying degrees of quick-and-dirtiness, but I'm having trouble grokking the
difference between smart code/dumb data vs dumb code/smart data. How does one tell
the difference?
Now, as I type this, my intuition says it's more than just the scary mess of
nested if statements being in the class definition for your data types, as opposed
to the function definitions which munch on those data types; a scary mess of nested
if statements is probably the former. On the latter, though, I'm coming up blank.
Perhaps a better question than my one above: what codebases would you recommend
for study which would be good examples of the latter (besides reposurgeon)?
Reply
↓
jsn on 2017-12-19 at 02:35:48
said: I've always expressed it as "smart data + dumb logic = win".
You almost said my favorite canned example: a big conditional block vs. a
lookup table. The LUT can replace all the conditional logic with structured
data and shorter (simpler, less bug-prone, faster, easier to read)
unconditional logic that merely does the lookup. Concretely in Python, imagine
a long list of "if this, assign that" replaced by a lookup into a dictionary.
It's still all "code", but the amount of program logic is reduced.
So I would answer your first question by saying look for places where data
structures are used. Then guesstimate how complex some logic would have to be
to replace that data. If that complexity would outstrip that of the data
itself, then you have a "smart data" situation. Reply
↓
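A minimal sketch of the lookup-table idea, written in C++ (to keep one language across these sketches) rather than the Python dictionary the comment mentions; the table contents are made up. The shape is the same: structured data plus one unconditional lookup replaces a chain of "if this, assign that".
#include <iostream>
#include <string>
#include <unordered_map>

std::string http_reason(int code) {
    // Smart data: the table carries the knowledge...
    static const std::unordered_map<int, std::string> reasons = {
        {200, "OK"}, {301, "Moved Permanently"},
        {404, "Not Found"}, {500, "Internal Server Error"},
    };
    // ...dumb logic: a single unconditional lookup.
    auto it = reasons.find(code);
    return it != reasons.end() ? it->second : "Unknown";
}

int main() {
    std::cout << http_reason(404) << "\n";   // prints "Not Found"
}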
Emanuel Rylke on 2017-12-19 at 04:07:58
said: To expand on this, it can even be worthwhile to use complex code to generate
that dumb lookup table. This is so because the code generating the lookup
table runs before, and therefore separately, from the code using the LUT.
This means that both can be considered in isolation more often; bringing the
combined complexity closer to m+n than m*n. Reply
↓
TheDividualist on 2017-12-19 at
05:39:39 said: Admittedly I have an SQL hammer and think everything is
a nail, but why wouldn't *every* program include a database, like the
SQLite that even comes bundled with Python distros, no sweat, and put that
lookup table into it rather than in a dictionary inside the code?
Of course the more you go in this direction the more problems you will
have with unit testing, in case you want to do such a thing. Generally we
SQL-hammer guys don't do that much, because in theory any function can read
any part of the database, making the whole database the potential "inputs"
for every function.
That is pretty lousy design, but I think good design patterns for
separations of concerns and unit testability are not yet really known for
database driven software, I mean, for example, model-view-controller claims
to be one, but actually fails as these can and should call each other. So
you have in the "customer" model or controller a function to check if the
customer has unpaid invoices, and decide to call it from the "sales order"
controller or model to ensure such customers get no new orders registered.
In the same "sales order" controller you also check the "product" model or
controller if it is not a discontinued product and check the "user" model
or controller if they have the proper rights for this operation and the
"state" controller if you are even offering this product in that state and
so on a gazillion other things, so if you wanted to automatically unit test
that "register a new sales order" function you have a potential "input"
space of half the database. And all that with good separation of concerns
MVC patterns. So I think no one really figured this out yet? Reply
↓
guest on 2017-12-20 at 19:21:13
said: There's a reason not to do this if you can help it – dispatching
through a non-constant LUT is way slower than running easily-predicted
conditionals. Like, an order of magnitude slower, or even worse. Reply
↓
esr on 2017-12-19 at 07:45:38
said: >Perhaps a better question than my one above: what codebases would you
recommend for study which would be good examples of the latter (besides
reposurgeon)?
I do not have an instant answer, sorry. I'll hand that question to my
backbrain and hope an answer pops up. Reply
↓
Jon Brase on 2017-12-20 at 00:54:15 said:
When you can turn code complexity into data structure complexity, that's usually
a win. Or to put it slightly differently, dumb code munching smart data beats smart
code munching dumb data. It's easier to debug and reason about.
Doesn't "dumb code munching smart data" really reduce to "dumb code implementing
a virtual machine that runs a different sort of dumb code to munch dumb data"?
Reply
↓
A domain specific language is easier to reason about within its proper
domain, because it lowers the difference between the problem and the
representation of the problem. Reply
↓
wisd0me on 2017-12-19 at 02:35:10 said: I wonder
why you talked about inventing an AMM-layer so much, but told nothing about the GC, which is
available for C language. Why do you need to invent some AMM-layer in the first place,
instead of just using the GC?
For example, Bigloo Scheme and The GNU Objective C runtime successfully used it, among many
others. Reply ↓
Jeremy Bowers on
2017-12-19 at
10:40:24 said: Rust seems like a good fit for the cases where you need the low latency
(and other speed considerations) and can't afford the automation. Firefox finally got to
benefit from that in the Quantum release, and there's more coming. I wouldn't dream of
writing a browser engine in Go, let alone a highly-concurrent one. When you're willing to
spend on that sort of quality, Rust is a good tool to get there.
But the very characteristics necessary to be good in that space will prevent it from
becoming the "default language" the way C was for so long. As much fun as it would be to
fantasize about teaching Rust as a first language, I think that's crazy talk for anything
but maybe MIT. (And I'm not claiming it's a good idea even then; just saying that's the
level of student it would take for it to even be possible .) Dunno if Go will become
that "default language" but it's taking a decent run at it; most of the other contenders I
can think of at the moment have the short-term-strength-yet-long-term-weakness of being
tied to a strong platform already. (I keep hearing about how Swift is going to be usable
off of Apple platforms real soon now, just around the corner, just a bit longer.) Reply
↓
esr on
2017-12-19 at
17:30:07 said: >Dunno if Go will become that "default language" but it's taking
a decent run at it; most of the other contenders I can think of at the moment have the
short-term-strength-yet-long-term-weakness of being tied to a strong platform already.
I really think the significance of Go being an easy step up from C cannot be
overestimated – see my previous blogging about the role of inward transition
costs.
Ken Thompson is insidiously clever. I like channels and subroutines and := but the
really consequential hack in Go's design is the way it is almost perfectly designed to
co-opt people like me – that is, experienced C programmers who have figured out
that ad-hoc AMM is a disaster area. Reply ↓
Jeff Read on 2017-12-20 at 08:58:23 said:
Go probably owes as much to Rob Pike and Phil Winterbottom for its design as it
does to Thompson -- because it's basically Alef with the feature whose lack,
according to Pike, basically killed Alef: garbage collection.
I don't know that it's "insidiously clever" to add concurrency primitives and GC
to a C-like language, as concurrency and memory management were the two obvious
banes of every C programmer's existence back in the 90s -- so if Go is "insidiously
clever", so is Java. IMHO it's just smart, savvy design which is no small thing;
languages are really hard to get right. And in the space Go thrives in, Go gets a
lot right. Reply
↓
John G on 2017-12-19 at 14:01:09 said: Eric,
have you looked into D *lately*? These days:
First, `pure` functions and transitive `const`, which make code so much
easier to reason about
Second, almost the entire language is available at compile time. That, combined
with templates, enables crazy (in a good way) stuff, like building optimized
state machine for regex at compile-time. Given, regex pattern is known at
compile time, of course. But that's pretty common.
Can't find it now, but there were benchmarks which show it's faster than any
run-time built regex engine out there. Still, source code is pretty
straightforward – one doesn't have to be Einstein to write code like that
[1].
There is a talk by Andrei Alexandrescu called "Fastware" where he shows how
various metaprogramming facilities enable useful optimizations [2].
And a more recent talk, "Design By Introspection" [3], where he shows how these
facilities enable much more compact designs and implementations.
Not sure. I've only recently begun learning D, and I don't know Go. [The D
overview]( https://dlang.org/overview.html ) may include
enough for you to surmise the differences though.
As the greenspunity rises, you are likely to find that more and more of your effort
and defect chasing is related to the AMM layer, and proportionally less goes to the
application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated
by issues like double-free errors and malloc leaks. This is commonly the case in C/C++
programs of even low greenspunity.
Interesting. This certainly fits my experience.
Has anybody looked for common patterns in whatever parasitic distractions plague you
when you start to reach the limits of a language with AMM?
result, err = whatever()
if (err) dosomethingtofixit();
abstraction.
I went through a phase earlier this year where I tried to eliminate the concept of
an errno entirely (and failed, in the end reinventing Lisp, badly), but sometimes I
still think – to the tune of the flight of the Valkyries – "Kill the errno,
kill the errno, kill the ERRno, kill the err!"
jim on 2017-12-23 at
23:37:46 said: I have on several occasions been part of big projects using
languages with AMM, many programmers, much code, and they hit scaling problems and
died, but it is not altogether easy to explain what the problem was.
But it was very clear that the fact that I could get a short program, or a quick fix
up and running with an AMM much faster than in C or C++ was failing to translate into
getting a very large program containing far too many quick fixes up and running.
AMM is not the only thing that Lisp brings on the table when it comes to dealing with
Greenspunity. Actually, the whole point of Lisp is that there is not _one_ conceptual
barrier to development, or a few, or even a lot, but that there are _arbitrarily_many_, and
that is why you need to be able to extend your language through _syntactic_abstraction_ to
build DSLs so that every abstraction layer can be written in a language that is fit for
that layer. [Actually, traditional Lisp is missing the fact that DSL tooling depends on
_restriction_ as well as _extension_; but Haskell types and Racket languages show the way
forward in this respect.]
That is why all languages without macros, even with AMM, remain "blub" to those who grok
Lisp. Even in Go, they reinvent macros, just very badly, with various preprocessors to cope
with the otherwise very low abstraction ceiling.
(Incidentally, I wouldn't say that Rust has no AMM; instead it has static AMM. It also
has some support for macros.)
jim on 2017-12-23 at 22:02:18 said: Static AMM means that the compiler analyzes your code at
compile time and generates the appropriate frees.
Static AMM means that the compiler automatically does what you manually do in C,
and semi-automatically do in C++11.
Patrick Maupin on 2017-12-24 at 13:36:35
said: To the extent that the compiler's insertion of calls to free() can be
easily deduced from the code without special syntax, the insertion is merely an
optimization of the sort of standard AMM semantics that, for example, a PyPy
compiler could do.
To the extent that the compiler's ability to insert calls to free() requires
the sort of special syntax about borrowing that means that the programmer has
explicitly described a non-stack-based scope for the variable, the memory
management isn't automatic.
Perhaps this is why a google search for "static AMM" doesn't return much.
Jeff Read on 2017-12-27 at 03:01:19
said: I think you fundamentally misunderstand how borrowing works in Rust.
In Rust, as in C++ or even C, references have value semantics. That is
to say any copies of a given reference are considered to be "the same". You
don't have to "explicitly describe a non-stack-based scope for the
variable", but the hitch is that there can be one, and only one, copy of
the original reference to a variable in use at any time. In Rust this is
called ownership, and only the owner of an object may mutate it.
Where borrowing comes in is that functions called by the owner of an
object may borrow a reference to it. Borrowed references are
read-only, and may not outlast the scope of the function that does the
borrowing. So everything is still scope-based. This provides a convenient
way to write functions in such a way that they don't have to worry about
where the values they operate on come from or unwrap any special types,
etc.
If you want the scope of a reference to outlast the function that
created it, the way to do that is to use a std::Rc , which
provides a regular, reference-counted pointer to a heap-allocated object,
the same as Python.
The borrow checker checks all of these invariants for you and will flag
an error if they are violated. Since worrying about object lifetimes is
work you have to do anyway lest you pay a steep price in performance
degradation or resource leakage, you win because the borrow checker makes
this job much easier.
Rust does have explicit object lifetimes, but where these are most
useful is to solve the problem of how to have structures, functions, and
methods that contain/return values of limited lifetime. For example
declaring a struct Foo { x: &'a i32 } means that any
instance of struct Foo is valid only as long as the borrowed
reference inside it is valid. The borrow checker will complain if you
attempt to use such a struct outside the lifetime of the internal
reference.
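For readers who think in C++ rather than Rust, the closest standard-library analogue to the
reference-counted Rc pointer described above is std::shared_ptr, paired with std::weak_ptr for
non-owning observation. The snippet below is a hypothetical C++ illustration of that scope-based,
count-based release, not a claim about Rust's internals:

#include <cassert>
#include <memory>

int main() {
    std::weak_ptr<int> observer;
    {
        auto owner = std::make_shared<int>(42);   // refcount = 1
        auto second = owner;                      // refcount = 2, like cloning a counted pointer
        observer = owner;                         // weak: does not keep the object alive
        assert(owner.use_count() == 2);
    }                                             // both owners leave scope, object is freed
    assert(observer.expired());                   // the heap object is gone
    return 0;
}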
Doctor Locketopus on 2017-12-27 at 00:16:54 said:
Good Lord (not to be confused with Audre Lorde). If I weren't already convinced
that Rust is a cult, that would do it.
However, I must confess to some amusement about Karl Marx and Michel Foucault
getting purged (presumably because Dead White Male).
Jeff Read on 2017-12-27 at 02:06:40 said:
This is just a cost of doing business. Hacker culture has, for decades, tried to
claim it was inclusive and nonjudgemental and yada yada -- "it doesn't
matter if you're a brain in a jar or a superintelligent dolphin as long as your
code is good" -- but when it comes to actually putting its money where its mouth
is, hacker culture has fallen far short. Now that's changing, and one of the side
effects of that is how we use language and communicate internally, and to the wider
community, has to change.
But none of this has to do with automatic memory management. In Rust, management
of memory is not only fully automatic, it's "have your cake and eat it too": you
have to worry about neither releasing memory at the appropriate time, nor the
severe performance costs and lack of determinism inherent in tracing GCs. You do
have to be more careful in how you access the objects you've created, but the
compiler will assist you with that. Think of the borrow checker as your friend, not
an adversary.
John on 2017-12-20 at 05:03:22 said: Present-day
C++ is far from the C++ that was first standardized in 1998. You should *never* be
manually managing memory in modern C++. You need a dynamically sized array?
Use std::vector. You need an ad-hoc graph? Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free, fails code review.
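To make that advice concrete, here is a minimal sketch (the struct name and graph shape are
invented for illustration, not taken from the thread): a dynamically sized array via std::vector,
and an ad-hoc graph using std::shared_ptr for owning edges and std::weak_ptr for back edges, with
no new/delete or malloc/free anywhere.

#include <memory>
#include <string>
#include <vector>

struct Node {
    std::string name;
    std::vector<std::shared_ptr<Node>> children;  // owning edges
    std::weak_ptr<Node> parent;                   // non-owning back edge, breaks cycles
};

int main() {
    // Dynamically sized array: no malloc/realloc, no manual free.
    std::vector<int> sizes = {1, 2, 3};
    sizes.push_back(4);

    // Ad-hoc graph: ownership expressed with shared_ptr/weak_ptr.
    auto root  = std::make_shared<Node>();
    auto child = std::make_shared<Node>();
    child->parent = root;
    root->children.push_back(child);

    // Everything is released automatically when the last owner goes away.
    return 0;
}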
Garrett on 2017-12-21 at 11:24:41 said: What
makes you refer to this as a systems programming project? It seems to me to be a standard
data-processing problem. Data in, data out. Sure, it's hella complicated and you're
brushing up against several different constraints.
In contrast to what I think of as systems programming, you have automatic memory
management. You aren't working in kernel-space. You aren't modifying the core libraries or
doing significant programmatic interface design.
I'm missing something in your semantic usage and my understanding of the solution
implementation.
Never user-facing. Often scripted. Development-support tool. Used by systems
programmers.
I realize we're in an area where the "systems" vs. "application" distinction gets a
little tricky to make. I hang out in that border zone a lot and have thought about
this. Are GPSD and ntpd "applications"? Is giflib? Sure, they're out-of-kernel, but no
end-user will ever touch them. Is GCC an application? Is apache or named?
Inside kernel is clearly systems. Outside it, I think the "systems" vs.
"application" distinction is about the skillset being applied and who your expected
users are than anything else.
I would not be upset at anyone who argued for a different distinction. I think
you'll find the definitional questions start to get awfully slippery when you poke at
them.
What makes you refer to this as a systems programming project? It seems to me to
be a standard data-processing problem. Data in, data out. Sure, it's hella
complicated and you're brushing up against several different constraints.
When you're talking about Unix, there is often considerable overlap between
"systems" and "application" programming because the architecture of Unix, with pipes,
input and output redirection, etc., allowed for essential OS components to be turned
into simple, data-in-data-out user-space tools. The functionality of ls ,
cp , rm , or cat , for instance, would have been
built into the shell of a pre-Unix OS (or many post-Unix ones). One of the great
innovations of Unix is to turn these units of functionality into standalone programs,
and then make spawning processes cheap enough to where using them interactively from
the shell is easy and natural. This makes extending the system, as accessed through the
shell, easy: just write a new, small program and add it to your PATH .
So yeah, when you're working in an environment like Unix, there's no bright-line
distinction between "systems" and "application" code, just like there's no bright-line
distinction between "user" and "developer". Unix is a tool for facilitating humans
working with computers. It cannot afford to discriminate, lest it lose its Unix-nature.
(This is why Linux on the desktop will never be a thing, not without considerable decay
in the facets of Linux that made it so great to begin with.)
At the upper end you can; the Yun has 64 MB, as do the Dragino variants. You can run
OpenWRT on them and use its Python (although the latest OpenWRT release, Chaos Calmer,
significantly increased its storage footprint from older firmware versions), which runs
fine in that memory footprint, at least for the kinds of things you're likely to do on this
type of device.
I'd be comfortable in that environment, but if we're talking AMM languages Go would
probably be a better match for it.
Peter Donis on 2017-12-21 at 23:16:33 said:
Go is not available as a standard package on OpenWRT, but it probably won't be too
much longer before it is.
Jeff Read on 2017-12-22 at 14:07:21
said: Go binaries are statically linked, so the best approach is probably to
install Go on your big PC, cross compile, and push the binary out to the
device. Cross-compiling is a doddle; simply set GOOS and GOARCH.
jim on 2017-12-22 at 06:37:36 said:
C++11 has an excellent automatic memory management layer. Its only defect is that it is
optional, for backwards compatibility with C and C++98 (though it really is not all that
compatible with C++98)
And, being optional, you are apt to take the short cut of not using it, which will bite
you.
Rust is, more or less, C++17 with the automatic memory management layer being almost
mandatory.
> you are likely to find that more and more of your effort and defect chasing is
related to the AMM layer
But the AMM layer for C++ has already been written and debugged, and standards and
idioms exist for integrating it into your classes and type definitions.
Once built into your classes, you are then free to write code as if in a fully garbage
collected language in which all types act like ints.
C++14, used correctly, is a metalanguage for writing domain specific languages.
Now sometimes building your classes in C++ is weird, nonobvious, and apt to break for
reasons that are difficult to explain, but done correctly all the weird stuff is done once
in a small number of places, not spread all over your code.
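A hedged sketch of what jim describes: the "weird stuff" (ownership and release) is written once
inside a small class, and client code then treats the type as an ordinary value, as if it were an
int. The class and function names below are invented for illustration.

#include <cstddef>
#include <string>
#include <vector>

// The "weird stuff" lives here, once: the members own their memory via RAII,
// and the compiler-generated copy/move/destructor do the right thing.
class Document {
public:
    explicit Document(std::string title) : title_(std::move(title)) {}
    void add_line(const std::string& line) { lines_.push_back(line); }
    std::size_t size() const { return lines_.size(); }
private:
    std::string title_;
    std::vector<std::string> lines_;
};

// Client code never sees new/delete; Document behaves like a value.
Document annotated(Document d) {
    d.add_line("-- end --");
    return d;              // moved out, no copies of the underlying buffers
}

int main() {
    Document d("notes");
    d.add_line("first");
    Document e = annotated(d);   // copy in, move out; both clean up themselves
    return e.size() > d.size() ? 0 : 1;
}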
Dave taht on 2017-12-22 at 22:31:40 said: Linux is
the best C library ever created. And it's often terrifying. Things like RCU are nearly
impossible for mortals to understand.
Alex Beamish on 2017-12-23 at 11:18:48 said:
Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and
not memory management, that persuaded me to move from C to Perl about twenty years ago.
Once I'd moved, I also appreciated the memory management in the shape of 'any size you
want' arrays, hashes (where had they been all my life?) and autovivification -- on the spot
creation of array or hash elements, at any depth.
While C is a low-level language that masquerades as a high-level language, the original
intent of the language was to make writing assembler easier and faster. It can still be
used for that, when necessary, leaving the more complicated matters to higher level
languages.
esr on
2017-12-23 at
14:36:26 said: >Interesting thesis .. it was the 'extra layer of goodness'
surrounding file operations, and not memory management, that persuaded me to move from
C to Perl about twenty years ago.
Pretty much all that goodness depends on AMM and could not be implemented without
it.
jim on 2017-12-23 at
22:17:39 said: Autovivification saves you much effort, thought, and coding, because
most of the time the perl interpreter correctly divines your intention, and does a pile
of stuff for you, without you needing to think about it.
And then it turns around and bites you because it does things for you that you did
not intend or expect.
The larger the program, and the longer you are keeping the program around, the more
it is a problem. If you are writing a quick one off script to solve some specific
problem, you are the only person who is going to use the script, and are then going to
throw the script away, fine. If you are writing a big program that will be used by lots
of people for a long time, autovivification is going to turn around and bite you hard,
as are lots of similar Perl features where Perl makes life easy for you by doing stuff
automagically.
With the result that there are in practice very few big perl programs used by lots
of people for a long time, while there are an immense number of very big C and C++
programs used by lots of people for a very long time.
On esr's argument, we should never be writing big programs in C any more, and yet,
we are.
I have been part of big projects with many engineers using languages with automatic
memory management. I noticed I could get something up and running in a fraction of the
time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully
completed. We found ourselves fighting weird shit done by the vast pile of run time
software that was invisibly under the hood automatically doing stuff for us. We would
be fighting mysterious and arcane installation and integration issues.
This, my personal experience, is the exact opposite of the outcome claimed by
esr.
Well, that was Perl, Microsoft Visual Basic, and PHP. Maybe Java scales better.
But Perl, Microsoft Visual Basic, and PHP did not scale.
Oh, dear Goddess, no wonder. All three of those languages are notorious
sinkholes – they're where "maintainability" goes to die a horrible and
lingering death.
Now I understand your fondness for C++ better. It's bad, but those are way worse
at any large scale. AMM isn't enough to keep you out of trouble if the rest of the
language is a tar-pit. Those three are full of the bones of drowned devops
victims.
Yes, Java scales better. CPython would too from a pure maintainability
standpoint, but it's too slow for the kind of deployment you're implying – on
the other hand, PyPy might not be, I'm finding the JIT compilation works extremely
well and I get runtimes I think are within 2x or 3x of C. Go would probably be da
bomb.
Oh, dear Goddess, no wonder. All three of those languages are notorious
sinkholes – they're where "maintainability" goes to die a horrible and
lingering death.
Can confirm -- Visual Basic (6 and VBA) is a toilet. An absolute cesspool.
It's full of little gotchas -- such as non-short-circuiting AND and OR
operators (there are no differentiated bitwise/logical operators) and the
cryptic Dir() function that exactly mimics the broken semantics of MS-DOS's
directory-walking system call -- that betray its origins as an extended version
of Microsoft's 8-bit BASIC interpreter (the same one used to write toy programs
on TRS-80s and Commodores from a bygone era), and prevent you from
writing programs in a way that feels natural and correct if you've been exposed
to nearly anything else.
VB is a language optimized to a particular workflow -- and like many
languages so optimized as long as you color within the lines provided by the
vendor you're fine, but it's a minefield when you need to step outside those
lines (which happens sooner than you may think). And that's the case with just
about every all-in-one silver-bullet "solution" I've seen -- Rails and PHP
belong in this category too.
It's no wonder the cuddly new Microsoft under Nadella is considering making
Python a first-class extension language for Excel (and perhaps other
Office apps as well).
Visual Basic .NET is something quite different -- a sort of
Microsoft-flavored Object Pascal, really. But I don't know of too many shops
actually using it; if you're targeting the .NET runtime it makes just as much
sense to just use C#.
As for Perl, it's possible to write large, readable, maintainable
code bases in object-oriented Perl. I've seen it done. BUT -- you have to be
careful. You have to establish coding standards, and if you come across the
stereotype of "typical, looks-like-line-noise Perl code" then you have to flunk
it at code review and never let it touch prod. (Do modern developers even know
what line noise is, or where it comes from?) You also have to choose your
libraries carefully, ensuring they follow a sane semantics that doesn't require
weirdness in your code. I'd much rather just do it in Python.
TheDividualist on 2017-12-27 at 11:24:59 said: VB.NET is unused in the kind of circles *you know*
because these are competitive and status-conscious circles, and anything
with BASIC in the name is so obviously low-status and just looks so bad on
the resume that it makes sense to add that 10-20% more effort and learn C#.
C# sounds a whole lot more high status, as it has C in the name so obvious
it looks like being a Real Programmer on the resume.
What you don't know is what happens outside the circles where
professional programmers compete for status and jobs.
I can report that there are many "IT guys" who are not in these circles,
they don't have the intra-programmer social life hence no status concerns,
nor do they ever intend to apply for Real Programmer jobs. They are just rural
or not-first-world guys who grew up liking computers, and took a generic
"IT guy" job at some business in a small town, and there they taught
themselves Excel VBScript when the need arose to automate some reports,
and then VB.NET when it was time to try to build some actual application
for in-house use. They like it because it looks less intimidating –
it sends out those "not only meant for Real Programmers" vibes.
I wish we lived in a world where Python would fill that non-intimidating
amateur-friendly niche, as it could do that job very well, but we are
already on a hell of a path dependence. Seriously, Bill Gates and Joel
Spolsky got it seriously right when they made Excel scriptable. The trick
is how to provide a smooth transition between non-programming and
programming.
One classic way is that you are a sysadmin, you use the shell, then you
automate tasks with shell scripts, then you graduate to Perl.
One, relatively new way is that you are a web designer, write HTML and
CSS, and then slowly you get dragged, kicking and screaming into JavaScript
and PHP.
The genius was that they realized that a spreadsheet is basically modern
paper. It is the most basic and universal tool of the office drone. I print
all my automatically generated reports into xlsx files, simply because for
me it is the "paper" of 2017, you can view it on any Android phone, and
unlike PDF and like paper you can interact and work with the figures, like
add other numbers to them.
So it was automating the spreadsheet, the VBScript Excel macro that led
the way from not-programming to programming for an immense number of office
drones, who are far more numerous than sysadmins and web designers.
Aaand I think it was precisely because of those microcomputers, like the
Commodore. Out of every 100 office drones in 1991 or so, 1 or 2 had
entertained themselves in 1987 typing in some BASIC programs published in
computer mags. So when they were told Excel is programmable with a form of
BASIC they were not too intimidated.
This created such a giant path dependency that still if you want to sell
a language to millions and millions of not-Real Programmers you have to at
least make it look somewhat like Basic.
I think from this angle it was a masterwork of creating and exploiting
path dependency. Put BASIC on microcomputers. Have a lot of hobbyists learn
it for fun. Create the most universal office tool. Let it be programmable
in a form of BASIC – you can just work on the screen, let it generate
a macro and then you just have to modify it. Mostly copy-pasting, not real
programming. But you slowly pick up some programming idioms. Then the path
curves up to VB and then VB.NET.
To challenge it all, one needs to find an application area as important
as number crunching and reporting in an office: Excel is basically
electronic paper from this angle and it is hard to come up with something
like this. All our nearly computer illiterate salespeople use it. (90% of
the use beyond just typing data in a grid is using the auto sum function.)
And they don't use much else than that and Word and Outlook and chat
apps.
Anyway suppose such a purpose can be found, then you can make it
scriptable in Python and it is also important to be able to record a macro
so that people can learn from the generated code. Then maybe that dominance
can be challenged.
Jeff Read on 2018-01-18 at
12:00:29 said: TIOBE says that while VB.NET saw an uptick in
popularity in 2011, it's on its way down now and usage was moribund
before then.
In your attempt to reframe my statements in your usual reference
frame of Academic Programmer Bourgeoisie vs. Office Drone Proletariat,
you missed my point entirely: VB.NET struggled to get a foothold during
the time when VB6 was fresh in developers' minds. It was too different
(and too C#-like) to win over VB6 devs, and didn't offer enough
value-add beyond C# to win over the people who would've just used C# or
Java.
I have been part of big projects with many engineers using languages with
automatic memory management. I noticed I could get something up and running in a
fraction of the time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully
completed. We found ourselves fighting weird shit done by the vast pile of run
time software that was invisibly under the hood automatically doing stuff for us.
We would be fighting mysterious and arcane installation and integration
issues.
Sounds just like every Ruby on Fails deployment I've ever seen. It's great when
you're slapping together Version 0.1 of a product or so I've heard. But I've never
joined a Fails team on version 0.1. The ones I saw were already well-established,
and between the PFM in Rails itself, and the amount of monkeypatching done to
system classes, it's very, very hard to reason about the code you're looking at.
From a management level, you're asking for enormous pain trying to onboard new
developers into that sort of environment, or even expand the scope of your product
with an existing team, without them tripping all over each other.
There's a reason why Twitter switched from Rails to Scala.
> Hacker culture has, for decades, tried to claim it was inclusive and
nonjudgemental and yada yada ... hacker culture has fallen far short. Now that's changing ...
has to change.
Observe that "has to change" in practice means that the social justice warriors take
charge.
Observe that in practice, when the social justice warriors take charge, old bugs don't
get fixed, new bugs appear, and projects turn into aimless garbage, if any development
occurs at all.
"has to change" is a power grab, and the people grabbing power are not competent to
code, and do not care about code.
Reflect on the attempted
suicide of "Coraline". It is not people like me who keep using the correct pronouns that
caused "her" to attempt suicide. It is the people who used "her" to grab power.
esr on
2017-12-27 at
14:30:33 said: >"has to change" is a power grab, and the people grabbing power
are not competent to code, and do not care about code.
It's never happened before, and may very well never happen again but this once I
completely agree with JAD. The "change" the SJWs actually want – as opposed to
what they claim to want – would ruin us.
cppcoreguidelines-* and modernize-* will catch most of the issues that esr complains
about, in practice usually all of them, though I suppose that as the project gets bigger,
some will slip through.
Remember that gcc and g++ are C++98 by default, because of the vast base of old-fashioned
C++ code which is subtly incompatible with C++11, C++11 onwards being the version of C++
that optionally supports memory safety, hence necessarily subtly incompatible.
To turn on C++11, place the following in your CMakeLists.txt:
cmake_minimum_required(VERSION 3.5)
# set standard required to ensure that you get
# the same version of C++ on every platform
# as some environments default to older dialects
# of C++ and some do not.
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
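For concreteness, the cppcoreguidelines-* and modernize-* groups mentioned above are clang-tidy
check families. Below is a hypothetical before/after of the kind of C++98 code they flag and the
C++11 form they steer you toward; the function names are invented for illustration.

#include <cstddef>
#include <vector>

// C++98 style: manual new[]/delete[] and index loops -- the sort of thing
// cppcoreguidelines-owning-memory and modernize-loop-convert complain about.
double average_old(const int* data, std::size_t n) {
    double* copy = new double[n];
    for (std::size_t i = 0; i < n; ++i)
        copy[i] = data[i];
    double sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += copy[i];
    delete[] copy;               // leaks on any early return or exception above
    return n ? sum / n : 0.0;
}

// C++11 style: std::vector owns the buffer, range-for, no delete anywhere.
double average_new(const std::vector<int>& data) {
    std::vector<double> copy(data.begin(), data.end());
    double sum = 0;
    for (double x : copy)
        sum += x;
    return copy.empty() ? 0.0 : sum / copy.size();
}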
Originally I made this system because I wanted to experiment with programming a microkernel OS,
with protected mode, PCI bus, USB, ACPI, etc., and I didn't want to get close to the 'event
horizon' of memory management in C.
But I didn't wait for Greenspun's law to kick in, so I first developed a safe memory
system as a runtime, and replaced the standard C runtime and memory management with
it.
I wanted zero segfaults or memory errors to be possible anywhere in the C code, because
debugging bare-metal exceptions, without a debugger, with complex data structures made in C
looks very close to the black hole.
I didn't want to use C++ because C++ compilers have a very unpredictable binary format and
function-name decoration, which makes it much harder to interface with at kernel level.
I also wanted a system as efficient as possible to manage lockless shared access
between threads to the whole memory, to avoid the 'exclusive borrow'
syndrome of Rust, with global variables shared between threads and lockless algorithms to
access them.
I took inspiration from the algorithms on this site http://www.1024cores.net/ to develop the basic system, with
strong references as the norm, and direct 'bare pointers' only as weak references for fast
access to memory in C.
What I ended up doing is basically a 'strongly typed hashmap DAG' to store the object
reference hierarchy, which can be manipulated using 'lambda expressions', such that
applications can manipulate objects only in an indirect manner, through the DAG abstraction,
without having to manipulate bare pointers at all.
This also makes a mark-and-sweep garbage collector easier to do, especially with an
'event-based' system: the main loop can call the garbage collector between two executions
of event/message handlers, which has the advantage that it can be run at a point where
there is no application data on the stack to mark, so it avoids mistaking application data
on the stack for a pointer. All references that are only in stack variables can get
automatically garbage collected when the function exits, much like in C++ actually.
The garbage collector can still be called by the allocator when there is an OOM error; it
will attempt a garbage collection before failing the allocation, but all references on the
stack should be garbage collected when the function returns to the main loop and the garbage
collector is run.
As the whole reference hierarchy is expressed explicitly in the DAG, there shouldn't be
any pointers stored in the heap outside of the module's data section, which corresponds to the C
global variables that are used as the 'root elements' of the object hierarchy, and which can be
traversed to find all the active references to heap data that the code can potentially
use. A quick system could be made so that the compiler automatically generates a list
of the 'root references' in the global variables, to avoid memory leaks if some global data
can look like a reference.
As each thread has its own heap, it also avoids the 'stop the world' syndrome: all
threads can garbage collect their own heap, and there is already a system of lockless
synchronisation to access references based on expressions in the DAG, to avoid having to
rely only on 'bare pointers' to manipulate the object hierarchy, which allows dynamic
relocation and makes it easier to track active references.
It's also very useful for tracking memory leaks: as the allocator can keep the time of each
memory allocation, it's easy to see all the allocations that happened between two points
of the program, and dump all their hierarchy and properties from just the 'bare
reference'.
Each thread contains two heaps, one which is manually managed, mostly used for temporary
strings or I/O buffers, and the other heap which can be managed either with atomic
reference counting or mark-and-sweep.
With this system, C programs rarely have to use malloc/free directly, or manipulate
pointers to allocated memory directly, other than for temporary buffer allocation, like a
dynamic stack for I/O buffers or temporary strings which can easily be managed manually. And
all the memory manipulation can be done via a runtime which keeps track internally of
pointer address and size, data type, and eventually a 'finalizer' function that will be
called when the pointer is freed.
Since I started to use this system to make C programs, alongside my own ABI which
can dynamically link binaries compiled with Visual Studio and gcc together, I have tested it on
many different use cases. I could make a mini multi-threaded window manager/UI, with async
IRQ-driven HID driver events, and a system of distributed applications based on blockchain
data, which includes a multi-threaded HTTP server that can handle parallel JSON-RPC calls, with
an abstraction of the application stack via custom data type definitions / scripts stored on
the blockchain, and I have very few memory problems, albeit it's 100% in C, multi-threaded,
and deals with heavily dynamic data.
With the mark-and-sweep mode, it can become quite easy to develop multi-threaded
applications with a good level of concurrency, even to do a simple database system, driven by a
script over async HTTP/JSON-RPC, without having to care about complex memory
management.
Even with the reference-count mode, the manipulation of references is explicit, and it
should not be too hard to detect leaks with simple parsers. I already did a test with the ANTLR C
parser, with a visitor class to walk the grammar and detect potential errors; as all
memory referencing happens through specific types instead of bare pointers, it's not too hard
to detect potential memory-leak problems with a simple parser.
Arron Grier on 2018-06-07 at 17:37:17 said: Since
you've been talking a lot about Go lately, should you not mention it on your Document: How
To Become A Hacker?
esr on
2018-06-08 at
05:48:37 said: >Since you've been talking a lot about Go lately, should you not
mention it on your Document: How To Become A Hacker?
Too soon. Go is very interesting but it's not an essential tool yet.
Yankes on 2018-12-18 at 19:20:46 said: I have
one question: do you even need global AMM? Take one element of the graph – when will/should
it be released in your reposurgeon? Overall I think the answer is never, because it usually links with
others from this graph. Do you check how many objects are created and released
during operations? I do not mean some temporary strings but the objects representing the main working
set.
Depending on the answer: if you load some graph element and it will stay
indefinitely in memory, then this could easily be converted to C/C++ by simply never using
`free` for graph elements (and all problems with memory management go out of the
window).
If they should be released early, then when should that happen? Do you have some code in
reposurgeon that purges objects when they are not needed any more? Mere reachability of
an object does not mean it is needed; many times it is quite the opposite.
I am now working on a C# application that had a similar bungle, and the previous
developers' "solution" was restarting the whole application instead of fixing lifetime
problems. The correct solution was C++-like code: I create an object, do the work, and purge it
explicitly. With this, none of the components have memory issues now. Of course the problem there lay
with a lack of knowledge of the tools they used and not with the complexity of the domain, but did you
analyse what is needed and what is not, and for how long? AMM does not solve this.
btw I am a big fan of the Lisp that is in C++11 aka templates, a great pure functional language :D
Oh hell yes. Consider, for example, the demands of loading in and operating on
multiple repositories.
Yankes on 2018-12-19 at 08:56:36 said:
If I understood this correctly, the situation looks like this:
I have a process that has loaded repos A, B and C and is actively working on each one.
Now, because of some demand, we need to load repo D.
After we are done we go back to A, B and C.
Now the question is: should D's data be purged?
If there are memory connections from the previous repos then it will stay in memory; if
not, then AMM will remove all its data from memory.
If this is a complex graph, then when you have access to any element you can crawl to
any other element of this graph (this is a simplification, but probably a safe
assumption).
The first case (there is a connection) is equivalent to not using `free` in C. Of
course, if not all of the graph is reachable then there will be a partial purge of its memory
(let's say 10% will stay), but what happens when you need to load repo D again? The currently
available data is hidden deep in other graphs and most of the data has been lost to
AMM; you need to load everything again, and now repo D's size is 110%.
In the case where there is no connection between repos A, B, C and repo D, then we can
free it entirely.
This is easily done in C++ (some kind of smart pointer that knows whether it is pointing into the same
repo or another).
Is my reasoning correct? Or am I missing something?
btw, there is a BIG difference between C and C++: I can implement things in C++ that I
will NEVER be able to implement in C. An example of this is my strongly typed simple
script language: https://github.com/Yankes/OpenXcom/blob/master/src/Engine/Script.cpp
I would need to drop functionality/protections to be able to convert this to C (or
even C++03).
Another example of this is https://github.com/fmtlib/fmt from C++ versus
`printf` from C.
Both do exactly the same thing, but the C++ one is many times better and safer.
This means that if we add your statement on impossibility and mine, then we have:
C <<< C++ <<< Go/Python
but for me personally it is more:
C <<< C++ < Go/Python
than yours:
C/C++ <<< Go/Python
Not much. The bigger issue is that it is fucking insane to try
anything like this in a language where the core abstractions are leaky. That
disqualifies C++.
Yankes on 2018-12-19 at 10:24:47
said: I only disagree with the word `insane`. C++ has a lot of problems, like UB,
lots of corner cases, leaking abstractions, the whole crap inherited from C (and my
favorite: 1000-line errors from templates), but it is not insane to work with
memory problems in it.
You can easily create tools that make all these problems bearable, and this
is the biggest flaw in C++: many problems are solvable, but not out of the box. C++
is good at creating abstractions: https://www.youtube.com/watch?v=sPhpelUfu8Q
If the abstraction fits your domain then it will not leak much, because it fits
the underlying problem.
And you can enforce a lot of things that allow you to reason locally about the
behavior of the program.
If creating this new abstraction is indeed insane, then I think
you have problems in Go too, because the only problem that AMM solves is
reachability of memory and how long you need it.
btw, the best thing that shows the difference between C++03 and C++11 is
`std::vector<std::vector<T>>`: in C++03 this is insanely stupid and in
C++11 it is insanely clever, because it has the performance characteristics of
`std::vector` (thanks to `std::move`) and no problems with memory
management (keep the index stable and use `v.at(i).at(j).x = 5;`, or wrap it in a
helper class and use `v[i][j].x`, which will throw on a wrong index).
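A small, hypothetical illustration of the std::vector<std::vector<T>> point (not taken from
Yankes' code): in C++11 the nested vector is moved rather than deep-copied when returned by value,
and .at() gives bounds-checked access.

#include <cstddef>
#include <vector>

// Build a rows x cols grid; returning by value is cheap in C++11 because the
// vector<vector<int>> is moved, not deep-copied as it would be in C++03.
std::vector<std::vector<int>> make_grid(std::size_t rows, std::size_t cols) {
    std::vector<std::vector<int>> grid(rows, std::vector<int>(cols, 0));
    return grid;
}

int main() {
    auto v = make_grid(4, 3);
    v.at(1).at(2) = 5;      // bounds-checked; throws std::out_of_range on a bad index
    return v[1][2] == 5 ? 0 : 1;
}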
Of the omitted language features, the designers explicitly argue against assertions and
pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful
language, encouraging instead the use of interfaces to
achieve dynamic dispatch [h] and
composition to reuse code.
Composition and delegation are in fact largely
automated by struct embedding; according to researchers Schmager et al., this feature
"has many of the drawbacks of inheritance: it affects the public interface of objects, it is
not fine-grained (i.e., no method-level control over embedding), methods of embedded objects
cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse
it to the extent that programmers in other languages are reputed to overuse inheritance.
[61]
The designers express an openness to generic programming and note that built-in functions
are in fact type-generic, but these are treated as special cases; Pike calls this a
weakness that may at some point be changed. [53]
The Google team built at least one compiler for an experimental Go dialect with generics, but
did not release it. [96] They are
also open to standardizing ways to apply code generation. [97]
Initially omitted, the exception -like panic / recover
mechanism was eventually added, which the Go authors advise using for unrecoverable errors such
as those that should halt an entire program or server request, or as a shortcut to propagate
errors up the stack within a package (but not across package boundaries; there, error returns
are the standard API). [98]
Blame the Policies, Not the Robots
By Jared Bernstein and Dean Baker - Washington Post
The claim that automation is responsible for massive job losses has been made in almost
every one of the Democratic debates. In the last debate, technology entrepreneur Andrew Yang
told of automation closing stores on Main Street and of self-driving trucks that would
shortly displace "3.5 million truckers or the 7 million Americans who work in truck stops,
motels, and diners" that serve them. Rep. Tulsi Gabbard (Hawaii) suggested that the
"automation revolution" was at "the heart of the fear that is well-founded."
When Sen. Elizabeth Warren (Mass.) argued that trade was a bigger culprit than automation,
the fact-checker at the Associated Press claimed she was "off" and that "economists mostly
blame those job losses on automation and robots, not trade deals."
In fact, such claims about the impact of automation are seriously at odds with the
standard data that we economists rely on in our work. And because the data so clearly
contradict the narrative, the automation view misrepresents our actual current challenges and
distracts from effective solutions.
Output-per-hour, or productivity, is one of those key data points. If a firm applies a
technology that increases its output without adding additional workers, its productivity goes
up, making it a critical diagnostic in this space.
Contrary to the claim that automation has led to massive job displacement, data from the
Bureau of Labor Statistics (BLS) show that productivity is growing at a historically slow
pace. Since 2005, it has been increasing at just over a 1 percent annual rate. That compares
with a rate of almost 3 percent annually in the decade from 1995 to 2005.
This productivity slowdown has occurred across advanced economies. If the robots are
hiding from the people compiling the productivity data at BLS, they are also managing to hide
from the statistical agencies in other countries.
Furthermore, the idea that jobs are disappearing is directly contradicted by the fact that
we have the lowest unemployment rate in 50 years. The recovery that began in June 2009 is the
longest on record. To be clear, many of those jobs are of poor quality, and there are people
and places that have been left behind, often where factories have closed. But this, as Warren
correctly claimed, was more about trade than technology.
Consider, for example, the "China shock" of the 2000s, when sharply rising imports from
countries with much lower-paid labor than ours drove up the U.S. trade deficit by 2.4
percentage points of GDP (almost $520 billion in today's economy). From 2000 to 2007 (before
the Great Recession), the country lost 3.4 million manufacturing jobs, or 20 percent of the
total.
Addressing that loss, Susan Houseman, an economist who has done exhaustive, evidence-based
analysis debunking the automation explanation, argues that "intuitively and quite simply,
there doesn't seem to have been a technology shock that could have caused a 20 to 30 percent
decline in manufacturing employment in the space of a decade." What really happened in those
years was that policymakers sat by while millions of U.S. factory workers and their
communities were exposed to global competition with no plan for transition or adjustment to
the shock, decimating parts of Ohio, Michigan and Pennsylvania. That was the fault of the
policymakers, not the robots.
Before the China shock, from 1970 to 2000, the number (not the share) of manufacturing
jobs held remarkably steady at around 17 million. Conversely, since 2010 and post-China
shock, the trade deficit has stabilized and manufacturing has been adding jobs at a modest
pace. (Most recently, the trade war has significantly dented the sector and worsened the
trade deficit.) Over these periods, productivity, automation and robotics all grew apace.
In other words, automation isn't the problem. We need to look elsewhere to craft a
progressive jobs agenda that focuses on the real needs of working people.
First and foremost, the low unemployment rate -- which wouldn't prevail if the automation
story were true -- is giving workers at the middle and the bottom a bit more of the
bargaining power they require to achieve real wage gains. The median weekly wage has risen at
an annual average rate, after adjusting for inflation, of 1.5 percent over the past four
years. For workers at the bottom end of the wage ladder (the 10th percentile), it has risen
2.8 percent annually, boosted also by minimum wage increases in many states and cities.
To be clear, these are not outsize wage gains, and they certainly are not sufficient to
reverse four decades of wage stagnation and rising inequality. But they are evidence that
current technologies are not preventing us from running hotter-for-longer labor markets with
the capacity to generate more broadly shared prosperity.
National minimum wage hikes will further boost incomes at the bottom. Stronger labor
unions will help ensure that workers get a fairer share of productivity gains. Still, many
toiling in low-wage jobs, even with recent gains, will still be hard-pressed to afford child
care, health care, college tuition and adequate housing without significant government
subsidies.
Contrary to those hawking the automation story, faster productivity growth -- by boosting
growth and pretax national income -- would make it easier to meet these challenges. The
problem isn't and never was automation. Working with better technology to produce more
efficiently, not to mention more sustainably, is something we should obviously welcome.
The thing to fear isn't productivity growth. It's false narratives and bad economic
policy.
Maintaining and adding new features to legacy systems developed using C/C++ is a daunting task. There are several facets to the problem -- understanding the existing class hierarchy
and global variables, the different user-defined types, and function call graph analysis, to name a few. This article discusses several
features of doxygen, with examples in the context of projects using C/C++ .
However, doxygen is flexible enough to be used for software projects developed using the Python, Java, PHP, and other languages,
as well. The primary motivation of this article is to help extract information from C/C++ sources, but it also briefly
describes how to document code using doxygen-defined tags.
Installing doxygen
You have two choices for acquiring doxygen. You can download it as a pre-compiled executable file, or you can check out sources from the SVN repository and build it.
Listing 1 shows the latter process.
Listing 1. Install and build doxygen sources
bash-2.05$ svn co https://doxygen.svn.sourceforge.net/svnroot/doxygen/trunk doxygen-svn
bash-2.05$ cd doxygen-svn
bash-2.05$ ./configure --prefix=/home/user1/bin
bash-2.05$ make
bash-2.05$ make install
Note that the configure script is tailored to dump the compiled sources in /home/user1/bin (add this directory to the PATH variable after the build), as not every UNIX® user has permission to write to the /usr folder. Also, you need the svn utility to check out sources.
Generating documentation using doxygen
To use doxygen to generate documentation of the sources, you perform three steps.
Generate the configuration file
At a shell prompt, type the command doxygen -g . This command generates a text-editable configuration file called Doxyfile in the current directory. You can choose to override this file name, in which case the invocation should be doxygen -g <user-specified file name> , as shown in Listing 2 .
Listing 2. Generate the default configuration file
bash-2.05b$ doxygen -g
Configuration file 'Doxyfile' created.
Now edit the configuration file and enter
doxygen Doxyfile
to generate the documentation for your project
bash-2.05b$ ls Doxyfile
Doxyfile
Edit the configuration file
The configuration file is structured as <TAGNAME> = <VALUE> , similar to the Make file format. Here are the most important tags:
<OUTPUT_DIRECTORY> : You must provide a directory name here -- for example, /home/user1/documentation -- for
the directory in which the generated documentation files will reside. If you provide a nonexistent directory name, doxygen creates
the directory subject to proper user permissions.
<INPUT> : This tag takes a space-separated list of all the directories in which the C/C++ source and header files reside whose documentation is to be generated. For example, if you list two directories here, doxygen reads in the C/C++ sources from both of them. If your project has a single source root directory with multiple sub-directories, specify that folder and make the
<RECURSIVE> tag Yes .
<FILE_PATTERNS> : By default, doxygen searches for files with typical C/C++ extensions such as
.c, .cc, .cpp, .h, and .hpp. This happens when the <FILE_PATTERNS> tag has no value associated with
it. If the sources use different naming conventions, update this tag accordingly. For example, if a project convention is to use
.c86 as a C file extension, add this to the <FILE_PATTERNS> tag.
<RECURSIVE> : Set this tag to Yes if the source hierarchy is nested and you need to generate documentation for
C/C++ files at all hierarchy levels. For example, consider the root-level source hierarchy /home/user1/project/kernel,
which has multiple sub-directories such as /home/user1/project/kernel/vmm and /home/user1/project/kernel/asm. If this tag is set
to Yes , doxygen recursively traverses the hierarchy, extracting information.
<EXTRACT_ALL> : This tag is an indicator to doxygen to extract documentation even when the individual classes
or functions are undocumented. You must set this tag to Yes .
<EXTRACT_PRIVATE> : Set this tag to Yes . Otherwise, private data members of a class would not be included in
the documentation.
<EXTRACT_STATIC> : Set this tag to Yes . Otherwise, static members of a file (both functions and variables) would
not be included in the documentation.
Listing 3. Sample doxyfile with user-provided tag values
OUTPUT_DIRECTORY = /home/user1/docs
EXTRACT_ALL = yes
EXTRACT_PRIVATE = yes
EXTRACT_STATIC = yes
INPUT = /home/user1/project/kernel
#Do not add anything here unless you need to. Doxygen already covers all
#common formats like .c/.cc/.cxx/.c++/.cpp/.inl/.h/.hpp
FILE_PATTERNS =
RECURSIVE = yes
Run doxygen
Run doxygen at the shell prompt as doxygen Doxyfile
(or with whatever file name you've chosen for the configuration file). Doxygen issues several messages before it finally produces
the documentation in Hypertext Markup Language (HTML) and Latex formats (the default). In the folder that the <OUTPUT_DIRECTORY>
tag specifies, two sub-folders named html and latex are created as part of the documentation-generation process.
Listing 4 shows a sample doxygen run log.
Listing 4. Sample log output from doxygen
Searching for include files...
Searching for example files...
Searching for images...
Searching for dot files...
Searching for files to exclude
Reading input files...
Reading and parsing tag files
Preprocessing /home/user1/project/kernel/kernel.h
Read 12489207 bytes
Parsing input...
Parsing file /project/user1/project/kernel/epico.cxx
Freeing input...
Building group list...
..
Generating docs for compound MemoryManager::ProcessSpec
Generating docs for namespace std
Generating group index...
Generating example index...
Generating file member index...
Generating namespace member index...
Generating page index...
Generating graph info page...
Generating search index...
Generating style sheet...
Documentation output formats
Doxygen can generate documentation in several output formats other than HTML. You can configure doxygen to produce documentation in the following formats:
UNIX man pages: Set the <GENERATE_MAN> tag to Yes . By default, a sub-folder named man is created within
the directory provided using <OUTPUT_DIRECTORY> , and the documentation is generated inside the folder. You must
add this folder to the MANPATH environment variable.
Rich Text Format (RTF): Set the <GENERATE_RTF> tag to Yes . Set the <RTF_OUTPUT> to wherever you
want the .rtf files to be generated -- by default, the documentation is within a sub-folder named rtf within the OUTPUT_DIRECTORY.
For browsing across documents, set the <RTF_HYPERLINKS> tag to Yes . If set, the generated .rtf files contain links
for cross-browsing.
Latex: By default, doxygen generates documentation in Latex and HTML formats. The <GENERATE_LATEX> tag is set
to Yes in the default Doxyfile. Also, the <LATEX_OUTPUT> tag is set to Latex, which implies that a folder named
latex would be generated inside OUTPUT_DIRECTORY, where the Latex files would reside.
Microsoft® Compiled HTML Help (CHM) format: Set the <GENERATE_HTMLHELP> tag to Yes . Because this format is not
supported on UNIX platforms, doxygen would only generate a file named index.hhp in the same folder in which it keeps the
HTML files. You must feed this file to the HTML help compiler for actual generation of the .chm file.
Extensible Markup Language (XML) format: Set the <GENERATE_XML> tag to Yes . (Note that the XML output is still
a work in progress for the doxygen team.)
Listing 5 provides an example of a Doxyfile
that generates documentation in all the formats discussed.
Listing 5. Doxyfile with tags for generating documentation in several formats
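The listing itself did not survive the extraction of this article; based on the tags just described, it presumably enabled the various GENERATE_* switches, roughly as in this reconstruction (not the original listing):

OUTPUT_DIRECTORY  = /home/user1/docs
INPUT             = /home/user1/project/kernel
RECURSIVE         = yes
EXTRACT_ALL       = yes
GENERATE_LATEX    = yes
GENERATE_MAN      = yes
GENERATE_RTF      = yes
RTF_HYPERLINKS    = yes
GENERATE_HTMLHELP = yes
GENERATE_XML      = yes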
Special tags in doxygen
Doxygen contains a couple of special tags.
Preprocessing C/C++ code
First, doxygen must preprocess C/C++ code to extract information.
By default, however, it does only partial preprocessing -- conditional compilation statements ( #if #endif ) are evaluated,
but macro expansions are not performed. Consider the code in
Listing 6 .
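Listing 6 itself was lost when this article was captured. Judging from the doxygen output quoted below, it was a small C/C++ fragment along these lines (a reconstruction; the original file may have differed):

#include <string>
#include <rope>   // SGI STL extension providing std::rope; the header name varies by compiler

#define USE_ROPE

#ifdef USE_ROPE
#define STRING std::rope
#else
#define STRING std::string
#endif

static STRING name;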
With USE_ROPE defined in sources, generated documentation from doxygen looks like this:
Defines
#define USE_ROPE
#define STRING std::rope
Variables
static STRING name
Here, you see that doxygen has performed a conditional compilation but has not done a macro expansion of STRING . The <ENABLE_PREPROCESSING> tag in the Doxyfile is set by
default to Yes . To allow for macro expansions, also set the <MACRO_EXPANSION> tag to Yes . Doing so produces
this output from doxygen:
Defines
#define USE_ROPE
#define STRING std::string
Variables
static std::rope name
If you set the <ENABLE_PREPROCESSING> tag to No , the
output from doxygen for the earlier sources looks like this:
Variables
static STRING name
Note that the documentation now has no definitions, and it is not possible to deduce the type of STRING . It thus makes sense always to set the <ENABLE_PREPROCESSING> tag to Yes .
As part of the documentation, it might be desirable to expand only specific macros. For such purposes, along with setting <ENABLE_PREPROCESSING> and <MACRO_EXPANSION>
to Yes , you must set the <EXPAND_ONLY_PREDEF> tag to Yes (this tag is set to No by default) and provide the
macro details as part of the <PREDEFINED> or <EXPAND_AS_DEFINED> tag. Consider the code in
Listing 7 , where only the macro
CONTAINER would be expanded.
Here's the doxygen output with only CONTAINER expanded:
Notice that only the CONTAINER macro has been expanded. Subject to <MACRO_EXPANSION> and <EXPAND_AS_DEFINED> both being Yes , the <EXPAND_AS_DEFINED> tag selectively expands only those macros listed on the right-hand side of the equality operator.
As part of preprocessing, the final tag to note is <PREDEFINED> . Much the same way you use the -D switch to pass preprocessor definitions to the G++ compiler, you use this tag to
define macros. Consider the Doxyfile in Listing
9 .
Listing 9. Doxyfile with macro expansion tags defined
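(The listing body is missing here; the fragment below is a plausible reconstruction. All tag names are real doxygen options, but the particular macros on the <PREDEFINED> and <EXPAND_AS_DEFINED> lines are assumptions for illustration only.)
# Sketch of a Doxyfile enabling selective macro expansion
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION      = YES
EXPAND_ONLY_PREDEF   = YES
PREDEFINED           = USE_ROPE MAX_PATH=256
EXPAND_AS_DEFINED    = CONTAINER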
Here's the doxygen-generated output:
When used with the <PREDEFINED> tag, macros should
be defined as <macro name>=<value> . If no value is provided -- as in the case of a simple #define -- just using <macro name>=<spaces> suffices. Separate multiple
macro definitions by spaces or a backslash ( \ ).
Excluding specific files or directories from the documentation process
In the <EXCLUDE> tag in the Doxyfile, add the names of the files and directories for which documentation should not be generated, separated by spaces. This comes in handy when the root of the source hierarchy is provided and some sub-directories
must be skipped. For example, if the root of the hierarchy is src_root and you want to skip the examples/ and test/memoryleaks folders
from the documentation process, the Doxyfile should look like
Listing 10 .
Listing 10. Using the EXCLUDE tag as part of the Doxyfile
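(The listing body is not included in this excerpt. Following the description above, it would look roughly like the sketch below; the exact paths are assumptions based on that description.)
# Sketch of an EXCLUDE configuration for the example above
INPUT   = src_root
EXCLUDE = src_root/examples src_root/test/memoryleaks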
Generating graphs and diagrams
By default, the Doxyfile has the <CLASS_DIAGRAMS>
tag set to Yes . This tag is used for generation of class hierarchy diagrams. The following tags in the Doxyfile deal with generating
diagrams:
<CLASS_DIAGRAMS> : This tag is set to Yes by default in the Doxyfile. If it is set to No , diagrams for the inheritance hierarchy are not generated.
<HAVE_DOT> : If this tag is set to Yes , doxygen uses the dot tool to generate more powerful graphs, such as
collaboration diagrams that help you understand individual class members and their data structures. Note that if this tag is set
to Yes , the effect of the <CLASS_DIAGRAMS> tag is nullified.
<CLASS_GRAPH> : If the <HAVE_DOT> tag is set to Yes along with this tag, the inheritance hierarchy
diagrams are generated using the dot tool and have a richer look and feel than what you'd get by using only
<CLASS_DIAGRAMS> .
<COLLABORATION_GRAPH> : If the <HAVE_DOT> tag is set to Yes along with this tag, doxygen generates
a collaboration diagram (apart from an inheritance diagram) that shows the individual class members (that is, containment) and
their inheritance hierarchy.
Listing 11 provides an example using
a few data structures. Note that the <HAVE_DOT> , <CLASS_GRAPH> , and <COLLABORATION_GRAPH>
tags are all set to Yes in the configuration file.
Listing 11. Interacting C++ classes and structures
struct D {
int d;
};
class A {
int a;
};
class B : public A {
int b;
};
class C : public B {
int c;
D d;
};
Figure 1. The Class inheritance graph and collaboration graph generated using the dot tool
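(For reference, a Doxyfile fragment that would produce such graphs for Listing 11 might look like the sketch below. The tags are standard doxygen options; note that the dot tool from the Graphviz package must be installed and on the PATH for <HAVE_DOT> to work.)
# Sketch: enabling dot-based class and collaboration graphs
HAVE_DOT            = YES
CLASS_DIAGRAMS      = YES
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES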
Code documentation style
So far, you've used doxygen to extract information from code that is otherwise undocumented. However, doxygen also advocates a documentation style and syntax, which helps it generate more detailed documentation. This section discusses some of the more common tags doxygen advocates using as part of C/C++ code. For further details, see the doxygen documentation.
Every code item has two kinds of descriptions: one brief and one detailed. Brief descriptions are typically single lines. Functions and class methods have a third kind of description known as the in-body description, which
is a concatenation of all comment blocks found within the function body. Some of the more common doxygen tags and styles of commenting
are:
Brief description: Use a single-line C++ comment, or use the <\brief> tag.
Detailed description: Use JavaDoc-style commenting /** text */ (note the two asterisks [ * ] at
the beginning) or the Qt-style /*! text */ .
In-body description: Individual C++ elements like classes, structures, unions, and namespaces have their own
tags, such as <\class> , <\struct> , <\union> , and <\namespace> .
To document global functions, variables, and enum types, the corresponding file must first be documented using the <\file> tag.
Listing 12 provides an example that discusses
item 4 with a function tag ( <\fn> ), a function argument tag ( <\param> ), a variable name tag (
<\var> ), a tag for #define ( <\def> ), and a tag to indicate some specific issues related to a
code snippet ( <\warning> ).
Listing 12. Typical doxygen tags and their use
/*! \file globaldecls.h
\brief Place to look for global variables, enums, functions
and macro definitions
*/
/** \var const int fileSize
\brief Default size of the file on disk
*/
const int fileSize = 1048576;
/** \def SHIFT(value, length)
\brief Left shift value by length in bits
*/
#define SHIFT(value, length) ((value) << (length))
/** \fn bool check_for_io_errors(FILE* fp)
\brief Checks if a file is corrupted or not
\param fp Pointer to an already opened file
\warning Not thread safe!
*/
bool check_for_io_errors(FILE* fp);
Here's how the generated documentation looks:
Defines
#define SHIFT(value, length) ((value) << (length))
Left shift value by length in bits.
Functions
bool check_for_io_errors (FILE *fp)
Checks if a file is corrupted or not.
Variables
const int fileSize = 1048576;
Function Documentation
bool check_for_io_errors (FILE* fp)
Checks if a file is corrupted or not.
Parameters
fp: Pointer to an already opened file
Warning
Not thread safe!
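(To round out the tag overview above, here is a small example of documenting a class with a brief description, a detailed description, and an in-body description. The Logger class and its method are hypothetical, invented for illustration; only the commenting style follows the doxygen conventions described earlier.)
/** \class Logger
 *  \brief Minimal logging facade (this line is the brief description).
 *
 *  The detailed description follows the brief one: Logger writes
 *  time-stamped messages to a destination chosen at construction time.
 */
class Logger {
public:
    void log(const char* msg);
};

void Logger::log(const char* msg)
{
    /** Comment blocks inside the function body are concatenated by
        doxygen into the in-body description of Logger::log(). */
    // ... write msg to the log destination ...
}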
Conclusion
This article discusses how doxygen can extract a lot of relevant information from legacy C/C++ code. If the code
is documented using doxygen tags, doxygen generates output in an easy-to-read format. Put to good use, doxygen is a ripe candidate
in any developer's arsenal for maintaining and managing legacy systems.
Whenever a boss acts like a dictator – shutting down, embarrassing, or firing anyone
who dares to challenge the status quo – you've got a toxic workplace problem. And that's
not just because of the boss' bad behavior, but because that behavior creates an environment in
which everyone is scared, intimidated and often willing to throw their colleagues under the
bus, just to stay on the good side of such bosses.
... ... ...
10 Signs your workplace culture is Toxic
Company core values do not serve as the basis for how the organization functions.
Employee suggestions are discarded. People are afraid to give honest feedback.
Micromanaging - little to no autonomy is given to employees in performing their jobs.
Blaming and punishment from management is the norm.
Excessive absenteeism, illness, and high employee turnover.
Overworking is a badge of honor and is expected.
Little or strained interaction between employees and management.
Computer programmer Steve Relles has the poop on what to do when your job is outsourced to
India. Relles has spent the past year making his living scooping up dog droppings as the
"Delmar Dog Butler." "My parents paid for me to get a (degree) in math and now I am a pooper
scooper," "I can clean four to five yards in a hour if they are close together." Relles, who
lost his computer programming job about three years ago ... has over 100 clients who pay $10
each for a once-a-week cleaning of their yard.
Relles competes for business with another local company called "Scoopy Do." Similar
outfits have sprung up across America, including Petbutler.net, which operates in Ohio.
Relles says his business is growing by word of mouth and that most of his clients are women
who either don't have the time or desire to pick up the droppings. "St. Bernard (dogs) are my
favorite customers since they poop in large piles which are easy to find," Relles said. "It
sure beats computer programming because it's flexible, and I get to be outside,"
Eugene Miya , A
friend/colleague. Sometimes driver. Other shared experiences.
Updated Mar 22 2017 · Author has 11.2k answers and 7.9m answer views
He mostly writes in C today.
I can assure you he at least knows about Python. Guido's office at Dropbox is 1 -- 2 blocks
by a backdoor gate from Don's house.
I would tend to doubt that he would use R (I've used S before as one of my stat packages).
Don would probably write something for himself.
Don is not big on functional languages, so I would doubt either Haskell (sorry Paul) or LISP
(but McCarthy lived just around the corner from Don; I used to drive him to meetings; actually,
I've driven all 3 of us to meetings, and he got his wife an electric version of my car based on
riding in my car (score one for friend's choices)). He does use emacs and he does write MLISP
macros, but he believes in being closer to the hardware which is why he sticks with MMIX (and
MIX) in his books.
Don't discount him learning the machine language of a given architecture.
I'm having dinner with Don and Jill and a dozen other mutual friends in 3 weeks or so (our
quarterly dinner). I can ask him then, if I remember (either a calendar entry or at job). I try
not to bother him with things like this. Don is well connected to the hacker community.
Don's name was brought up at an undergrad architecture seminar today, but Don was not in the
audience (an amazing audience; I took a photo for the collection of architects and other
computer scientists in the audience (Hennessey and Patterson were talking)). I came close to
biking by his house on my way back home.
We do have a mutual friend (actually, I introduced Don to my biology friend at Don's
request) who arrives next week, and Don is my wine drinking proxy. So there is a chance I may
see him sooner.
Don Knuth would want to use something that’s low level, because details matter
. So no Haskell; LISP is borderline. Perhaps if the Lisp machine ever had become a thing.
He’d want something with well-defined and simple semantics, so definitely no R. Python
also contains quite a few strange ad hoc rules, especially in its OO and lambda features. Yes
Python is easy to learn and it looks pretty, but Don doesn’t care about superficialities
like that. He’d want a language whose version number is converging to a mathematical
constant, which is also not in favor of R or Python.
What remains is C. Out of the five languages listed, my guess is Don would pick that one.
But actually, his own old choice of Pascal suits him even better. I don’t think any
languages have been invented since TeX was written that score higher on the Knuthometer than Knuth's own original pick.
And yes, I feel that this is actually a conclusion that bears some thinking about.
Dan Allen , I've been programming for 34 years now. Still not finished.
Answered Mar 9, 2017 · Author has 4.5k answers and 1.8m answer views
In The Art of Computer Programming I think he'd do exactly what he did. He'd invent his own
architecture and implement programs in an assembly language targeting that theoretical
machine.
He did that for a reason: he wanted to reveal the detail of algorithms at the lowest level, which is the machine level.
He didn't use any of the languages available at the time, and I don't see why they would suit his purpose any better now. All the languages above are too high-level for his purposes.
"... The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes: ..."
"... When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest." ..."
"... The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like: ..."
"... Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft. ..."
"... Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft. ..."
"... And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center condole. Not exactly true, normally those switches were there to shut off power to electrically assisted trim. Ah, it one thing to shut off MCAS it's a whole other thing to shut off power to the planes trim, especially in high speed ✓ and the plane noise up ✓, and not much altitude ✓. ..."
"... Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit above safety all else , that is glaringly obvious to everyone except Boeing. ..."
"... In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots " ..."
"... This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything. ..."
At first blush, the suit filed in Dallas by the Southwest Airlines Pilots Association (SwAPA) against Boeing may seem like a family
feud. SWAPA is seeking an estimated $115 million for lost pilots' pay as a result of the grounding of the 34 Boeing 737 Max planes
that Southwest owns and the additional 20 that Southwest had planned to add to its fleet by year end 2019. Recall that Southwest
was the largest buyer of the 737 Max, followed by American Airlines. However, the damning accusations made by the pilots' union,
meaning, erm, pilots, are likely to cause Boeing not just more public relations headaches, but will also give grist to suits by crash victims.
However, one reason that the Max is a sore point with the union was that it was a key leverage point in 2016 contract negotiations:
And Boeing's assurances that the 737 Max was for all practical purposes just a newer 737 factored into the pilots' bargaining
stance. Accordingly, one of the causes of action is tortious interference, that Boeing interfered in the contract negotiations to
the benefit of Southwest. The filing describes at length how Boeing and Southwest were highly motivated not to have the contract
dispute drag on and set back the launch of the 737 Max at Southwest, its showcase buyer. The big point that the suit makes is the
plane was unsafe and the pilots never would have agreed to fly it had they known what they know now.
We've embedded the complaint at the end of the post. It's colorful and does a fine job of recapping the sorry history of the development
of the airplane. It has damning passages like:
Boeing concealed the fact that the 737 MAX aircraft was not airworthy because, inter alia, it incorporated a single-point failure
condition -- a software/flight control logic called the Maneuvering Characteristics Augmentation System ("MCAS") -- that,if fed
erroneous data from a single angle-of-attack sensor, would command the aircraft nose-down and into an unrecoverable dive without
pilot input or knowledge.
The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air
crashes:
Had SWAPA known the truth about the 737 MAX aircraft in 2016, it never would have approved the inclusion of the 737 MAX aircraft
as a term in its CBA [collective bargaining agreement], and agreed to operate the aircraft for Southwest. Worse still, had SWAPA
known the truth about the 737 MAX aircraft, it would have demanded that Boeing rectify the aircraft's fatal flaws before agreeing
to include the aircraft in its CBA, and to provide its pilots, and all pilots, with the necessary information and training needed
to respond to the circumstances that the Lion Air Flight 610 and Ethiopian Airlines Flight 302 pilots encountered nearly three
years later.
And (boldface original):
Boeing Set SWAPA Pilots Up to Fail
As SWAPA President Jon Weaks, publicly stated, SWAPA pilots "were kept in the dark" by Boeing.
Boeing did not tell SWAPA pilots that MCAS existed and there was no description or mention of MCAS in the Boeing Flight Crew
Operations Manual.
There was therefore no way for commercial airline pilots, including SWAPA pilots, to know that MCAS would work in the background
to override pilot inputs.
There was no way for them to know that MCAS drew on only one of two angle of attack sensors on the aircraft.
And there was no way for them to know of the terrifying consequences that would follow from a malfunction.
When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against
disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more
technical data -- than [they] needed or could realistically digest."
SWAPA's pilots, like their counterparts all over the world, were set up for failure
The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane
less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored
warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to
the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like:
By March 2016, Boeing settled on a revision of the MCAS flight control logic.
However, Boeing chose to omit key safeguards that had previously been included in earlier iterations of MCAS used on the Boeing
KC-46A Pegasus, a military tanker derivative of the Boeing 767 aircraft.
The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with
limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or
cause a pilot to lose control. Those familiar with the tanker's design explained that these checks were incorporated because "[y]ou
don't want the solution to be worse than the initial problem."
The 737 MAX version of MCAS abandoned the safeguards previously relied upon. As discussed below, the 737 MAX MCAS had greater
control authority than its predecessor, activated repeatedly upon activation, and relied on input from just one of the plane's
two sensors that measure the angle of the plane's nose.
In other words, Boeing can't credibly say that it didn't know better.
Here is one of the sections describing Boeing's cover-ups:
Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions
to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS
itself.
In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so
as to further hide its existence from the public and pilots.
We urge you to read the complaint in full, since it contains juicy insider details, like the significance of Southwest being Boeing's
737 Max "launch partner" and what that entailed in practice, plus recounting dates and names of Boeing personnel who met with SWAPA
pilots and made misrepresentations about the aircraft.
Even though Southwest Airlines is negotiating a settlement with Boeing over losses resulting from the grounding of the 737 Max
and the airline has promised to compensate the pilots, the pilots' union at a minimum apparently feels the need to put the heat on
Boeing directly. After all, the union could withdraw the complaint if Southwest were to offer satisfactory compensation for the pilots'
lost income. And pilots have incentives not to raise safety concerns about the planes they fly. Don't want to spook the horses, after
all.
But Southwest pilots are not only the ones most harmed by Boeing's debacle but they are arguably less exposed to the downside
of bad press about the 737 Max. It's business fliers who are most sensitive to the risks of the 737 Max, due to seeing the story
regularly covered in the business press plus due to often being road warriors. Even though corporate customers account for only 12%
of airline customers, they represent an estimated 75% of profits.
Southwest customers don't pay up for front of the bus seats. And many of them presumably value the combination of cheap travel,
point to point routes between cities underserved by the majors, and close-in airports, which cut travel times. In other words, that
combination of features will make it hard for business travelers who use Southwest regularly to give the airline up, even if the
737 Max gives them the willies. By contrast, premium seat passengers on American or United might find it not all that costly, in
terms of convenience and ticket cost (if they are budget sensitive), to fly 737-Max-free Delta until those passengers regain confidence
in the grounded plane.
Note that American Airlines' pilot union, when asked about the Southwest claim, said that it also believes its pilots deserve
to be compensated for lost flying time, but they plan to obtain it through American Airlines.
If Boeing were smart, it would settle this suit quickly, but so far, Boeing has relied on bluster and denial. So your guess is
as good as mine as to how long the legal arm-wrestling goes on.
Update 5:30 AM EDT : One important point that I neglected to include is that the filing also recounts, in gory detail, how Boeing
went into "Blame the pilots" mode after the Lion Air crash, insisting the cause was pilot error and would therefore not happen again.
Boeing made that claim on a call to all operators, including SWAPA, and then three days later in a meeting with SWAPA.
However, Boeing's actions were inconsistent with this claim. From the filing:
Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition
likely could exist or develop on 737 MAX aircraft.
Relying on Boeing's description of the problem, the AD directed that in the event of un-commanded nose-down stabilizer trim
such as what happened during the Lion Air crash, the flight crew should comply with the Runaway Stabilizer procedure in the Operating
Procedures of the 737 MAX manual.
But the AD did not provide a complete description of MCAS or the problem in 737 MAX aircraft that led to the Lion Air crash,
and would lead to another crash and the 737 MAX's grounding just months later.
An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas
MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft
tail down again.
Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and
relied upon in earlier generations of 737 aircraft.
Even after the Lion Air crash, Boeing's description of MCAS was still insufficient to correct its lack of disclosure, as demonstrated by a second MCAS-caused crash.
We hoisted this detail because insiders were spouting in our comments section, presumably based on Boeing's patter, that the Lion Air pilots were clearly incompetent and that, had they only executed the well-known "runaway stabilizer" procedure, all would have been fine. Needless to say, this assertion has been shown to be incorrect.
Excellent, by any standard. Which does remind me of the NYT magazine story (William Langewiesche, published Sept. 18, 2019) making the claim that basically the pilots who crashed their planes weren't real "Airmen".
And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center console. Not exactly true; normally those switches were there to shut off power to electrically assisted trim. Ah, it's one thing to shut off MCAS, it's a whole other thing to shut off power to the plane's trim, especially at high speed ✓ and with the plane nose up ✓, and not much altitude ✓.
And especially if you as a pilot didn't know MCAS was there in the first place. This sort of engineering by Boeing is criminal. And the lying. To everyone. Oh, lest we all forget, the processing power of the in-flight computer is that of an Intel 286. There are times I just want to be beamed back to the home planet. Where we care for each other.
One should also point out that Langewiesche said that Boeing made disastrous mistakes with the MCAS and that the very future
of the Max is cloudy. His article was useful both for greater detail about what happened and for offering some pushback to the
idea that the pilots had nothing to do with the accidents.
As for the above, it was obvious from the first Seattle Times stories that these two events and the grounding were going to
be a lawsuit magnet. But some of us think Boeing deserves at least a little bit of a defense because their side has been totally
silent–either for legal reasons or CYA reasons on the part of their board and bad management.
Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit
above safety all else , that is glaringly obvious to everyone except Boeing.
"The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with
limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or
cause a pilot to lose control "
"Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions
to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS
itself.
In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual
so as to further hide its existence from the public and pilots "
This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial
airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the
automation crapification of everything.
"... Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? ..."
"... So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming). ..."
What the hell is DevOps? Every couple months I find myself trying to look into it as all I
ever hear and see about is DevOps being the way forward. But each time I research it I can only
find things talking about streamlining software updates and quality assurance and yada yada
yada. It seems like DevOps only applies to companies that make software as a product. How does
that affect me as a sysadmin for higher education? My "company's" product isn't software.
Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to
offer me? Again, when I try to research them a majority of what I find just links back to
software development.
To give a rough idea of what I deal with, below is a list of my three main
responsibilities.
macOS/iOS Systems Administration (I'm the only sysadmin that does this for around 150+
machines)
Network Administration (I just started with this a couple months ago and I'm slowly
learning about our infrastructure and network administration in general from our IT
director. We have several buildings spread across our entire campus with a mixture of
Juniper, Dell, and Brocade equipment.)
AV Systems Design and Programming (I'm the only person who does anything related to
video conferencing, meeting room equipment, presentation systems, digital signage, etc. for
7 buildings.)
So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn,
but it gets so overwhelming trying to find information because everything I find just assumes
you're a software developer with all this prerequisite knowledge. Additionally, how the hell do
you find the time to learn all of this? It seems like new DevOps software or platforms or
whatever you call them spin up every single month. I'm already in the middle of trying to learn
JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in
addition to networking concepts in general), and AV design stuff (like Crestron programming).
I've been working at the same job for 5 years and I feel like I'm being left in the dust by the
entire rest of the industry. I'm being pulled in so many different directions that I feel like
it's impossible for me to ever get another job. At the same time, I can't specialize in
anything because I have so many different unrelated areas I'm supposed to be doing work in.
And this is what I go through/ask myself every few months I try to research and learn
DevOps. This is mainly a rant, but I am more than open to any and all advice anyone is willing
to offer. Thanks in advance.
There are a lot of tools used on a daily basis in DevOps that can make your life much easier, but apparently that's not the case for you. When you manage infrastructure as code, you're using DevOps.
There's a lot of space for operations guys like you (and me), so look to DevOps as an alternative source of knowledge, just to stay tuned to industry trends and improve your skills.
For higher education, this is useful for managing large projects and looking for improvements during the development of the product/service itself. But again, that's not the case for you. If you intend to switch to another position, you may try to search for a certification program that suits your needs.
"... In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part! ..."
"... Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture. ..."
"... OO will insure the success of your project: An object-oriented approach to software development does not guarantee the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only careful analysis and a complete understanding of the problem will make the project succeed. A successful project will utilize sound techniques, competent programmers, sound processes and solid project management. ..."
"... OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. ..."
"... OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake. ..."
"Hooked on Objects" is dedicated to providing readers with insight into object-oriented technologies. In our first
few articles, we introduced the three tenets of object-oriented programming: encapsulation, inheritance and
polymorphism. We then covered software process and design patterns. We even got our hands dirty and dissected the
Java class.
Each of our previous articles had a common thread. We have written about the strengths and benefits of
the object paradigm and highlighted the advantages the object approach brings to the development effort. However, we
do not want to give anyone a false sense that object-oriented techniques are always the perfect answer.
Object-oriented techniques are not the magic "silver bullets" of programming.
In the programming world, the term silver bullet refers to a technology or methodology that is touted as the
ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically
make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more
impressive, it will do all of this without any effort on your part!
Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never
will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and
architecture.
If anything, using OO makes design and architecture more important because without a clear, well-planned design,
OO will fail almost every time. Spaghetti code (that which is written without a coherent structure) spells trouble
for procedural programming, and weak architecture and design can mean the death of an OO project. A poorly planned
system will fail to achieve the promises of OO: increased productivity, reusability, scalability and easier
maintenance.
Some critics claim OO has not lived up to its advance billing, while others claim its techniques are flawed. OO
isn't flawed, but some of the hype has given OO developers and managers a false sense of security.
Successful OO requires careful analysis and design. Our previous articles have stressed the positive attributes of
OO. This time we'll explore some of the common fallacies of this promising technology and some of the potential
pitfalls.
Fallacies of OO
It is important to have realistic expectations before choosing to use object-oriented technologies. Do not allow
these common fallacies to mislead you.
OO will ensure the success of your project: An object-oriented approach to software development does not guarantee
the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only
careful analysis and a complete understanding of the problem will make the project succeed. A successful project will
utilize sound techniques, competent programmers, sound processes and solid project management.
OO makes you a better programmer: OO does not make a programmer better. Only experience can do that. A coder might
know all of the OO lingo and syntactical tricks, but if he or she doesn't know when and where to employ these
features, the resulting code will be error-prone and difficult for others to maintain and reuse.
OO-derived software is superior to other forms of software: OO techniques do not make good software; features make
good software. You can use every OO trick in the book, but if the application lacks the features and functionality
users need, no one will use it.
OO techniques mean you don't need to worry about business plans: Before jumping onto the object bandwagon, be
certain to conduct a full business analysis. Don't go in without careful consideration or on the faith of marketing
hype. It is critical to understand the costs as well as the benefits of object-oriented development. If you plan for
only one or two internal development projects, you will see few of the benefits of reuse. You might be able to use
preexisting object-oriented technologies, but rolling your own will not be cost effective.
OO will cure your corporate ills: OO will not solve morale and other corporate problems. If your company suffers
from motivational or morale problems, fix those with other solutions. An OO Band-Aid will only worsen an already
unfortunate situation.
OO Pitfalls
Life is full of compromise and nothing comes without cost. OO is no exception. Before choosing to employ object
technologies it is imperative to understand this. When used properly, OO has many benefits; when used improperly,
however, the results can be disastrous.
OO technologies take time to learn: Don't expect to become an OO expert overnight. Good OO takes time and effort
to learn. Like all technologies, change is the only constant. If you do not continue to enhance and strengthen your
skills, you will fall behind.
OO benefits might not pay off in the short term: Because of the long learning curve and initial extra development
costs, the benefits of increased productivity and reuse might take time to materialize. Don't forget this or you
might be disappointed in your initial OO results.
OO technologies might not fit your corporate culture: The successful application of OO requires that your
development team feels involved. If developers are frequently shifted, they will struggle to deliver reusable
objects. There's less incentive to deliver truly robust, reusable code if you are not required to live with your work
or if you'll never reap the benefits of it.
OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger
and slower than programs written using other techniques. This isn't as much of a problem today. Memory prices are
dropping every day. CPUs continue to provide better performance and compilers and virtual machines continue to
improve. The small efficiency that you trade for increased productivity and reuse should be well worth it. However,
if you're developing an application that tracks millions of data points in real time, OO might not be the answer for
you.
OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every
situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to
design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO
technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in
protecting you from making a mistake.
What do you need to do to avoid these pitfalls and fallacies? The answer is to keep expectations realistic. Beware
of the hype. Use an OO approach only when appropriate.
Programmers should not feel compelled to use every OO trick that the implementation language offers. It is wise to
use only the ones that make sense. When used without forethought, object-oriented techniques could cause more harm
than good.
Of course, there is one other thing that you should always do to improve your OO: Don't miss a single installment of
"Hooked on Objects."
David Hoag is vice president-development and chief object guru for ObjectWave, a Chicago-based
object-oriented software engineering firm. Anthony Sintes is a Sun Certified Java Developer and team member
specializing in telecommunications consulting for ObjectWave. Contact them at [email protected] or visit their Web
site at www.objectwave.com.
This isn't a general discussion of OO pitfalls and conceptual weaknesses, but a discussion of how conventional 'textbook' OO
design approaches can lead to inefficient use of cache & RAM, especially on consoles or other hardware-constrained environments.
But it's still good.
Props to the
artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up
with everything.
He missed "sea
change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended
distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.
For those too
young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were
hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in
corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.
"The company has
undergone organization optimization due to our strategy modification, which includes empowering the support to the
operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.
In his big long
career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most
ambitious songs in pop music history.
This should be
played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius
as usual, Mr. Yankovic!
There's a quote that goes something like: A politician is someone who speaks for hours while saying nothing at all. And this is exactly it, and it's brilliant.
From the current
Gamestop earnings call "address the challenges that have impacted our results, and execute both deliberately and with
urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be
driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment
to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to
capital allocation."" yeah Weird Al totally nailed it
Excuse me, but
"proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you
of anything like that. [pause] I'm fired, aren't I? ~ George Meyer
I watch this at
least once a day to take the edge of my job search whenever I have to decipher fifteen daily want-ads claiming to
seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment
Associates" eventually to discover they want someone to run a cash register and sweep up.
The irony is a song about Corporate Speak in the style of tie-dyed, hippie-dippy CSN (+/- Y) four-part harmony. Suite: Judy Blue Eyes via Almost Cut My Hair filtered through Carry On. "Fantastic" middle finger to Wall Street, The City, and the monstrous excesses of Unbridled Capitalism.