
Software Development Methodologies


 
  Methodologies are just a stick with which to beat developers. 

A software life cycle model depicts the significant phases or activities of a software project from conception until the product is retired. It specifies the relationships between project phases, including transition criteria, feedback mechanisms, milestones, baselines, reviews, and deliverables. Typically, a life cycle model addresses the following phases of a software project: requirements, design, implementation, integration, testing, and operations and maintenance. Much of the motivation behind using a life cycle model is to provide structure and so avoid the problems of the "undisciplined hacker" or the corporate IT bureaucrat (who is probably ten times more dangerous than an undisciplined hacker). As always, it is a matter of picking the right tool for the job, rather than picking up your hammer and treating everything as a nail.

 

Software life cycle models describe the interrelationships between software development phases. The common life cycle models are:

The "waterfall model"  was probably the first published model and as a specific model for military it was not as naive as some proponents of other models suggest. The model was developed to help cope with the increasing complexity of aerospace products. The waterfall model follows a documentation driven paradigm and reflects the existence of huge military bureaucracy and respective diffusion of responsibilities. Still if the problem is well known and well studied it has its place despite obvious deficiencies.  "Technological surprises" and, especially, significant  changes in specification is a killer for this model and lead to tremendous waist of resources and cost overruns.

The prototyping model was probably the first realistic of the early models, because many aspects of the system remain unclear until a working prototype is developed. It was advocated by Brooks as early as the 1960s.

A better model, the "spiral model", was suggested by Boehm in 1985. The spiral model is a variant of the "dialectical spiral" and as such provides useful insights into the life cycle of a system. But it also presupposes almost unlimited resources for the project. No organization can perform more than a couple of iterations during the initial development of a system; the first iteration is usually called the prototype.

Prototype-based development requires more talented managers and good planning, while the waterfall model works (or does not work) just as well with bad or stupid managers, because success in this model is determined more by the nature of the task at hand than by any organizational circumstances. As always, humans are flexible, and a programmer in a waterfall project can use guerrilla methods to enforce a sound architecture, since the manager is actually a hostage of the model and cannot afford to look back and re-implement anything substantial.

Because the life cycle steps are described in very general terms, the models are adaptable and their implementation details will vary among different organizations. The spiral model is the most general; most life cycle models can in fact be derived as special instances of it. Organizations may mix and match different life cycle models to develop a model more tailored to their products and capabilities. There is nothing wrong with using the waterfall model for components of a complex project that are relatively well understood and straightforward. But mixing and matching definitely requires a certain level of software management talent.


Old News ;-)


[May 25, 2012] OpEd Programming is not Team Sports

May 25, 2012 | Perlmonks

BrowserUk:

I think that too many developers derisively dismiss The Waterfall Model as being somehow oldy-moldy and out of fashion.

The Waterfall Model doesn't work. I'm not citing (just) my own opinion here, but rather that of the man, Dr. Winston W. Royce, who first described the waterfall model.

Yep! The guy that 'invented' the Waterfall Model said it didn't work. Indeed, when he first described it in his 1970 paper, "Managing The Development Of Large Software Systems", he did so explicitly to show why it didn't work, and what needed to be done to correct the method's inherent, designed-in causes of failure.

See this for a potted history of how the mis-citing of the Royce paper led to it being accidentally adopted by the US military in the early 70's, and thenceforth by many other organisations who blindly copied them, before being universally abandoned by all of them in the mid-to-late 80's because it failed so badly, so often.

"Those who fail to learn from history are doomed to repeat it." -- Sir Winston Churchill
zentara:
The common thread in each of these, it seems to me, is that "we are being paid to Write Code, therefore let us Write Code constantly." Let's go from Something We Can Release to Something Else We Can Release every few hours.

I think you are witnessing the current Tech Bubble, wherein all the overpaid, over-hyped, under-worked programmers are desperately trying to justify their jobs.

It also has some connection to Wall Street, where traders are desperate to find an industry wherein they can blow big bubbles.

Did you get zucked by Facebook's IPO? :-)



CMMI: from Conventional to Modern Software Management - article originally published in The Rational Edge, February 2002

Top Ten Principles of Conventional (Waterfall) Software Management

  1. Freeze requirements before design. This is the essence of a requirements-first process: The project team strives to provide a precise requirements definition and then implement exactly those requirements. Changing requirements can cause significant breakage in the code and test phases; consequently, requirements must be completely and unambiguously specified before the team makes major investments in other design and development activities.

  2. Avoid coding prior to detailed design review. Again, because design changes can also cause significant breakage in the code and test phases, the team needs to ensure that the whole design is mature and complete before beginning the coding phase, when there will be much more resistance to change.

  3. Use a higher-order programming language. Higher-order programming languages avoid a substantial set of error sources (through advanced data typing, interface separation, and packaging and programming constructs) and permit the software solution to be "programmed" in fewer lines of human-generated code.

  4. Complete unit testing before integration. Whereas the design flows "top down," the test process flows "bottom-up": The smallest units are completely tested prior to delivery for integration testing. This sequencing constraint is an attempt to capture more bugs at the unit level, prior to integration, when they can cause substantially more scrap and rework.

  5. Maintain detailed traceability among all artifacts. To ensure that program completeness and consistency can be maintained at each stage, the requirements artifacts need to be traced to design artifacts and test artifacts. When changes are proposed or identified downstream, this provides a full view of the change's actual or potential impact for assessment.

  6. Document and maintain the design. Design without documentation is not design. In early phases, the documentation is the design. In later phases, as code becomes the primary engineering artifact, design artifacts must be updated to ensure consistency and provide a basis for decision making about changes.

  7. Assess quality with an independent team. To maintain a separate reporting chain from the analysts, designers, and testers, the project should assign to an independent team responsibility for ensuring overall adherence to quality standards -- for both the product and the process.

  8. Inspect everything. Inspecting the detailed design and code is a much better way to find errors than testing. Ensure that inspections cover all requirements, design, code, and test artifacts.

  9. Plan everything early with high fidelity. A complete, precise plan down to the "inch-pebble" level that lays out detailed activities and artifacts over the entire schedule is necessary to identify critical paths, manage risks, and evaluate programmatic changes.

  10. Control source code baselines rigorously. Once artifacts get into the coded stage, rigorous configuration management is necessary to maintain baseline control of formal releases in the test process, and to transition the product to a zero-defect state suitable for release.

Comparison of Software Development Methodologies - January 1995

This article introduces and compares software development methodologies. This information will help you recognize which methodologies may be best suited for use in various situations. The target audience is all personnel involved in software development for the DoD. No attempt has been made to cover methodologies that are most applicable to smaller projects that can be accomplished by five or fewer software engineers. You may learn about several of these in [10]. Since the intended readers should consider government software standards in their use of methodologies, background on these standards is included.

What's the 'spiral model'?

Date: 10 Oct 1998

(1) Barry Boehm, "A Spiral Model of Software Development and Enhancement", ACM SIGSOFT Software Engineering Notes, August 1986.
(2) Barry Boehm, "A Spiral Model of Software Development and Enhancement", IEEE Computer, Vol. 21, No. 5, May 1988, pp. 61-72.

Basically, the idea is evolutionary development, using the waterfall model for each step; it's intended to help manage risks. Don't define in detail the entire system at first. The developers should only define the highest priority features. Define and implement those, then get feedback from users/customers (such feedback distinguishes "evolutionary" from "incremental" development). With this knowledge, they should then go back to define and implement more features in smaller chunks.
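As a rough illustration of that loop, here is a minimal sketch in Python; the names (Feature, get_feedback, the priority numbers) are invented for illustration and are not part of Boehm's model:

    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        priority: int          # higher = more important / riskier
        done: bool = False

    def develop(backlog, iterations, batch_size, get_feedback):
        """Run a fixed number of evolutionary cycles over a feature backlog."""
        for _ in range(iterations):
            # 1. Define only the highest-priority features for this cycle.
            todo = [f for f in backlog if not f.done]
            batch = sorted(todo, key=lambda f: f.priority, reverse=True)[:batch_size]
            if not batch:
                break
            # 2. Define and implement them (each cycle may itself be a mini-waterfall).
            for f in batch:
                f.done = True
            # 3. Get feedback from users/customers; it may add or re-rank features.
            backlog.extend(get_feedback(batch))
        return backlog

    # Example: feedback on the finished login screen requests a "password reset" feature.
    backlog = [Feature("login", 9), Feature("reports", 5), Feature("help pages", 2)]
    feedback = lambda done: ([Feature("password reset", 7)]
                             if any(f.name == "login" for f in done) else [])
    develop(backlog, iterations=3, batch_size=1, get_feedback=feedback)
    print([f.name for f in backlog if f.done])   # ['login', 'password reset', 'reports']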


Recommended Papers

What is Extreme Programming

All the contributors to an XP project sit together, members of one team. This team must include a business representative -- the "Customer" -- who provides the requirements, sets the priorities, and steers the project. It's best if the Customer or one of her aides is a real end user who knows the domain and what is needed. The team will of course have programmers. The team may include testers, who help the Customer define the customer acceptance tests. Analysts may serve as helpers to the Customer, helping to define the requirements. There is commonly a coach, who helps the team keep on track, and facilitates the process. There may be a manager, providing resources, handling external communication, coordinating activities. None of these roles is necessarily the exclusive property of just one individual: Everyone on an XP team contributes in any way that they can. The best teams have no specialists, only general contributors with special skills.

Planning Game

XP planning addresses two key questions in software development: predicting what will be accomplished by the due date, and determining what to do next. The emphasis is on steering the project -- which is quite straightforward -- rather than on exact prediction of what will be needed and how long it will take -- which is quite difficult. There are two key planning steps in XP, addressing these two questions:

Release Planning is a practice where the Customer presents the desired features to the programmers, and the programmers estimate their difficulty. With the cost estimates in hand, and with knowledge of the importance of the features, the Customer lays out a plan for the project. Initial release plans are necessarily imprecise: neither the priorities nor the estimates are truly solid, and until the team begins to work, we won't know just how fast they will go. Even the first release plan is accurate enough for decision making, however, and XP teams revise the release plan regularly.

Iteration Planning is the practice whereby the team is given direction every couple of weeks. XP teams build software in two-week "iterations", delivering running useful software at the end of each iteration. During Iteration Planning, the Customer presents the features desired for the next two weeks. The programmers break them down into tasks, and estimate their cost (at a finer level of detail than in Release Planning). Based on the amount of work accomplished in the previous iteration, the team signs up for what will be undertaken in the current iteration.

These planning steps are very simple, yet they provide very good information and excellent steering control in the hands of the Customer. Every couple of weeks, the amount of progress is entirely visible. There is no "ninety percent done" in XP: a feature story was completed, or it was not. This focus on visibility results in a nice little paradox: on the one hand, with so much visibility, the Customer is in a position to cancel the project if progress is not sufficient. On the other hand, progress is so visible, and the ability to decide what will be done next is so complete, that XP projects tend to deliver more of what is needed, with less pressure and stress.
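As a rough sketch of the "sign up for what fits" step, assuming velocity is simply the number of estimation points finished in the previous iteration (the story names and numbers are invented for illustration, not taken from the XP literature):

    def plan_iteration(stories, velocity):
        """stories: (name, estimate) pairs in the Customer's priority order."""
        plan, budget = [], velocity
        for name, estimate in stories:
            if estimate <= budget:          # take the story only if it still fits
                plan.append(name)
                budget -= estimate
        return plan

    # The team finished 8 points last iteration, so it signs up for about 8 now.
    stories = [("export to CSV", 3), ("audit log", 5), ("SSO login", 8), ("tooltips", 1)]
    print(plan_iteration(stories, velocity=8))   # ['export to CSV', 'audit log']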

Customer Tests

As part of presenting each desired feature, the XP Customer defines one or more automated acceptance tests to show that the feature is working. The team builds these tests and uses them to prove to themselves, and to the customer, that the feature is implemented correctly. Automation is important because in the press of time, manual tests are skipped. That's like turning off your lights when the night gets darkest.

The best XP teams treat their customer tests the same way they do programmer tests: once the test runs, the team keeps it running correctly thereafter. This means that the system only improves, always notching forward, never backsliding.
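For concreteness, an automated customer test might look like the small table-driven check below; the volume-discount rule and the discount_for function are invented for illustration, not taken from the XP literature:

    import unittest

    # In a real project discount_for lives in the application code under test;
    # it is defined inline here only to keep the sketch self-contained.
    def discount_for(order_total):
        if order_total >= 1000:
            return 10
        if order_total >= 100:
            return 5
        return 0

    class VolumeDiscountAcceptanceTest(unittest.TestCase):
        # Table of expected outcomes supplied by the Customer (illustrative numbers).
        CASES = [(50, 0), (100, 5), (1000, 10)]

        def test_discount_table(self):
            for total, expected in self.CASES:
                self.assertEqual(discount_for(total), expected)

    if __name__ == "__main__":
        unittest.main()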

Small Releases

XP teams practice small releases in two important ways:

First, the team releases running, tested software, delivering business value chosen by the Customer, every iteration. The Customer can use this software for any purpose, whether evaluation or even release to end users (highly recommended). The most important aspect is that the software is visible, and given to the customer, at the end of every iteration. This keeps everything open and tangible.

Second, XP teams release to their end users frequently as well. XP Web projects release as often as daily, in-house projects monthly or more frequently. Even shrink-wrapped products are shipped as often as quarterly.

It may seem impossible to create good versions this often, but XP teams all over are doing it all the time. See Continuous Integration for more on this, and note that these frequent releases are kept reliable by XP's obsession with testing, as described here in Customer Tests and Test-Driven Development.

Simple Design

XP teams build software to a simple design. They start simple, and through programmer testing and design improvement, they keep it that way. An XP team keeps the design exactly suited for the current functionality of the system. There is no wasted motion, and the software is always ready for what's next.

Design in XP is not a one-time thing, or an up-front thing, it is an all-the-time thing. There are design steps in release planning and iteration planning, plus teams engage in quick design sessions and design revisions through refactoring, through the course of the entire project. In an incremental, iterative process like Extreme Programming, good design is essential. That's why there is so much focus on design throughout the course of the entire development.

Pair Programming

All production software in XP is built by two programmers, sitting side by side, at the same machine. This practice ensures that all production code is reviewed by at least one other programmer, and results in better design, better testing, and better code.

It may seem inefficient to have two programmers doing "one programmer's job", but the reverse is true. Research into pair programming shows that pairing produces better code in about the same time as programmers working singly. That's right: two heads really are better than one!

Some programmers object to pair programming without ever trying it. It does take some practice to do well, and you need to do it well for a few weeks to see the results. Ninety percent of programmers who learn pair programming prefer it, so we highly recommend it to all teams.

Pairing, in addition to providing better code and tests, also serves to communicate knowledge throughout the team. As pairs switch, everyone gets the benefits of everyone's specialized knowledge. Programmers learn, their skills improve, they become more valuable to the team and to the company. Pairing, even on its own outside of XP, is a big win for everyone.

Test-Driven Development

Extreme Programming is obsessed with feedback, and in software development, good feedback requires good testing. Top XP teams practice "test-driven development", working in very short cycles of adding a test, then making it work. Almost effortlessly, teams produce code with nearly 100 percent test coverage, which is a great step forward in most shops. (If your programmers are already doing even more sophisticated testing, more power to you. Keep it up, it can only help!)

It isn't enough to write tests: you have to run them. Here, too, Extreme Programming is extreme. These "programmer tests", or "unit tests" are all collected together, and every time any programmer releases any code to the repository (and pairs typically release twice a day or more), every single one of the programmer tests must run correctly. One hundred percent, all the time! This means that programmers get immediate feedback on how they're doing. Additionally, these tests provide invaluable support as the software design is improved.
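A minimal sketch of one such test-then-code micro-cycle, using an invented slugify helper as the unit under test:

    import unittest

    # Step 1: write the test first; it fails until the code below exists and works.
    class SlugifyTest(unittest.TestCase):
        def test_spaces_become_dashes_and_case_is_lowered(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_surrounding_whitespace_is_ignored(self):
            self.assertEqual(slugify("  XP  "), "xp")

    # Step 2: write just enough code to make the tests pass, then refactor.
    def slugify(title):
        return "-".join(title.lower().split())

    if __name__ == "__main__":
        unittest.main()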

Design Improvement

Extreme Programming focuses on delivering business value in every iteration. To accomplish this over the course of the whole project, the software must be well-designed. The alternative would be to slow down and ultimately get stuck. So XP uses a process of continuous design improvement called Refactoring, from the title of Martin Fowler's book, "Refactoring: Improving the Design of Existing Code".

The refactoring process focuses on removal of duplication (a sure sign of poor design), and on increasing the "cohesion" of the code, while lowering the "coupling". High cohesion and low coupling have been recognized as the hallmarks of well-designed code for at least thirty years. The result is that XP teams start with a good, simple design, and always have a good, simple design for the software. This lets them sustain their development speed, and in fact generally increase speed as the project goes forward.

Refactoring is, of course, strongly supported by comprehensive testing to be sure that as the design evolves, nothing is broken. Thus the customer tests and programmer tests are a critical enabling factor. The XP practices support each other: they are stronger together than separately.
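A tiny before/after sketch of the duplication removal meant here (the report functions are invented; the point is that the shared formatting moves into one cohesive helper, while the tests written earlier guarantee the behaviour does not change):

    # Before: the same header/footer logic is duplicated in every report.
    def sales_report_before(rows):
        lines = ["=== Sales ==="]
        lines += [f"{name}: {value}" for name, value in rows]
        lines.append("=== end ===")
        return "\n".join(lines)

    def inventory_report_before(rows):
        lines = ["=== Inventory ==="]
        lines += [f"{name}: {value}" for name, value in rows]
        lines.append("=== end ===")
        return "\n".join(lines)

    # After: the duplication is factored into one cohesive helper.
    def render_report(title, rows):
        body = [f"{name}: {value}" for name, value in rows]
        return "\n".join([f"=== {title} ==="] + body + ["=== end ==="])

    def sales_report(rows):
        return render_report("Sales", rows)

    def inventory_report(rows):
        return render_report("Inventory", rows)

    assert sales_report([("widgets", 3)]) == sales_report_before([("widgets", 3)])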

Continuous Integration

Extreme Programming teams keep the system fully integrated at all times. We say that daily builds are for wimps: XP teams build multiple times per day. (One XP team of forty people builds at least eight or ten times per day!)

The benefit of this practice can be seen by thinking back on projects you may have heard about (or even been a part of) where the build process was weekly or less frequently, and usually led to "integration hell", where everything broke and no one knew why.

Infrequent integration leads to serious problems on a software project. First of all, although integration is critical to shipping good working code, the team is not practiced at it, and often it is delegated to people who are not familiar with the whole system. Second, infrequently integrated code is often -- I would say usually -- buggy code. Problems creep in at integration time that are not detected by any of the testing that takes place on an unintegrated system. Third, weak integration process leads to long code freezes. Code freezes mean that you have long time periods when the programmers could be working on important shippable features, but that those features must be held back. This weakens your position in the market, or with your end users.
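A minimal sketch of such an integration gate, assuming the whole build and test suite can be driven from the command line; the commands and directory names are placeholders, not a specific CI product:

    import subprocess
    import sys

    # Each step must succeed before code may be integrated into the shared baseline.
    STEPS = [
        ["python", "-m", "compileall", "-q", "src"],              # "build": code at least compiles
        ["python", "-m", "unittest", "discover", "-s", "tests"],  # every programmer test must pass
    ]

    def integrate():
        for cmd in STEPS:
            if subprocess.run(cmd).returncode != 0:
                print("Integration refused, step failed:", " ".join(cmd))
                return 1
        print("All steps green -- safe to integrate")
        return 0

    if __name__ == "__main__":
        sys.exit(integrate())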

Collective Code Ownership

On an Extreme Programming project, any pair of programmers can improve any code at any time. This means that all code gets the benefit of many people's attention, which increases code quality and reduces defects. There is another important benefit as well: when code is owned by individuals, required features are often put in the wrong place, as one programmer discovers that he needs a feature somewhere in code that he does not own. The owner is too busy to do it, so the programmer puts the feature in his own code, where it does not belong. This leads to ugly, hard-to-maintain code, full of duplication and with low (bad) cohesion.

Collective ownership could be a problem if people worked blindly on code they did not understand. XP avoids these problems through two key techniques: the programmer tests catch mistakes, and pair programming means that the best way to work on unfamiliar code is to pair with the expert. In addition to ensuring good modifications when needed, this practice spreads knowledge throughout the team.

Coding Standard

XP teams follow a common coding standard, so that all the code in the system looks as if it was written by a single -- very competent -- individual. The specifics of the standard are not important: what is important is that all the code looks familiar, in support of collective ownership.

Metaphor

Extreme Programming teams develop a common vision of how the program works, which we call the "metaphor". At its best, the metaphor is a simple evocative description of how the program works, such as "this program works like a hive of bees, going out for pollen and bringing it back to the hive" as a description for an agent-based information retrieval system.

Sometimes a sufficiently poetic metaphor does not arise. In any case, with or without vivid imagery, XP teams use a common system of names to be sure that everyone understands how the system works and where to look to find the functionality you're looking for, or to find the right place to put the functionality you're about to add.

Sustainable Pace

Extreme Programming teams are in it for the long term. They work hard, and at a pace that can be sustained indefinitely. This means that they work overtime when it is effective, and that they normally work in such a way as to maximize productivity week in and week out. It's pretty well understood these days that death march projects are neither productive nor produce quality software. XP teams are in it to win, not to die.

Conclusion

Extreme Programming is a discipline of software development based on values of simplicity, communication, feedback, and courage. It works by bringing the whole team together in the presence of simple practices, with enough feedback to enable the team to see where they are and to tune the practices to their unique situation.

A subsequent article will discuss the common questions and variations within XP. If you have concerns, please write and we'll try to include them in the followup article.

[And Ralph Johnson said that in XP the software life-cycle is: Analysis, Test, Code, Design.]

Ronald E. Jeffries, XProgramming.com

Don Wells, Extreme Programming: A gentle introduction

XP Developer

Many XP Discussions on WikiWikiWeb

Kent Beck, Extreme Programming Explained: Embrace Change

Kent Beck and Martin Fowler, Planning Extreme Programming

Ron Jeffries, Ann Anderson and Chet Hendrickson, Extreme Programming Installed

Martin Fowler, Kent Beck, John Brant, William Opdyke, Don Roberts, Refactoring: Improving the Design of Existing Code

Waterfall Model

The Standard Waterfall Model for Systems Development

The standard waterfall model for systems development is an approach that goes through the following steps:

  1. Document System Concept
  2. Identify System Requirements and Analyze Them
  3. Break the System into Pieces (Architectural Design)
  4. Design Each Piece (Detailed Design)
  5. Code the System Components and Test Them Individually (Coding, Debugging, and Unit Testing)
  6. Integrate the Pieces and Test the System (System Testing)
  7. Deploy the System and Operate It

This model is widely used on large government systems, particularly by the Department of Defense (DOD).

As part of this standard approach, the party responsible for contracting out the system development (ESDIS for the ECS Contract) can call on a number of tools to help plan and document the system. ECS followed this planning approach, which means that early in the system development, ESDIS set up a standard set of documents for the contractor to supply, as well as a contractual schedule for the major pieces. The development process provided for a number of design reviews, notably the Critical Design Review (CDR).

Until these reviews were completed, there would be little code developed. After the CDR, the contractor would code to the design.

The standard reference for estimating the cost of the system is the COnstructive COst MOdel (COCOMO) developed by Dr. Barry Boehm while he was at TRW [Boehm, B., 1981: Software Engineering Economics, Prentice-Hall]. This model relates the development time and workforce [man-months] to the "Source Lines of Code" (SLOC). Roughly, for an ECS type of system, the workforce (and therefore cost) scales as the cube of the development time. There are simple versions of the model and much more complex ones. Generally, all of the relationships used to predict these quantities are statistical in nature: Dr. Boehm and other workers in software project cost estimation build a database of project schedules and costs and then regress those against SLOC estimates. The most recent version of Dr. Boehm's work is provided in [Boehm, B., et al., 2000: Software Cost Estimation with COCOMO II, Prentice-Hall].
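For a feel of the numbers, here is a minimal sketch of the basic COCOMO 81 formulas in organic mode (Effort = 2.4 * KLOC^1.05 person-months, Schedule = 2.5 * Effort^0.38 months, from Boehm's 1981 book). Inverting the schedule equation gives Effort proportional to roughly Schedule^2.6, which is the approximately cubic relationship mentioned above; the real model adds cost drivers, and COCOMO II changes the formulas:

    # Basic COCOMO 81, organic mode; the coefficients are Boehm's published values
    # for the *basic* model only (intermediate/detailed COCOMO and COCOMO II differ).
    def basic_cocomo_organic(kloc):
        effort_pm = 2.4 * kloc ** 1.05            # effort in person-months
        schedule_months = 2.5 * effort_pm ** 0.38 # calendar development time
        avg_staff = effort_pm / schedule_months   # average team size
        return effort_pm, schedule_months, avg_staff

    for size in (10, 100, 1000):                  # size in KSLOC
        e, t, s = basic_cocomo_organic(size)
        print(f"{size:5d} KSLOC: {e:8.1f} person-months, {t:5.1f} months, ~{s:.1f} people")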

There have been a number of criticisms of the standard waterfall model, including

The standard waterfall model is associated with the failure or cancellation of a number of large systems. It can also be very expensive. As a result, the software development community has experimented with a number of alternative approaches, including

These are discussed in considerable detail in [McConnell, S., 1996: Rapid Development, Taming Wild Software Schedules, Microsoft Press]. Commercial software projects often reduce the formality of the full waterfall model. In the last few years, a paradigm known as eXtreme Programming has emerged that emphasizes reducing the cost of software changes, developing test cases before coding, developing code using pairs of programmers, and putting most of the documentation into the code [Beck, K., 2000: Extreme Programming Explained, Embrace Change, Addison-Wesley].

SENG611 Software Life Cycles

Waterfall
Summary:
In this type of development, each of the stages (requirements, design, development, and test) is separate and does not overlap the others. Each stage has a set of exit criteria which must be met before the next stage can begin. Once a stage has been entered, changes to a previous phase should not happen, but if they are required, the changes are tightly controlled.
Pros:
  • Very straight-forward.
  • Project can move quickly to the implementation phase.
Cons:
  • Project may miss functionality if not all requirements were captured in the requirements stage.
  • Requires a very specific description of requirements and very little volatility in requirements.
  • Bugs are expensive to fix and new requirements are expensive to incorporate.
  • Complete testing is not possible, as unit testing is not taken into consideration in this model.


V-Shaped
Summary:
The V-Shaped model is the same as the Waterfall model except that testing is a consideration throughout the development. Each stage of development is matched with its equivalent stage in testing: Requirements ↔ System testing, High-level design ↔ Integration testing, Detailed design ↔ Unit testing. Looking at how each stage applies to the testing of that stage might affect the way that the particular phase is approached and/or documented.
Pros:
  • Not quite as straightforward as the waterfall method, but quite straightforward nonetheless.
  • Allows for more extensive testing, as the testing is built into each phase of development, rather than as an afterthought (allows for more than black box testing).
  • More bugs are caught during development than are caught using the waterfall method.
  • Project can move fairly quickly to implementation stage.
Cons:
  • Requires a very specific description of requirements and very little volatility in requirements.
  • Bugs are expensive to fix in the final product, although more bugs are caught during development than are caught using the waterfall method.
  • Total development time is longer than the waterfall method.


Prototyping
Summary:
Prototyping consists of developing a partial implementation of the system to give the users a feel for what the developer has in mind. The users then give feedback on what they think of the prototype - what works and what doesn't - and the developer can make changes more easily and efficiently than if the changes were to be made later on in development.
Pros:
  • Since it takes longer to get to the implementation stage, not all project resources are needed at the beginning of the project.
  • Allows for less complete initial understanding of the overall requirements and for requirements volatility, since the users can evaluate and modify the prototype before the final product is produced.
Cons:
  • May take longer to develop using this method because of the long process of developing prototypes, which may be radically altered or thrown away.
  • Requires a pretty good knowledge of the problem domain in order to create a prototype in the first place.


Incremental
Summary:
In an Incremental development, the system is developed in different stages, with each stage consisting of requirements, design, development, and test phases. In each stage, new functionality is added. This type of development allows the user to see a functional product very quickly and allows the user to impact what changes are included in subsequent releases.
Pros:
  • Since it takes longer to get to the implementation stage, not all project resources are needed at the beginning of the project.
  • Allows for a very complex project with incomplete initial understanding of requirements since development is done in small, incremental phases where each phase consists of requirements, design, implementation and test.
Cons:
  • There must be little requirements volatility because it is expensive to go back and redesign something that has already been tested in a previous increment.


Spiral
Summary:
The Spiral model of development is risk-oriented. Each spiral addresses a set of major risks that have been identified. Each spiral consists of: determining objectives, alternatives, and constraints, identifying and resolving risks, evaluating alternatives, developing deliverables, planning the next iteration, and committing to an approach to the next iteration. (Barry Boehm, "A Spiral Model of Software Development and Enhancement", Computer, May 1988)
Pros:
  • Since it takes longer to get to the implementation stage, not all project resources are needed at the beginning of the project.
  • Allows for a very complex project with incomplete initial understanding of requirements since development is done in small, spiral phases where each phase consists of requirements, risk analysis, and design.
  • Allows for high requirements volatility.
Cons:
  • Requires good knowledge of the problem domain.


Humor

Q: What do you get when you combine the waterfall model with the spiral approach?

A: The flush model.

On a more serious note, the waterfall model does work for certain problem domains, notably those where the requirements are well understood in advance and unlikely to change significantly over the course of development, and where reliability of the final product is critical. This assumes you have the resources (time and people) to do it properly -- and this is why the waterfall method has gotten a bad reputation -- it's been applied where there is insufficient time/resources or the requirements aren't well understood.

Most business applications, for example. The waterfall method is wonderfully suited to something like spacecraft control software, where the spiral approach (we called it "stepwise refinement" twenty-five years ago when I was in college -- there's nothing new) just wouldn't work. But businesses are both constantly changing and adaptable -- a business app that only implements half the requested features is probably still more useful than not, whereas software to control complex hardware that's only half done is nearly useless (except that some testing can be done, perhaps).


shanelenagh: XP Does Work on Spacecraft Control Software

http://www.armadilloaerospace.com/

Svartalf: XP wouldn't meet DO-178 requirements...

Which is required for any piece of aviation (including spacecraft) flight control system that is not on an experimental plane.

It doesn't make for the stringent design specification requirements, let alone a few others.

I definitely do NOT want to be flying on a plane, or having one fly over my head, that doesn't meet the DO-178 certification.

AJWM: Re: XP Does Work on Spacecraft Control Software -- NOT

That's a rocket, not a spacecraft. It's in the air for a few minutes at most -- not on, say, a multiyear trip to the outer solar system or even a multiyear stay in GEO.

Plenty of rockets do just fine with no software at all; I've launched a few of same myself.

hawkestein:

...the spiral approach (we called it "stepwise refinement" twenty-five years ago when I was in college -- there's nothing new)...

The spiral approach and stepwise refinement aren't exactly the same thing, although they are both iterative approaches. As I recall, stepwise refinement is a top-down design technique. You basically start by implementing the top level of the system, and you put in stubs for the lower levels. Then you gradually work your way down the abstraction ladder, implementing the lower-level components, until you're all done.

On the other hand, the spiral approach is really all about risk management. At each iteration, you're supposed to identify and address the most significant risks in the project. This doesn't necessarily mean you're using a top-down approach, although it might.
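As a tiny illustration of the stub-based, top-down style described in that comment (the invoice example is invented, not from the thread):

    # Top-down stepwise refinement: the top level is written first and runs against
    # stubs; each stub is later replaced by a real implementation, one level at a time.
    def generate_invoice(order_id):
        items = fetch_items(order_id)        # stub today, database query later
        total = price_items(items)           # stub today, tax/discount rules later
        return format_invoice(order_id, total)

    def fetch_items(order_id):               # stub
        return [("placeholder item", 1)]

    def price_items(items):                  # stub
        return 0.0

    def format_invoice(order_id, total):     # already refined one level down
        return f"Invoice for order {order_id}: ${total:.2f}"

    print(generate_invoice("A-17"))          # runs end-to-end even while stubs remain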

