May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking are not a panacea, but they can help us understand the world better
"I wisdom dwell with prudence,
and find out knowledge of witty inventions."
-- Proverbs 8:12
As the article 64-Bit CPUs: Alpha, SPARC, MIPS, and POWER aptly put it:
In a time when microprocessors are advertised on TV and CPU vendors have their own jingles, the fantastic technology embodied in these chips seems almost irrelevant. And it nearly is -- microprocessor marketing is drawing ever nearer to perfume advertising.
All the emphasis is on packaging and marketing, branding and pricing, channels and distribution, with little left over for solid product details, features, and benefits. Little old ladies who don't know a transistor from a tarantula know the name "Pentium" and think they want "HyperThreading." It's a good thing that for some of us, the technology still matters.
Also, please don't be mesmerized by performance. Performance, which common folks associate with clock rate in MHz, is a tricky thing to measure and to use, and it's not all that matters in chip design. Often the price/performance ratio is as important or more important. Moreover, even for raw performance, benchmarks like SPEC CPU are more relevant than clock speed. Remember the so-called "MHz myth". This is the battle that AMD used to fight in the late '90s: they were spending big bucks trying to remind people that just because a P4 runs at 3 GHz, it doesn't mean it is faster than a 2.2 GHz Athlon.
It's irrelevant how many times per second a chip's clock says "tick-tock"; what matters is how fast real chips can get real jobs done. For real-world purposes, you can compare the best (i.e., the fastest chips) or the most valuable (i.e., the ones with the best speed/price ratio). To use a car metaphor (which most people seem to understand), not everyone needs or wants to drive a Lamborghini. It's expensive, it's hard to park, it's hard to drive, it's cramped, and it drinks gas like there is no tomorrow. Most people are better off with a "normal" car that's fast enough and powerful enough for them, is easy to drive, and has room for the kids and the dog.
There are several classic real architectures, such as IBM System/3xx, SPARC, MIPS, VAX-11, PowerPC, Intel x86, and Itanium.
But real CPUs (other than MIPS) are always a bit too complex to study. Emulators of idealized "teaching" architectures
make a pleasant change from real systems, in the same way that mini-OSes are a real pleasure to work with in comparison with
systems that have gigabyte minimum memory requirements and kernels that no single human being can comprehend :-). Also,
CPU manufacturers always leave some things out, but in an emulator you can try to correct their mistakes ;-).
There are also two important "ideal" machines created by highly respected authors: Donald Knuth's MMIX and Hennessy and Patterson's DLX.
MMIX operates primarily on 64-bit words. It has 256 general-purpose 64-bit registers (too many for my taste ;-) that can hold either fixed-point or floating-point numbers. Instructions have the 4-byte form "OP X Y Z", where each of OP, X, Y, and Z is a single 8-bit byte. If OP is an ADD instruction, for example, the meaning is "X=Y+Z"; i.e., "set register X to the contents of register Y plus the contents of register Z". It has only 256 possible OP codes, which fall into a dozen or so categories.
Again, many people, including myself, think that 256 registers are "too many," and 16, as in the good old System/360, is too few :-). Section 42 describes a limited version of MMIX with only 32 local registers plus 32 global registers (plus the 32 special registers) that is binary compatible with most MMIX programs. The cycle counter, the interval counter, and the usage counter might be very useful for profiling.
It is nice that DIV calculates both the quotient and the remainder, even though those four numbers (two inputs, two outputs) don't fit the general two-input, one-output scheme. I don't like special-purpose registers, but I hesitate to criticize the plethora of "special registers" in MMIX. Subroutine linkage is, at first glance, unnecessarily complicated.
Well, the POWER architecture is increasingly going head-on against Itanium in many large deals, even sinking the good ship Itanic in some situations with - believe it or not - lower prices! And improved performance with better compilers, plus superdense high-bandwidth machines like the superb p655+, where two 8-way single-MCM systems with 1.7+ GHz POWER4+ processors fit within a 4U space! So 16 systems and some 880+ GFLOPs of peak 64-bit power get squeezed into a single rack - four times the density of HP's Itanium 2! Add a nice shared-memory interconnect like the increasingly popular Bristol product, Quadrics QsNet, and you've got a nasty supercomputing monster.
And, these can run 64-bit Linux (almost) as well as their home OS, AIX.
The memory bandwidth of each eight-way box is 51.2 GB/s, or eight times that of a four-way Intel Itanium 2, or 11 times that of a four-way Sun USIII box. Of course, the obtainable fraction of peak FLOPS in the Linpack benchmark (Rmax/Rpeak) is right now far lower on POWER4 than on Itanium 2 - 60% vs. almost 90% - but the extra frequency and greater memory bandwidth more than make up for that in many apps.
Towards the end of the year, the multithreaded POWER5 will also dramatically improve the FP benchmark scores, not to mention twice the CPU density, a 25% larger cache, even higher memory bandwidth, and lower latency. But don't expect major clock speed improvements; the focus was on real performance and reliability benefits - as if chipkill memory, eLiza self-healing, and per-CPU logical partitioning were not enough...
Finally, the existing SuSE and coming RedHat Linux on POWER4 and its follow-ons, natively 64-bit of course, aim to give extra legitimacy to it being "an open platform" at least as much as Itanium is.
On the low end, the PowerPC 970 - or POWER4 Lite - might be (or pretty much will be, now that Motorola's G5 is down the drain) the basis of Apple's next-generation Mac platform: its 64-bit ticket to the future. With its low power - down to less than 16 W in the low-power mobile 1.2 GHz mode - it will also enable very dense server blades and, of course, POWERful 64-bit ThinkPads or PowerBooks running AIX, Linux, or MacOS...
For IBM, then, Opteron makes sense as an excellent tool to corner Intel, with POWER on the high end and Opteron on the low end, both 64-bit and both soon manufactured by IBM Microelectronics. No, I didn't say both owned by IBM, even though that is a possibility: AMD does need a sugar daddy - or rather a sugar mommy. Got the hint as to who the feisty "sugar mommy" could be?
What about the other major vendor, from SUN-ny California? Well, the UltraSPARC IIIi is finally out - no surprise there; it helps a bit but is still far behind all other major CPUs (except MIPS) in most benchmarks. Yes, Sun's mantra of something like "we don't care about speed, we focus on our brand" can continue, but what is computing about if not speed and performance?
Still no sign of US IV either, and even when it comes, don't expect much extra per-thread performance over US III. When (and if) it really rolls out in volume towards year-end, it will have to fight both POWER5 and Madison 2, both very powerful beasts on the rise, backed by humongous, ruthless megacompanies - each of which could eat Sun as an appetiser.
You can read hundreds of pages of Net discussions about the particular merits and demerits of SPARC vs other architectures, from all sides and viewpoints, but the fact remains - SPARC is the turtle of the 64-bit world, slow and maybe long-lived compared to, say, Alpha, but even turtles have to die at some point... and before they die, they become extremely slow...
The 64-bit Opteron is fast in some things compared to the rest of the gang, and not so fast in others, but whatever the case, current and future Opterons are vastly superior, performance- and feature-wise, to low-end and midrange SPARC offerings at umpteen times lower cost. Plus, they are as 64-bit as SPARC (or any other 64-bit CPU) is... so Sun adopting Opteron would simply be common sense...
THIS ARTICLE hopes to cast some light on why 64-bit addressing, that is, the native mode of the Opteron or Itanium versus that of the Athlon or Pentium is important in 2003. It also attempts to address what the requirements are and - equally importantly - are not.
Before we start, an easy one. Why 64-bit and not 48-bit? Because it costs little more to extend a 32-bit ISA to 64-bit than to only 48-bit, and most people like powers of two. In practice, many of the hardware and operating system interfaces will be less than 64 bits, sometimes as few as 40 bits, but the application interfaces (i.e. the ones the programmers and users will see) will all be 64-bit.
There are several non-reasons quoted on the Internet; one is arithmetic performance. 64-bit addressing does not change floating point; it is merely associated with 64-bit integer arithmetic - while it is easy to implement 32-bit addressing with 64-bit arithmetic or vice versa, current designs don't. Obviously 64-bit makes arithmetic on large integers faster, but who cares? Well, the answer turns out to be anyone who uses RSA-style public key cryptography, such as SSH/SSL, and almost nobody else.
On closer inspection, such use is dominated by one operation (NxN->2N multiplication), and that is embedded in a very small number of places, usually specialist library functions. While moving from 32 to 64 bits does speed this up, it doesn't help any more than adding a special instruction to SSE2 would. Or any less, for that matter. So faster arithmetic is a nice bonus, but not a reason for the change.
File pointers are integers, so you can access only 4 GB files with 32 bits, right? Wrong. File pointers are structures on many systems, and files of more than 4 GB have been supported for years on a good many 32-bit systems. Operations on file pointers are usually well localised and are normally just addition, subtraction, and comparison anyway. Yes, going to 64 bits makes handling large files in some programs a bit easier, but it isn't critical.
Let's consider the most common argument against 64-bit: compatibility.
Almost all RISC/Unix systems support old 32-bit applications on 64-bit systems, as did IBM on MVS/ESA, and there is a lot of experience on how to do it almost painlessly for users and even programmers.
Microsoft has a slightly harder time because of its less clean interfaces, but it is a solved problem and has been for several decades.
Now let's get onto some better arguments for 64-bit. One is that more than 4GB of physical memory is needed to support many active, large processes and memory map many, large files - without paging the system to death. This is true, but it is not a good argument for 64-bit addressing. The technique that Intel and Microsoft call PAE (Physical Address Extension) allows 64 GB of physical memory but each process can address only 4GB. For most sites in 2003, 64GB is enough to be getting on with.
IBM used this technique in MVS, and it worked very well indeed for transaction servers, interactive workloads, databases, file servers and so on. Most memory mapped file interfaces have the concept of a window on the file that is mapped into the process's address space - PAE can reduce the cost of a window remapping from that of a disk transfer (milliseconds) to that of a simple system call (microseconds). So this is a rather weak reason for going to 64-bit addressing, though it is a good one for getting away from simple 32-bit.
Now, let's consider the second most common argument against 64-bit: efficiency. Doubling the space needed for pointers increases the cache size and bandwidth requirements, but misses the point that efficiency is nowadays limited far more by latency than bandwidth, and the latency is the same. Yes, there was a time when the extra space needed for 64-bit addresses was a major matter, but that time is past, except perhaps for embedded systems.
So 64-bit addressing is unnecessary but harmless except on supercomputers? Well, not quite. There are some good reasons, but they are not the ones usually quoted on the Internet or in marketing blurb.
The first requirement is for supporting shared memory applications (using, say, OpenMP or POSIX threads) on medium or large shared memory systems. For example, a Web or database server might run 256 threads on 16 CPUs and 32GB. This wouldn't be a problem if each thread had its own memory, but the whole point of the shared memory programming model is that every thread can access all of the program's global data. So each thread needs to be able to access, say, 16GB - which means that 32-bit is just not enough.
A more subtle point concerns memory layout. An application that needs 3GB of workspace might need it on the stack, on the main heap (data segment), in a shared memory segment or in memory set aside for I/O buffers. The problem is that the location of those various areas is often fixed when the program is loaded, so the user will have to specify them carefully in 32-bit systems to ensure that there is enough free space in the right segment for when the program needs its 3GB.
Unfortunately, this choice of where to put the data is often made by the compiler or library, and it is not always easy to find out what they do. Also, consider the problem of an administrator tuning a system for multiple programs with different characteristics. Perhaps worst is the case of a large application that works in phases, each of which may want 2GB in a different place, though it never needs more than 3 GB at any one time. 64-bit eliminates this problem.
To put the above into a commercial perspective, almost all general purpose computer vendors make most of their profit (as distinct from turnover) by selling servers and not workstations. 64-bit addressing has been critical for some users of large servers for several years now, and has been beneficial to most of them. In 2003, 64-bit is needed by some users of medium sized servers and useful to most; by 2005, that statement could be changed to say `small' instead of 'medium sized'. That is why all of the mainframe and RISC/Unix vendors moved to 64-bit addressing some time ago, and that is why Intel and AMD are following.
On the other hand, if you are interested primarily in ordinary, single user workstations, what does 64-bit addressing give you today? The answer is precious little. The needs of workstations have nothing to do with the matter, and the move to 64-bit is being driven by server requirements.
Nick Maclaren has extensive experience of computing platforms
Linux.com - Ultra DMA 66. See also
WWW Computer Architecture Home Page [July 7, 1999]
|Plan for the semester; Review: Key abstractions||ppt, pdf||P&H Ch 1; IBM 360, B5000|
|Review: Instruction Set Arch, CISC/RISC, pipeline, hazards, comp. opt.||ppt, pdf||P&H Ch 2|
|360 stack debate / Prereq quiz||philipb||P&H 5.8-9, 5.14-15||asmt 2|
|Case Study: Network Embedded Architecture||ppt|
|Latency Tolerance, Multithreading||notes||MT arch, MT anal, horizon; Meiko CS-2, Paragon; P&H 8.1-7|
|Low-Power Design||notes||Low-power CMOS, PicoRadio, Variable-Voltage Core-Based Systems|
|*Configurable* Architecture||Kurt Keutzer||Safari|
|Network Processor Arch||Chuck Narad||eetimes, iXP2800|
|Micro Architecture - Hazards, Dynamic sched||ppt||P&H Ch 3.1-3||asmt 3|
|Programming Pervasive Applns||Robert Grimm|
|Superscalar Processor Design: Techniques||John Shen||P&H Ch 3.4-15|
|BP + AR => ILP||ppt|
|PIII, PIV => SMT||P&H 3.10-12||asmt 4|
|Vector Processors/MM inst||ppt||Cray-1 Computer System, Russell (readings p 40-49 and intro p 89)|
|Q&A in DSM and Availability (Dan Sorin @ 3:30)||P&H 6.5-7||bring a question|
|Analysis of Perf, Results||basic stats; Art of Computer System Perf. Analysis, Ch 2, 13|
|CM2 vs RAW||The Raw Microprocessor|
|Error Detection, handling||SECDED on memory (Hsiao), disk (Blaum), CRC & tool|
|Quantum Computing||Fred Chong||practical arch|
|Date||Lecture||Class Notes||PDF (2slides)||PDF (6 slides)|
|08/24||Introduction and overview of the course||lect01.ppt||lect01_2.pdf||lect01_6.pdf|
|08/29||Technology and application trends; performance evaluation||lect02.ppt||lect02_2.pdf||lect02_6.pdf|
|08/31||Quantitative principles of computer design||lect02.ppt|
|09/05||Instruction set architectures; memory addressing||isa.ppt||isa_2.pdf||isa_6.pdf|
|09/07||Role of compiler technologies||isa2.ppt||isa2_2.pdf||isa2_6.pdf|
|09/14||Introduction to pipelining||pipeline.ppt||pipeline_2.pdf||pipeline_6.pdf|
|09/21||Exceptions and FP pipelines||exceptions.ppt||exceptions_2.pdf||exceptions_6.pdf|
|09/26||Instruction level parallelism; dynamic scheduling||ilp.ppt||ilp_2.pdf||ilp_6.pdf|
|10/03||Compiler support for parallelism||ilp3.ppt|
|10/05||Hardware support for parallelism||ilp4.ppt|
|10/10||Studies of ILP and Midterm review||ilp4.ppt|
|10/19||Reducing cache miss rates||miss rate|
|10/24||Reducing cache miss penalty and hit time||cache_optimization|
|10/26||Main memory||main memory|
|10/31||Virtual memory||virtual memory|
|11/02||Storage devices and buses||storage devices|
|11/07||Metrics and performance of I/O||io-queuing|
|11/14||Unix file system||io-unixfs|
|11/21||Switching and commercial networks||switching|
|12/05||Final review and course evaluation||goodbye-CA|
Hardware fundamentals. Lecture notes, interactive tests, and links to related materials. By Brian Brown, Central Institute of Technology, New Zealand.
Our IP consists of the SPARC Instruction Set Architecture, SPARC trademarks, and SPARC derivative trademarks. The organization is funded entirely by our members in support of the SPARC architecture and its Open Standards technology.
SPARC International fosters innovation of SPARC by offering testing and branding programs, and by promoting and protecting SPARC and SPARC-related brand names. The organization maintains this openly and cooperatively defined technology by using its membership fees to ensure that SPARC maintains continuity with the industry standards of binary compatibility. It is this organization's sole responsibility to clarify these definitions, which are made available for free download from our web site sparc.org/resource.htm. You do not have to be a member to download. However, becoming a member provides your company the choice to test and brand products based on the SPARC architecture.
Today, SPARC is one of the foundation architectures in the computer industry, with SPARC trademarks registered in over 160 countries worldwide. The SPARC architecture proved to be the most widely accepted technology for systems in the financial, academic, industrial, mission-critical, and certainly the Internet markets. SPARC is the logical choice for demanding, embedded applications simply because SPARC development environments streamlined the creation of complex designs on short timelines. With more than two million developers and over 30,000 applications, the SPARC community ranks among the world's largest, while SPARC definitions continuously demonstrate the exceptional versatility of this open architecture.
We encourage inquiries from college professors and university students learning the SPARC architecture definitions. Contact email@example.com.
The LEON core is a SPARC* compatible integer unit developed for future space missions. It has been implemented as a highly configurable, synthesisable VHDL model. To promote the SPARC standard and enable development of system-on-a-chip (SOC) devices using SPARC cores, the European Space Agency is making the full source code freely available under the GNU LGPL license.
The LEON core has been extensively tested against the SPARC V8 architecture manual and the IEEE-P1754 (SPARC) standard, but it has not been formally tested and certified by SPARC International as SPARC V8 compliant. ...
DO YOU WANT TO HELP? If you wish to contribute to LEON and work on (or donate) any of these modules, please contact Jiri Gaisler.
The DLX tools consist of the following programs:
At the following URL, you'll find a DLX machine description file for gcc 2.7.X:
Serial Communications -- from FreeBSD textbook
External Parallel Port devices and Linux - Links to External Parallel Port projects and documentation.
Last but not least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~Archibald Putt, Ph.D.
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...
|You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info|
The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.
Last modified: March 12, 2019