I did learn on the Z80 but rarely venture into assembly now so would have little to contribute but I'd probably follow along with interest.
What sort of things would you discuss - the Z80 has such poor performance compared with modern processors that surely no one would use one in a new project?
That was a nice collection Andy and it's good to know it went to a good home.
In Australia we had much larger numbers of the Microbee than the Sinclair machines.
Would you believe the Z80 is still in production today?
I know it's still being made, but so are spares for the Ford Model T!
My question was about using one in a new project - is there any reason why one would?
Ah but from the original company?
Yes, as a teaching device. The Z80 consists of only about 10,000 transistors, hand-laid without CAD.
But if these could be made using today's transistors (say 14nm) it should be yoctoscopic and draw next to no power!
That also raises the question of whether it can be totally modelled in software and hence all cases proved out.
I'm thinking of safety applications.
For teaching you are better off with the 6502, much nicer design (IMO), far fewer transistors, full VHDL/Verilog models available.
@Andy, it's probably far easier to "prove" a modern design like the Cortex M0. In the dim and distant past "provably correct" micros have been designed, but they flopped totally.
The 6502 was used in the schools' BBC Micro but strangely enough we never looked at the individual transistors when I was at school.
We did at uni but not that processor.
I've used both of these micros and you're trying to compare apples and lychees.
Each micro has its own strengths and weaknesses.
The most important thing is whether you can accomplish your desired task quickly and easily.
VHDL, Verilog and programming models don't necessarily tell you the real story about what's inside the ICs!
To do that you have to look at the actual die designs.
These processors were aimed at different markets.
The Z80 was specifically designed to be backward compatible with the Intel 8080 so that it could immediately run a vast amount of software. About 60% of its die is dedicated to instruction decoding. It also has built-in DRAM refresh support.
The 6502 took a different approach. It was a cheap derivative of the 6800 and, thanks to its lower price, was used by many manufacturers.
The Apple ][ needed a plug-in Z80 board to access the loads of Z80 CP/M software.
One obvious reason why the Z80 uses more transistors than a 6502 is that it has four times as many registers as the 6502, and those extra registers need lots more transistors.
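For a rough tally, here's a throwaway sketch listing the two programming models (counting conventions vary - flags, PC, 8-bit vs 16-bit - so treat the exact ratio loosely):

```python
# Rough register tally from the two chips' documented programming models.
# Counting conventions vary (whether you count F, PC, etc.), so this is
# only indicative of the "roughly four times as many" claim.
z80_regs = [
    "A", "F", "B", "C", "D", "E", "H", "L",          # main set
    "A'", "F'", "B'", "C'", "D'", "E'", "H'", "L'",  # alternate (shadow) set
    "IX", "IY", "SP", "I", "R",                      # index / system registers
]
mos6502_regs = ["A", "X", "Y", "S", "P"]             # accumulator, index regs, stack, status

print(len(z80_regs), len(mos6502_regs))  # 21 vs 5
```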
As for speed, did you know the Z80 can exchange the contents of two 16-bit registers in 4 T-states? That's 1µs @ 4MHz.
Later versions of the Z80 can run at up to 50MHz with single-cycle execution - that's 20ns for the 16-bit register exchange instruction. Try that with a 6502!
(If a 4GHz version were built it'd be 250ps!)
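The timing arithmetic above is easy to sanity-check: execution time is simply T-states divided by clock frequency. A quick sketch (using the T-state count and clock speeds quoted in this thread):

```python
# Sanity-check the timing claims: time = T-states / clock frequency.
# The classic Z80 16-bit exchange takes 4 T-states; the later
# single-cycle parts effectively take 1 cycle per instruction.

def instruction_time(t_states, clock_hz):
    """Execution time in seconds for an instruction taking t_states cycles."""
    return t_states / clock_hz

print(instruction_time(4, 4e6))    # classic Z80, 4 T-states @ 4 MHz  -> 1 us
print(instruction_time(1, 50e6))   # single-cycle core @ 50 MHz       -> 20 ns
print(instruction_time(1, 4e9))    # hypothetical 4 GHz version       -> 250 ps
```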
As for stack size and flexibility, the Z80's stack can be located at any memory address and be of any size. The 6502's stack has a fixed size of 256 bytes at a fixed address (page 1).
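To make that difference concrete, here's a toy model of the two stack-pointer schemes (addresses per the chips' documented behaviour; a sketch, not an emulator):

```python
# Toy model of the two stack schemes. The 6502's S register is 8 bits and
# is always used as an offset into page 1 (0x0100-0x01FF), so the stack is
# fixed at 256 bytes at a fixed address. The Z80's SP is a full 16-bit
# register, so the stack can live anywhere in the 64 KB address space.

def mos6502_stack_addr(s):
    """Effective stack address for an 8-bit S register value."""
    return 0x0100 + (s & 0xFF)      # always lands in page 1

def z80_stack_addr(sp):
    """Effective stack address for a 16-bit SP register value."""
    return sp & 0xFFFF              # anywhere in the address space

print(hex(mos6502_stack_addr(0xFF)))  # 0x1ff - top of the fixed page
print(hex(z80_stack_addr(0x8000)))    # 0x8000 - e.g. a stack placed mid-RAM
```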
As for provability, NASA must have thought it cut the mustard, because the Z80 was used on the Space Shuttle.
I'll raise your 50MHz ........
re provability - I told you no one cared - the VIPER was a total flop (and that's about how fast it was)
It's interesting that when people have built fast versions of old architectures it hasn't turned out to be good business - remember Scenix and the 100MHz PIC?
Programmable logic doesn't count because it can be applied to both!