|
I remember that one fairly well. I had quite a bit of documentation on it. It was a multi-chip module as I recall, with three chips in the module. I remember that it was very, very innovative and completely unsuccessful. At least the Itanium was a little more successful than that one but not by much.
|
|
|
|
|
Things were dictated in those days by many factors... IBM picked Intel because of price, availability, 8-bit support (to work with mature 8-bit equipment) and the existing code base... The 68000 also got its share via Atari and Amiga; their success pushed that CPU too...
The Z8000 was relatively slow, a 16-bit design with no future roadmap, and most importantly it lacked the financial backing that Intel and Motorola could provide...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
|
|
|
|
|
So you've re-invented EMS. Your design is superior because it's designed in the hardware from the beginning, while LIM EMS was a retro-fitted kludge.
Nice going!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
It's simpler. I don't have a real MMU and caches between the CPU and the memories, just a simple logic to extend the address lines.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
LIM EMS wasn't a "real MMU"; just a mechanism for enabling one of the boards plugged into the address bus and disabling the others within that same address range. All controlled by software.
At that time, when LIM EMS was The Standard, I was truly fascinated by it. In 2010 I switched jobs and started programming a modern implementation of the 8051, with on-chip bank switching: the lower 48 Kbyte was fixed; for the upper 16K, four different banks could be switched in, for a total of 112 Kbyte.
To be frank: I hated all the complications it led to! Bank switching was one of the greatest hassles, but the 8051 is a true 8-bitter, not 16. You had to be extremely careful with arithmetic operations, sign extension when mixing 8- and 16-bit entities, etc. When we switched to ARM CPUs a couple of years later it was such a relief - even though we started out with the M0. You'll never get me back to an 8-bit CPU again!
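A rough sketch of the address mapping described above, with the 48 KB fixed region and four switchable 16 KB banks (the names and the exact window placement are my assumptions, not the actual product's layout):

```python
# Hypothetical layout: 48 KB fixed region, one 16 KB switchable window,
# four banks -> 48 + 4 * 16 = 112 KB of memory behind a 16-bit address bus.
FIXED_SIZE = 48 * 1024           # 0x0000..0xBFFF, always visible
WINDOW_SIZE = 16 * 1024          # 0xC000..0xFFFF, the switched window
NUM_BANKS = 4

def physical_address(cpu_addr: int, current_bank: int) -> int:
    """Map a 16-bit CPU address to an address in the 112 KB physical space."""
    if cpu_addr < FIXED_SIZE:
        return cpu_addr                        # fixed region, bank-independent
    offset = cpu_addr - FIXED_SIZE             # offset inside the window
    return FIXED_SIZE + current_bank * WINDOW_SIZE + offset

# The same CPU address reaches a different physical byte in each bank:
assert physical_address(0xC000, 0) != physical_address(0xC000, 1)
```

The hassle the post complains about lives entirely in `current_bank`: the compiler or the programmer has to know which bank is mapped in before every access to the upper window.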
|
|
|
|
|
Member 7989122 wrote: I hated all the complications it led to! Bank switching was one of the greatest hassles
I think I can avoid the worst of it for code, less so for data.
I have a smaller, unswitchable RAM for the stack, so that's no problem. Then I have routines that handle calling and returning from subroutines. These see to it that parameters and registers are saved on and restored from the stack. They will live in ROM and never be switched away. By incorporating the bank switching into them, the program remains almost totally unaware that the subroutine it is calling has been loaded into another memory page.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: This way the code will not be aware that it's running in paged memory. I can call anything at any time and the code will not notice anything of the bank switching.
How do you do that if a call instruction, or a stack pop into the PC, only works with a 16-bit register?
|
|
|
|
|
The processor does not have a fixed program counter, nor does it have instructions for calling or returning from a subroutine.
Instead, I can load an address into any of its 16 registers and simply make that register the current program counter to call a routine. To return, I make the previous register, which still points to the address where the caller left off, the program counter again.
That's the simplest technique. It does not involve any use of the stack at all. The stack, by the way, works in a similar fashion. I can load an address into any register at any time and make this register the current stack pointer.
Implementing a stack protocol for subroutines means writing two routines using this basic calling technique, one to call another routine and the other one to return. I will have to pass the address of the routine that is to be called and the parameters. Adding a further parameter for the memory page of the routine and doing the switching in the calling routine actually is very simple. The page of the calling routine is saved on the stack, along with the return address. Both are restored when returning.
Both the stack(s) and the routines for calling and returning must stay outside the paged memory; then everything works out for the code and its subroutine calls. To begin with, logical pages will be identical to the physical pages the code is loaded into, to keep it simple. Later I may use an allocation table to automatically convert logical page numbers to physical page numbers.
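A toy model of that register-swap technique (names illustrative; no real instruction set is implied):

```python
class Cpu:
    def __init__(self):
        self.r = [0] * 16        # sixteen 16-bit address registers
        self.p = 0               # which register currently acts as the PC
        self.prev_p = 0          # which register was the PC before a call

    def call(self, reg, target):
        """Load the target address and designate that register as the PC.
        The old PC register keeps pointing just past the call site."""
        self.r[reg] = target
        self.prev_p = self.p
        self.p = reg

    def ret(self):
        """Returning is just designating the caller's register as PC again."""
        self.p = self.prev_p
```

As the post says, this simplest form involves no stack at all - but it also supports only one outstanding call; the stack-based protocol described above exists precisely to lift that limit.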
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Still looks exaggerated to me, but, very nice indeed.
|
|
|
|
|
It is, but the board is expensive enough that I don't want to waste any space. I don't have to fill the memory sockets to the brim; even installing a single memory IC would work, but the ICs are not that expensive anymore.
I also want to try my luck at implementing multitasking, and simply assigning memory pages to tasks and switching them out as needed may be helpful.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
64k ought to be enough for anybody.
|
|
|
|
|
I love to point out the context of the original quote - about 640K. It turns out that most people who refer to the 640K do not know it.
The 8086 could address up to 1 MByte of physical RAM. The OS needs quite some space to keep the significant parts of its code resident in RAM. Out of that 1 MByte, how large a fraction should be reserved for the OS, drivers etc., and how much should be offered to the user programs?
Give 384K to the OS and drivers, 640K to user applications. 640K should be enough for anybody.
In that context, the remark makes perfect sense. But of course it also takes the fun out of quoting it.
|
|
|
|
|
I don't think that is the correct context. He said it in 1981 and the PC was announced in August of 1981. The original, base-line configuration of the PC had no drives, 16K of RAM, and a cassette interface. This was in an era when most home enthusiasts used S100 bus systems and 64K was a lot of memory for those.
In fact, what put Microsoft on the map originally was their BASIC that ran on those S100 systems. FWIW, my second job out of school was at a company that made a robot and its controller used MS BASIC as its programming language and it was embedded in the ROM. They had printed out a complete listing of BASIC and it was a foot-high stack of paper. You could see Gates and Allen's names throughout the code.
|
|
|
|
|
He himself has given that explanation. Of course, he may have made it up.
Yet you present a different context - "64K was a lot of memory" - which would make a reference to ten times as much memory seem even more reasonable (virtual memory on PCs was unknown at that time). But I find it rather unlikely that the statement was made in that context; frankly, it would make far less sense there. Then again, if it had been made in that context, it would be much easier to defend.
I have found no sources documenting it from 1981 - the earliest reference is 1985.
But that is the fun of undocumented quotes - they can be argued over forever! And some day the laugh gets stuck in your throat... Like the famous Thomas J. Watson (IBM CEO for ages) remark about the world needing maybe five computers: if you claim today that five publicly available cloud offerings, or five different social networks, are sufficient, no one will laugh at you.
|
|
|
|
|
Interesting. I found many references to 1981. Here's a sample : Google[^]
Regardless, after thirty plus years, it can sometimes be difficult to find definitive references. Given my (foggy) memory, I can understand that. I used to have stacks and stacks of old EE Times and IEEE Transactions on Microprocessors but I had to unload them all three moves ago.
|
|
|
|
|
A computer will never need more than 64K and other truths!
|
|
|
|
|
64k ought to be enough for everybody?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Talking about 64K ... but make that 64K bits, please...
Around 1980, RAM chips grew from 16K bits to 64K bits. However, the 64K chips were badly plagued by cosmic alpha radiation, causing the microscopic (in those days) dynamic-RAM capacitors to discharge, causing a lot of bit errors. I worked on a 16-bit machine that had self-correcting memory: each 16-bit word was protected by 6 error-correcting bits.
For several years, people were fearing that we had reached the limit for RAM density, that the alpha radiation made it impossible to make denser chips, with smaller geometries.
After several years, it struck me that I hadn't heard those worries for a long time - and there were 256K RAM chips on the market. To this day, no one has been able to tell me what happened. How can we today make Gbit-size RAM chips that are not knocked out by alpha radiation? Are today's chips built with a shield that stops alpha particles? Or was the alpha explanation wrong, and there was another, curable, reason for the random discharge of the capacitors?
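For a flavor of how 6 extra bits can protect a 16-bit word, here is a minimal single-error-correcting, double-error-detecting Hamming sketch (my own illustration, not the actual scheme that machine used):

```python
# 16 data bits in positions 1..21 (skipping powers of two), 5 Hamming
# check bits at positions 1, 2, 4, 8, 16, plus one overall parity bit
# at index 0: 16 + 6 bits, as in the machine described above.
DATA_POSITIONS = [i for i in range(1, 22) if i & (i - 1)]  # not powers of 2

def encode(word):
    """Pack a 16-bit word plus 6 protection bits into a 22-bit codeword."""
    bits = [0] * 22
    for pos in DATA_POSITIONS:
        bits[pos] = word & 1
        word >>= 1
    for p in (1, 2, 4, 8, 16):   # check bit p covers positions with bit p set
        bits[p] = sum(bits[i] for i in range(1, 22) if i & p) & 1
    bits[0] = sum(bits) & 1      # overall parity enables double-error detection
    return bits

def decode(bits):
    """Return (word, status), correcting any single flipped bit."""
    syndrome = 0                 # on a single error, this names the bad position
    for p in (1, 2, 4, 8, 16):
        if sum(bits[i] for i in range(1, 22) if i & p) & 1:
            syndrome += p
    parity_ok = sum(bits) & 1 == 0
    if syndrome and parity_ok:
        return None, "double error"          # detectable but uncorrectable
    if syndrome:
        bits[syndrome] ^= 1                  # flip the bad bit back
    word = 0
    for pos in reversed(DATA_POSITIONS):
        word = (word << 1) | bits[pos]
    return word, "corrected" if syndrome else "ok"
```

Any single discharged capacitor in the stored word is silently repaired on read, which is exactly what made 64K-bit DRAM usable despite the bit errors.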
|
|
|
|
|
Member 7989122 wrote: Or was that alpha-explanation wrong, and there was another, curable, reason for the random discharge of capacitors? Not all silicon is created equal. In the old days you are talking about, CMOS was just beginning to appear and was not yet widely accepted.
The little processor I have been using all along must have been the first CMOS processor ever. That gave it some unique properties, like using very little power and giving it a higher radiation resistance. There even were special radiation hardened versions made in a special process, called silicon on sapphire.[^] These properties made it the first processor to be used in space.
Today practically everything is CMOS or a more advanced variant of CMOS; otherwise most devices would go up in flames because of their thousand-times-higher power requirements. My best guess is that the higher radiation resistance was yet another reason why CMOS 'won'.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
In 1980 I built myself a UK101 kit computer, with a 6502 processor. The motherboard had 8 pairs of sockets at 1 KB per pair (4-bit chips), for a grand total of 8 KB RAM. Pretty soon this was a limitation on what I wanted to do, so I created a solution: buy another 16 chips, bend up the "chip select" pin on each one through 90 degrees (carefully - do it too fast and the pin will snap) so it stuck out sideways; then (carefully) solder the remaining pins directly onto the memory chip in the socket below it. Carefully, as too much heat will wreck the chip. Then take a wire and connect the 16 "sticking-out" pins to the next pin-out of the main addressing chip, so that the "extra" chips occupy the next 8 KB of memory space.
I used the same technique to double the 1 KB of display memory to 2 KB, and with a couple of cuts on the motherboard and another jumper wire, doubled the video access rate to the memory and extended the address range. Each video character was now half the height it was previously, giving 32 rows of 64 characters instead of just 16.
Oh, and the 6502 (by the time I built my UK101) was quite capable of running at 2 MHz, twice the UK101's design speed of 1 MHz. Again, one cut of the motherboard and a jumper wire to the next pin-out of the main timer chip and hey presto - double the clock speed. Did the same (with a rotary selection switch) for the RS232 output, speeding up tape cassette output from 300 baud to 600 or 1200.
Never experienced any over-heat issues, but then it was running as a "naked" board with no case...
|
|
|
|
|
|
Does anyone have experience with W3.CSS? I'm considering using the framework for future web pages.
Gus Gustafson
|
|
|
|
|
Yes. It is, IMHO, the best responsive CSS framework to start with (until you do your own)...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
|
|
|
|
|
Thanks for the thought. In reviewing the specification, it appears that I will have to wean myself from <table>s and replace them with <div>s.
Gus Gustafson
|
|
|
|
|
gggustafson wrote: I will have to wean myself from <table>s and replace them with <div>s. I wonder about that.
Unless <table> is being deprecated, I would use whichever is most convenient and most easily controlled at the time. The table model is rather convenient for PHP-generated output from database record sets. Very predictable rendering.
Rather than wean yourself away, just master both methods of handling the problem and use what you think is best.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|