|
What do you expect when a frumious bandersnatch is involved?
Software Zen: delete this;
|
|
|
|
|
I need a break. Time to go back to when things were simple (or at least appeared to be so) and do something just for fun. Like writing a little game for my old box. A little 8-bit processor, 4k of RAM, a weird little graphics chip and an assembler are all you need.
But wait, this is tech from 1976! A graphics chip? Yep, we are racing the electron beam again. But they did that in such a clever way that a kid could get it to work. It involved interrupts and DMA, and all the code that had to stay in sync with the electron beam was contained in an interrupt routine of fewer than 32 instructions.
However, that simplicity still does not come without a price. The graphics chip issues 1024 x 60 DMA requests every second and calls the interrupt routine 60 times a second on top of that. Whatever is going on in that interrupt routine adds up very quickly and takes away a good percentage of the instructions per second the processor could otherwise 'waste' on such luxuries as executing its program.
Just how much, exactly? Those interrupt routines come in two flavors and we get two very different values. After all these years I have now taken the time to actually do the math:
The worst case is those interrupt routines that manipulate the DMA pointer to repeat every raster line two or more times. To do that, you have to stay in the interrupt routine for the entire duration of the frame, leaving only the vertical blank period for program execution. Just as bad as racing the beam always was. At least you got a more useful vertical resolution this way and needed a significantly smaller graphics buffer. Still, this left you with only 33.64% of the CPU time for your program. Ouch.
The better option was not to race the beam at all. The interrupt routine merely reset the DMA pointer to the beginning of your graphics buffer for every frame and did not hang around any longer to repeat any scan lines. That left you with a weird resolution of 64 x 128 pixels and required a graphics buffer of 1024 bytes, but also left 71.63% of the CPU time for the actual program.
So, which option would you choose? Memory is not as much of an issue as it used to be, but I think I can live with a weird resolution and take the performance gain. By the way, the same old processor, unrestricted by the old graphics chip, gives me more than 12 times the instructions per second compared to that worst case I have been using for 45 years now. And I have not even really tried to overclock it yet.
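If you want to check that percentage yourself, here is a rough back-of-the-envelope sketch in C. It assumes the commonly quoted timing for this kind of 1976 graphics chip (262 scan lines per frame, 14 machine cycles per line, 8 DMA cycles stolen on each of the 128 displayed lines) and a made-up figure for the interrupt overhead, so take the exact digits with a grain of salt:

```c
#include <stdio.h>

/* Rough cross-check of the "don't race the beam" figure, using commonly
 * quoted frame timing for this kind of graphics chip. All numbers below
 * are assumptions, not measurements from the actual machine. */
int main(void)
{
    const double lines_per_frame = 262.0;  /* total scan lines per frame      */
    const double cycles_per_line = 14.0;   /* machine cycles per scan line    */
    const double displayed_lines = 128.0;  /* lines with active DMA           */
    const double dma_per_line    = 8.0;    /* bytes (cycles) stolen per line  */
    const double isr_cycles      = 30.0;   /* hypothetical per-frame ISR cost */

    const double frame_cycles = lines_per_frame * cycles_per_line;   /* 3668  */
    const double dma_cycles   = displayed_lines * dma_per_line;      /* 1024  */

    double left = (frame_cycles - dma_cycles - isr_cycles) / frame_cycles;
    printf("CPU left for the program: %.2f%%\n", left * 100.0);      /* ~71%  */
    return 0;
}
```

That comes out within a hair of the 71.63% above. The worst case is harder to model this crudely, since it depends on exactly how long the routine hangs around to repeat scan lines, so I will not try to reproduce the 33.64% here.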
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I wouldn't necessarily limit myself to a single approach. I'd probably use the "better" graphics mode when I could afford it, and switch to the more expedient mode when I needed it.
I'd say write as much of the supporting code as possible first, to figure out what you'll need.
Check out my IoT graphics library here:
https://honeythecodewitch/gfx
|
|
|
|
|
Even in the old days I had a collection of subroutines to puzzle together whatever I needed without rewriting everything all the time. Today I let the assembler do that dirty work for me. Just a tiny change in the configuration and I can have double buffering, sprites, a text mode and also change the resolution. The only thing I can't do is switch around these options at runtime. It would be possible, but then I would have to keep everything in memory at once and always reserve the largest buffers, just in case. Not a very economical use of the small amount of memory available.
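Think of it as the moral equivalent of conditional compilation. A hypothetical C-flavored sketch of the same idea (all names and sizes made up for illustration) would look like this:

```c
/* Hypothetical C-preprocessor analogue of the assembler configuration:
 * pick the features at build time and only the buffers actually needed
 * get reserved. All names and sizes are made up for illustration. */
#define MODE_HIRES     1           /* 64 x 128, one bit per pixel      */
/* #define MODE_TEXT   1 */        /* enable instead for a text mode   */
#define DOUBLE_BUFFER  1

#if defined(MODE_HIRES)
  #define FRAME_BYTES 1024         /* 64 * 128 / 8                     */
#elif defined(MODE_TEXT)
  #define FRAME_BYTES 256          /* e.g. 32 x 8 characters           */
#endif

#if defined(DOUBLE_BUFFER)
static unsigned char frame[2][FRAME_BYTES];    /* two pages to flip    */
#else
static unsigned char frame[1][FRAME_BYTES];
#endif

/* Clear the page that is about to be drawn into. */
static void clear_page(unsigned which)
{
    unsigned i;
    for (i = 0; i < FRAME_BYTES; i++)
        frame[which][i] = 0;
}
```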
But fear not: by slightly expanding the memory by a few megabytes and figuring out a way to switch memory pages without the processor noticing anything, I can keep lots of code in memory at once and do things that were far out of reach for a little 8-bit processor. The lessons I learned from the old computer: use your memory as well as you can, and there is no such thing as enough memory. I will always find a good use for a little more, even without being wasteful.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I've often developed full-featured devices that operate using maybe 100kB, even while loading MIDI files, shooting the resulting MIDI messages over USB, and processing incoming MIDI.
I am fastidious about deciding when a round of development on a product is finished. I don't allow for feature creep and endless development on even my personal projects, so finding a use for a little more is not something I do regularly.
Now, that having been said, new rounds of development on newer project versions are totally fair game, and in those situations *sometimes* I find a use for extra RAM. Just as often, I'm making it more efficient (often because a lot of my code is in reusable general-purpose libs, so efficiency improvements are par for the course even if a given project doesn't strictly require them).
Check out my IoT graphics library here:
https://honeythecodewitch/gfx
|
|
|
|
|
Talk about first world problems.
It's kind of easy to be virtuous as long as you have plenty. Having only the bare minimum may make you look a little stingy or greedy because you always have a good use for a little more.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I'm usually working with systems that have between 192kB and 512kB of RAM.
Check out my IoT graphics library here:
https://honeythecodewitch/gfx
|
|
|
|
|
Yes, times have changed. The old computer is from a time when even 16k was an expensive dream. Even an OS was a luxury. ROMs were just as tiny, and there was only so much you could do with that limited space. You couldn't have drivers or routines for everything and anything.
In a paged memory model you can pack your code into modules similar to DLLs. Each module gets its own memory page, as if it were the only thing running on the computer. Sound familiar? It's just giving an old processor the same royal treatment as a modern one, and suddenly the whole computer becomes much more modern than it has any right to be. It's all about teaching a very old dog some new tricks, and lack of memory is the most common argument against doing that.
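If it helps to picture it, here is a minimal sketch of such a module call in C. The bank-select port, the module table and the entry points are all made up; on the real machine this trampoline is a handful of assembler instructions living in memory that every page can see:

```c
#include <stdint.h>

/* Hypothetical bank-select port and module table. The address and the
 * structure are invented for illustration only. */
#define BANK_SELECT_PORT ((volatile uint8_t *)0xFFFFu)   /* made-up address */

typedef void (*entry_fn)(void);

struct module {
    uint8_t  page;      /* which memory page the module lives in */
    entry_fn entry;     /* its entry point within that page      */
};

/* Call a module that lives in another page, then restore the caller's page.
 * The trampoline itself must live in memory that is visible in every page. */
static void call_module(const struct module *m, uint8_t caller_page)
{
    *BANK_SELECT_PORT = m->page;        /* map the module in       */
    m->entry();                         /* run it                  */
    *BANK_SELECT_PORT = caller_page;    /* map the caller back in  */
}
```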
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
We used to call those overlays.
Check out my IoT graphics library here:
https://honeythecodewitch/gfx
|
|
|
|
|
Not if you have something like an MMU that keeps the processor blissfully unaware that it is actually roaming around in paged memory. You can call anything at any time without having to fear any complications.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Aha! Well that's nice. Almost like virtual memory.
Check out my IoT graphics library here:
https://honeythecodewitch/gfx
|
|
|
|
|
It is virtually virtual memory.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
Been there, done that. I implemented the Fuchs-Kedem-Naylor hidden surface removal algorithm(*) on a Z-80-based CP/M system. Displaying an image took a couple of overlay swaps, which required swapping 8" floppies.
(*) The same algorithm used in DOOM!
Software Zen: delete this;
|
|
|
|
|
CodeWraith wrote: "there is no such thing as enough memory"
Back in the late 1980s I worked on an embedded project using a Z-80. The last six months of the work I spent refactoring code and adjusting buffers in the last 256 bytes of RAM available.
Software Zen: delete this;
|
|
|
|
|
You forgot the ".com" in your signature: "https://honeythecodewitch.com/gfx/"
|
|
|
|
|
I noticed right after I made it but CP wouldn't let my attempted edit stick at the time.
I just tried updating it again. This is a test.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
|
|
|
|
|
Congratulations! Your test worked.
|
|
|
|
|
Many years ago a co-worker recounted his days as a game developer on, I no longer recall which, an Atari 400 or a Commode Door, oops, I mean Commodore 64 (my own machine at the time). He was explaining that his firm was the only one that knew how to draw a sprite across the raster line. I no longer recall what that means, though I do recall his explanation, which I will not reveal here unless requested, though I would be surprised if the solution is not obvious to the many skillful people here. At the end of the story I made one of my few jokes, which was well received, especially by myself: "When you can draw a sprite across the raster line, you will have learned, Grasshopper."
|
|
|
|
|
Sounds a lot like racing the beam. Just because you did not have to do that for everything anymore did not mean you could not use it to wring a few unusual effects out of your graphics hardware. Even then such things were already becoming arcane and secret knowledge.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
On the C64, as long as you kept sprites vertically separated, you could have 8 sprites on the first "stripe", then: interrupt, switch sprite bank, repeat.
You would only be able to detect collisions within the “stripes”.
My favorite trick was to point a sprite to page/address 0000.
It gave a nice monitor where you could see the pixels of the system time on the screen.
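For anyone who never did it, here is a rough sketch of the stripe trick in cc65-flavored C. The register addresses are the standard VIC-II ones, but the stripe height, the sprite image numbers and the interrupt wiring are made up and simplified:

```c
#include <stdint.h>

/* Standard VIC-II registers (C64). */
#define SPRITE_Y(n)   (*(volatile uint8_t *)(0xD001u + 2u * (n)))
#define RASTER        (*(volatile uint8_t *)0xD012u)
#define IRQ_FLAGS     (*(volatile uint8_t *)0xD019u)
#define SPRITE_PTRS   ((volatile uint8_t *)0x07F8u)  /* default screen at $0400 */

#define STRIPE_HEIGHT 50u   /* made-up stripe spacing in raster lines */

static uint8_t stripe = 0;

/* Raster interrupt handler: once the beam has passed the current stripe,
 * move all eight hardware sprites down to the next one and point them at
 * a new set of sprite images. Hooking this into the IRQ vector is left
 * out, and collisions are still only detected within each stripe. */
void raster_irq(void)
{
    uint8_t i;

    IRQ_FLAGS = 0x01;                            /* acknowledge the raster IRQ */
    stripe++;

    for (i = 0; i < 8; i++) {
        SPRITE_Y(i)    = (uint8_t)(stripe * STRIPE_HEIGHT);  /* next stripe    */
        SPRITE_PTRS[i] = (uint8_t)(0x80 + stripe * 8 + i);   /* made-up images */
    }

    RASTER = (uint8_t)((stripe + 1u) * STRIPE_HEIGHT - 1u);  /* next IRQ line  */
}
```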
|
|
|
|
|
Wordle 752 4/6
🟨🟨⬛⬛🟨
⬛🟨🟨⬛🟨
🟨🟩⬛🟨🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 752 5/6
🟨⬜⬜🟨⬜
⬜⬜🟨🟨🟨
🟨🟨🟨⬜⬜
🟩🟩🟨⬜🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 752 3/6*
⬜🟨🟨🟨⬜
🟨🟩⬜🟨🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 752 3/6
🟨⬜⬜⬜🟨
⬜⬜🟩🟩🟩
🟩🟩🟩🟩🟩
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Wordle 752 5/6
🟨⬜⬜⬜⬜
⬜🟨🟨⬜🟨
🟨🟩⬜🟨⬜
🟩🟩🟩⬜⬜
🟩🟩🟩🟩🟩
|
|
|
|