|
You can consider it a very simple OS, lacking a lot of the functions you expect today. Some primitive drivers are part of this "OS". These are mainly for getting the 'real' OS loaded into memory and giving it control - often referred to as 'bootstrapping'. In principle, you could have the BIOS load your program rather than the real OS, but most likely, your program would request services that the BIOS doesn't provide.
In the old days (DOS), applications relied on the drivers that are part of the BIOS to handle the few devices that the BIOS has drivers for, such as the keyboard and character-based output to a screen or printer. Executing driver code out of the BIOS ROM was slow, so PCs started to copy the BIOS code to RAM at startup to run it faster. Often, the BIOS code was limited and couldn't utilize all the functions of the peripheral, so OSes started providing their own code to replace the BIOS drivers.
Today, the OS has its own drivers for 'everything', so those provided by the BIOS are used only for the bootstrapping process. Even though execution out of the BIOS ROM is slow, those drivers are used for such a brief time that it doesn't really matter much. I doubt that any modern motherboard would care to copy the BIOS drivers from ROM to RAM for speedup, the way they did in the old days. Note that in the old days, those drivers were used all the time; the OS didn't have a better replacement driver. So then it made much more sense to copy to RAM than it does today.
When you boot up, the OS isn't there yet, so you need something to read the disk, floppy, USB stick or whatever medium you keep your OS on. If your OS is on a medium for which your BIOS doesn't have a driver (say, a tape cassette), you may be lost - unless your BIOS has a driver for, say, a floppy drive, and you can load the tape drive driver from the floppy, and use that driver (loaded to RAM) to load the real OS from the tape. (This is not very common, though.) We had USB for years before we got BIOSes with drivers for the USB interface. During those years, you could not boot your OS from a USB stick the way you can today. Even before that, we had the same issue with CD/DVD drives: The BIOS didn't have CD drivers, so the CD/DVD drive was useless until the OS had been loaded with its CD drivers.
The mainboard battery: Flash is a more recent invention than the PC. In the old days, the data area used by the BIOS, holding e.g. the order in which to try the various boot devices, was held in what is called CMOS, an extremely low-power, but not very fast, memory technology. Functionally, it was used the way flash is used today, but even if it drew almost no current, it depended on a certain voltage to keep its state intact. (The C in CMOS is for 'Complementary', indicating two transistors blocking each other, neither of them carrying any current to speak of. But if one of them lets go of its blocking, the house of cards falls down.) I would think that recent motherboards have replaced CMOS with flash, so they will not lose information when the battery is replaced.
The battery has a second function: The motherboard has a digital clock, running even when the power is turned off, the mains cable is unplugged, and (for a portable) the main battery is empty. This cannot be replaced by any battery-less function. If you have to replace the mainboard battery, expect the clock to be reset to zero. Even if the BIOS makes use of flash for storing setup parameters, the battery is needed for the clock.
Sure, the BIOS uses the CPU. Or, I'd rather say it the other way around: The CPU uses the BIOS as its first program to run at startup. All CPUs, from the dawn of computers, fetch their first instruction from a fixed address (00000...000 is a viable candidate, but some CPUs differ). That is where you put the BIOS. The BIOS contains the first instructions executed by the CPU. You could say that it is much like any other program. In principle, it could be written in any language, but its tasks are so close to the physical hardware that it very often is written in assembly - at least the initial part of it, setting up the registers, the interrupt system and memory management. When that is done, it may call routines written e.g. in C for things like handling the user dialog to set up the boot sequence, reporting the speed of the fans and all the other stuff that modern BIOSes do today. (Mainboards of today call their initialization code UEFI rather than BIOS, but the primary functions are the same.)
A computer doesn't have to have a BIOS. One of the first machines I programmed did not. When powered on, the PC (program counter) register was set to 0 and the CPU halted. The front panel had 16 switches; the instructions were 16 bits wide. So I flipped the switches to the value of the first instruction and pressed 'Deposit'. This stored the switch positions at address 0 and advanced the PC register to 1. I flipped the switches for the next instruction; Deposit stored it at address 1 and advanced to address 2. The mini-driver for the paper tape reader was 15-20 instructions long. Consider that my "BIOS"! After flipping and depositing it, I placed a paper tape in the reader, containing the disk driver. Then I pressed the 'Reset' button; the PC register was reset to 0 and the CPU taken out of halt. The CPU ran the paper tape driver, which read in the tape and, at the end of the tape, ran right into the code just loaded: the disk driver, which loaded the OS bootstrap code, which in turn loaded the rest of the OS.
Also, a computer doesn't have to have a built-in clock running while the power is off, so it needs no battery for that purpose. Most computers today have one, but until the advent of PCs, most did not; you had to set the clock after power-on. E.g. after a fatal crash, the operator would have to restart and then set the time explicitly from his own watch.
There is a story about that from the University of Copenhagen - it must have been in the early 1970s: After a crash, the operator set the time and date, but didn't notice that he had typed the year wrong, ten years into the future. This wasn't noticed until after they had run the maintenance program deleting all files that hadn't been referenced for three months. (I guess that is when they noticed it!)
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Thank you for taking the time to reply, tronderen, that’s an interesting post.
|
|
|
|
|
Quote: In the old days (DOS), applications relied on the drivers that are part of the BIOS
How did that function? I don't understand much about hardware or driver programming and I'm looking to broaden my horizons. Without a driver the CPU doesn't know the 'love language' of the equipment that sits in a slot. But only the equipment producer knows how to address the piece of hardware it has produced. Is there a universal language that works for all video cards, sound boards etc.?
Back in the old days (and even today, I think; I'm not sure, I've only had laptops in recent years) a slot like PCI accepted hardware from different categories. How did that work?
modified 29-Sep-24 15:28pm.
|
|
|
|
|
Back in the DOS days, video cards sat at a fixed, well-known address. The BIOS didn't need any special drivers; it just wrote directly to the addresses the video RAM sat at.
Back then, the bus and cards couldn't negotiate addresses, ports, DMA channels, and IRQs automatically. You had to manage the separation of the hardware manually, then tell the drivers where the hardware sat in memory and/or which ports, DMA channels, and IRQs it was configured to listen on.
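To make that concrete: a minimal sketch of writing straight to text-mode video RAM, assuming a 16-bit real-mode DOS compiler such as Turbo C (far pointers and the MK_FP macro from dos.h are specific to such compilers); the color text buffer conventionally sits at segment 0xB800:

#include <dos.h>

int main(void)
{
    /* 0xB800:0000 is the color text-mode frame buffer. */
    unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0);
    video[0] = 'A';   /* character cell at row 0, column 0 */
    video[1] = 0x1F;  /* attribute byte: white on blue */
    return 0;
}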
|
|
|
|
|
Calin Negru wrote: How did that function? The BIOS is really nothing but a function library. In the DOS days, you could in principle call, say, the driver for outputting a character on the serial line directly, by its BIOS address. Well, not quite - the return mechanism wouldn't match your call, but we are close. Rather than a direct function call, you used the interrupt system to call the driver function.
You may think of physical interrupts, coming in on the INT or NMI (Non-Maskable Interrupt) line, as a hardware mechanism for calling a driver function when something, like an input, arrives from a device. Hardware will put the input value into a temporary register in the interface electronics (not a CPU register), and the driver function will move the value from that register into memory. Each interrupt source (device), or group of devices, provides an ID to the interrupt system so that a different function is called for each device (group), each knowing that specific device type and how to transfer the value from the interface electronics to the CPU. The interrupt system has a function address table with as many entries as there are possible interrupt IDs, so the ID is used to index the table. This table is commonly called an 'interrupt vector'.
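Conceptually, the interrupt vector is nothing more than an array of function pointers indexed by the interrupt ID. A sketch of the idea in C (names and table size are illustrative; real hardware keeps this table at a fixed memory location and dispatches through it in hardware):

typedef void (*isr_t)(void);  /* an interrupt handler takes and returns nothing */

#define NUM_VECTORS 256
static isr_t interrupt_vector[NUM_VECTORS];

/* The BIOS/OS installs a driver's handler for a given device ID... */
void install_handler(unsigned id, isr_t handler)
{
    interrupt_vector[id] = handler;
}

/* ...and the hardware dispatch amounts to the equivalent of: */
void dispatch(unsigned id)
{
    interrupt_vector[id]();  /* call the driver function for that device */
}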
All computers have at least one extra location in the interrupt vector that is not selected by a hardware device ID; your software can use a specific instruction to make a function call to the BIOS, OS, Supervisor, Monitor, ... whatever you call it. Intel calls it an INT(errupt) instruction; on other machines it may be called an SVC (SuperVisor Call), MON(itor) call, or similar. On some machines, the instruction may indicate the interrupt number (i.e. the index to be used in the vector), so that different service functions have different interrupt numbers. Others have a single 'software interrupt' number and vector entry, with a single handler that reads the desired service number from a register. Many machines started out giving each service a separate ID, but the number of services outgrew the interrupt vector, so they had to switch to the second method. DOS is a mix: A number of services have their own interrupt ID, but the majority of DOS service functions use INT 21, with a service selector in the AH register. (Other multipurpose software interrupts, handled by the BIOS, are INT 10 for video functions, INT 13 for low-level disk functions, INT 16 for keyboard functions and INT 17 for printer functions.)
The primary function of a software interrupt call is that of an ordinary function call. But there is something extra: Privileged instructions are enabled, memory management registers are updated to bring OS code into the address space, etc. This you cannot do with an ordinary function call. So an interrupt function does not end with a plain return, but with a specific 'return from interrupt' instruction that restores non-privileged mode, MMS registers etc. to 'user mode'.
DOS didn't have anything as fancy as 'privileged instructions' and an MMS. So the main purpose of software interrupts was to make the application independent of, say, the location of the serial line handler. Regardless of BIOS version or vendor, to call the driver function for outputting a character on the console, you executed an INT 21 instruction with 2 in the AH register and the character code in the DL register. You may consider the BIOS specification similar to a high level language interface specification: It provided detailed parameter and functional information, and the BIOS vendor provided the implementation of this interface.
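In C, that call can be written down concretely; a minimal sketch, assuming a 16-bit DOS compiler such as Turbo C, whose dos.h provides union REGS and the int86() helper:

#include <dos.h>

int main(void)
{
    union REGS r;
    r.h.ah = 0x02;        /* service selector: output character */
    r.h.dl = '!';         /* character code to print */
    int86(0x21, &r, &r);  /* execute software interrupt 21h */
    return 0;
}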
Back in those days, interrupts were fast! I worked on machines designed in the mid 1970s: The first instruction of the handler routine was executing 900 ns (0.9 microseconds) after the signal occurred on the interrupt line. (For the 1970s, that was quite impressive!) Later, memory protection systems became magnitudes more complex, and getting all the flags and registers set up for an OS service has become a lot more expensive. Processors have long pipelines, and you have to (or at least should) empty them before going on to a service call. Software interrupts of today take a lot more time in terms of simple instruction execution times, compared to 50 years ago. When the 386 arrived with a really fancy call mechanism, all sorts of protections enforced, MS refused to use it in Windows - it was too slow. (They rather requested a speedup of the Illegal Instruction interrupt handling, the fastest way they had discovered to enter privileged mode.) That is why Win32 programs never had access to a 4 GiB address space: With the 386 call mechanism, user code and OS could have separate 4 GiB spaces, but MS decided that '2 GiB should be enough for everybody', so that they could use a faster interrupt mechanism that made no MMS updates.
Calin Negru wrote: But only the equipment producer knows how to address the piece of hardware it has produced. Is there a universal language that works for all video cards, sound boards etc.? There are lots of hardware standards for each class of hardware. The video card makers, or USB makers, or disk makers, sit down to agree on a common way to interface to the PC: They will all use this and that set of physical lines, signal levels, control codes etc. Then the driver on the PC side may be able to handle, say, all sorts of VGA terminals, or every video card vendor's card on the PC bus, because they all use the same interface.
Over the years, such industry standards have grown from specifying the plug and voltages, and little else, to increasingly higher levels. USB and Bluetooth are prime examples: Very general 'abstract devices', such as a mass storage device, are defined by the interface, and the manufacturer on the device side must make his device appear as that abstract device, no matter its physical properties.
Furthermore: In the old days, we often had, for a few years, a multitude of alternatives with highly device specific drivers before the vendors got together to clean up the mess. Nowadays, new technology (such as USB3 or Bluetooth 5.0) tends to come with standards for use from the very beginning. Today's standards tend to be far more forward-looking than the old ones: E.g. they have open-ended sets of function codes, and they exchange lots of configuration values for bitrates, resolutions, voltages, ... so that the standard can live long and prosper. If the other party cannot handle a recent extension, such as a higher resolution, it reports so, and that extension isn't used on the connection.
Almost all general peripherals of today present themselves as one of those abstract devices defined for the physical interface. You still need a driver for each of those, but there aren't that many different ones. For special purpose equipment you still may have to provide a special driver, because it provides functions not covered by any of the standard abstract devices. If it uses a standard physical device, say USB, it hopefully uses that in a standard way so that you can use a standard USB driver and only have to write the upper levels of the driver yourself.
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Another excellent response. Do you think it is worth consolidating all this into an article?
|
|
|
|
|
Hello,
I use an Arduino Uno to read the voltage change across a thermistor's terminals.
To read the temperature, I would use the Steinhart-Hart equation:
1/T = A + B·ln(R) + C·(ln(R))^3 to convert voltage to temperature. I can write this equation using C++ via the Arduino IDE, then I'll get the temperature.
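In code it is only a few lines; something like this sketch (A, B and C being the fitted coefficients for the particular thermistor, and R the resistance computed from the measured voltage):

#include <math.h>

/* Steinhart-Hart: 1/T = A + B*ln(R) + C*(ln(R))^3, T in kelvin. */
double thermistor_temp_kelvin(double r, double a, double b, double c)
{
    double lnr = log(r);  /* log() is the natural logarithm */
    return 1.0 / (a + b * lnr + c * lnr * lnr * lnr);
}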
My question is: how do I do it without using the Arduino? I mean using only electronic components - what circuit design can give me a ln or a cubic power?
Thank you
|
|
|
|
|
The short answer is to start with Log amplifier - Wikipedia[^]. You can assemble a bunch of them to do the trick, but that is really doing things the hard way. For limited temperature spans, there are simpler approximations for linearizing to a reasonable accuracy. Feed any search engine with "linearize thermistor".
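For comparison, one of those simpler approximations is the B-parameter (beta) equation, which drops the cubic term and is often accurate enough over a limited span; a sketch (R0 is the resistance at the reference temperature T0, e.g. 298.15 K for 25 °C, and beta is the thermistor's B value from its datasheet):

#include <math.h>

/* B-parameter approximation: 1/T = 1/T0 + (1/B) * ln(R/R0), T in kelvin. */
double beta_temp_kelvin(double r, double r0, double t0, double beta)
{
    return 1.0 / (1.0 / t0 + log(r / r0) / beta);
}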
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Thank you, your answer is helpful
|
|
|
|
|
I’m trying to get a better understanding of how RAM memory works. I brought up this question before. This time I’m trying to find out a little bit more.
There is no optical fiber on the motherboard, hence in the scenario where you want to place 32 bits in memory, if you want to send them all at once, you need 32 copper lines connecting the CPU socket to the memory slots. What happens if you want to send more information? Let's say you want to send four 32 bit integers. Before sending the actual data, does the CPU (the operating system) use the same 32 lines to instruct the memory slots where the integers about to be sent should be placed?
How does the memory know the address range in which it should place the four integers?
|
|
|
|
|
CPU sockets of today have a tremendous number of 'pins' (they aren't really pins nowadays, but the name sticks), typically 1200-1500. Usually, far more than 32 of these carry data to/from RAM; more typical is 128 or 256, the length of a cache line. If you want to modify anything smaller (such as a single byte or a 32 bit word), the CPU must fetch the entire cache line from RAM, modify the part of it that you want to change, and write the entire cache line back to memory.
The CPU uses another set of pins to indicate the RAM address to be read from or written to. Since the arrival of 32 bit CPUs, the CPU has rarely been built to handle as much RAM as the logical address space; the 386SX, for instance, did not have 32 address lines, so you could not build a 386SX PC with 4 GB of memory. Nor do today's 64 bit CPUs have 64 address lines. The memory management system will map ("condense", if you like) the used memory pages, spread over the entire 64 bit address space - even multiple 64 bit spaces, one for each process - down to the number of address pins required to cover the amount of physical RAM that you have got.
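That "condensing" is easiest to see as arithmetic on the address bits; a conceptual sketch (illustrative names, 4 KiB pages and a single-level table, where real MMUs use multi-level tables in hardware):

#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KiB pages */
#define PAGE_MASK  ((1ULL << PAGE_SHIFT) - 1)

/* page_table[] maps virtual page numbers to physical frame numbers. */
uint64_t translate(uint64_t vaddr, const uint64_t page_table[])
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;  /* virtual page number */
    uint64_t offset = vaddr & PAGE_MASK;    /* byte within the page */
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}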
Note that when transfers between RAM and the CPU cache (inside the CPU chip) go in units of an entire cache line of, say, 128 bits or 16 bytes, there is no need to tell the RAM which of the 16 bytes are actually used - they are all transferred anyway. So there is no need to spend address lines on the lowermost 4 bits of the byte address. The number of external pins is a significant cost factor in the production of a chip, so saving 4 pins gives an economic advantage.
In the old days, pins were even costlier, and you could see CPUs that first sent the memory address out on a set of pins during the first clock cycle. The memory circuits latched this address for use in the next clock cycle, when the CPU transferred the data value to be written on the same pins as those used for the address. Or for reading: In the first cycle, the CPU presents the address it wants to read; in the next cycle, the RAM returns the data on the combined address/data lines. There were even designs where the address was too long to be transferred in a single piece: In cycle 1, the high address was transferred, in cycle 2 the low address, and in cycle 3, data was transferred. (And in those days, you fetched/wrote a single byte at a time, and a cache was rarely seen.)
This obviously put a cap on the machine speed, when you could retrieve/save another data byte no faster than one every two or three clock cycles. To win the speed race, general processors today have separate, wide address and data buses. I guess that you still can see multiplexed address/data buses in embedded processors (ask Honey about that!).
Your scenario with four 32 bit words to be saved: If they are placed at consecutive logical addresses, as if they were a 16 byte array, they might happen to fit into the same cache line. When the cache logic determines that it is necessary to write it back to RAM, one address is set up on the address lines, and a single transfer is made on the data lines. If the 16 bytes are not aligned with the cache line boundaries, but span two cache lines, each of the two parts is written to memory at a different time, in two distinct operations. If the four words are located at distinct, non-contiguous virtual addresses, they are written back to RAM in 4 distinct operations: 4 addresses on the address bus, each with different cache line contents on the data bus. Note that the entire cache line is written in each of the write operations, and could include updates to other values in the same lines that hadn't yet made it to RAM.
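The aligned-versus-spanning distinction is plain address arithmetic; a little sketch (assuming an illustrative 64-byte cache line; actual sizes vary by CPU):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64u  /* assumed line size in bytes */

/* How many cache lines does a buffer of 'len' bytes at 'addr' touch? */
static unsigned lines_touched(uintptr_t addr, size_t len)
{
    return (unsigned)((addr + len - 1) / CACHE_LINE - addr / CACHE_LINE + 1);
}

int main(void)
{
    printf("%u\n", lines_touched(0x1000, 16));  /* aligned: 1 line */
    printf("%u\n", lines_touched(0x1038, 16));  /* straddles a boundary: 2 */
    return 0;
}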
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
I get the picture, thank you
|
|
|
|
|
I have been graciously given a PCB with LEDs to monitor SOME serial data. It has LEDs for DRX and DTX.
My current serial data code only sends, and I have no connection to any "remote serial device", but I can see both DRX and DTX flashing. Good.
BUT
why is DRX flashing?
Is it because my "serial data communication" is set for "local loopback"?
How do I verify my "modem settings", AKA "AT" commands?
Thanks
|
|
|
|
|
jana_hus wrote: How do I verify my "modem settings", AKA "AT" commands?
A specific question gets you a list of sites that can help:
modem at commands - Google Search[^].
modified 10-Sep-24 16:14pm.
|
|
|
|
|
FROM https://e-junction.co.jp/share/Cat-1_AT_Commands_Manual_rev1.1.pdf:
Quote: 2.29. Controls the setting of eDRX parameters +CEDRXS
Syntax
Command: +CEDRXS=[<mode>[,<act-type>[,<requested_edrx_value>]]]
Possible response(s): +CME ERROR: <err>
Command: +CEDRXS?
Possible response(s): [+CEDRXS: <act-type>,<requested_edrx_value>[<CR><LF>+CEDRXS: <act-type>,<requested_edrx_value>[...]]]
Command: +CEDRXS=?
Possible response(s): +CEDRXS: (list of supported <mode>s),(list of supported <act-type>s),(list of supported <requested_edrx_value>s)
Description
The set command controls the setting of the UE's eDRX parameters. The command controls whether
|
|
|
|
|
thanks for the reply.
I probably did not formulate my question correctly. I was not asking about AT commands.
I was trying to verify whether some serial port parameters are being used to put the USB port itself into "loopback mode". Actually, I am not sure if a Linux serial port can use AT commands at all.
I guess I need to look at whether Linux has a "default modem" of any kind.
|
|
|
|
|
Hi Jana,
Can you try these:
DRX Flashing (Local Loopback)
DRX flashing could be due to local loopback. Check and disable it with:
stty -F /dev/ttyUSB0 -echo
Use minicom to send AT commands:
sudo apt-get install minicom
Open your serial port:
sudo minicom -D /dev/ttyUSB0
Type AT to check response (OK if working).
I hope this will work for you and resolve your issue.
|
|
|
|
|
So, over the last 20+ years, I think I'm on my 4th laptop. I buy them for development, and I specifically look for expandability and reliability. I rarely toss them (I'm working with my therapist on this). I used to buy Dell, but they got squirrely with their consumer brands (the XPS 1530 was excellent except for the motherboard graphics chipset). The last Dell I bought was a Precision M4700 and it's a beast. I've moved on to Eluktronics - sort of a custom maker, but damn good hardware. Just watch out when they solder RAM to the motherboard, but that's on me. Anyway, back to the 4700...
So, not the thinnest or lightest, but excellent display, great keyboard, add SATA SSDs and it will just go. But I had this quirky issue where it would blue screen at weird, random times. This had been happening for over 5 years. Cleaning it up, it needed a new battery - $40 later, done. No more complaints about charging, but boom, blue screen. On reboot, it kept fussing about losing its BIOS settings. Now, when any computer says this, the battery backup on the motherboard is dead. I had never thought of this. These are $2 from your grocery store, $5 if dealing with Dell's stupid stuff.
The laptop has been sitting there running for 2 weeks, all issues resolved.
When in doubt, go for the simple solution.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
Okay, I would like ideas from people that are smarter than I am about my next great USB plan.
THE PROBLEM...
I find that I keep on having these mysterious behaviors, which cost me over an hour to locate, and it always turns out that the USB hub has just plain and simply worn out physically.
MY NEXT SCHEME...
- Build my own computer
- Install a specific USB expansion controller card adapter (the more ports, the better)
- When I buy that card adapter, buy two or three (identical) replacements for the future, when its jacks wear out
- Use the same cheap hubs that wear out after a year or two, and plug them into a specific jack on the controller card
Put this all together, and I'm wondering if this will provide me a workable solution until the time that USB goes away and is replaced by the next Disco Baboon Technology of the future.
|
|
|
|
|
C-P-User-3 wrote: I keep on having these mysterious behaviors, which cost me over an hour to locate, and it always turns out that the USB hub has just plain and simply worn out physically.
That description suggests this is not a hardware problem. Hubs should not really be breaking a lot.
Perhaps it is a usage problem.
Perhaps you throw your hubs, literally, into the back of a van when you go somewhere.
Maybe you are using the cable itself to pull out plugs. And at odd angles.
Maybe you are custom wiring something and the final product is a little rough around the edges.
At any rate for situations like that you should look at how you are treating the hub.
Alternatively you are running a business and you deliver systems and support them. And the hubs 'seem' to keep breaking.
Several possibilities there.
You have an employee that doesn't know what they are doing.
You should probably spend a bit more on the hubs that you buy. Cheap generally means cheaper parts and construction.
Your customers are messing with something they probably shouldn't (see the prior list for possibilities).
One general solution if you do have something that you unplug and then plug back in a lot is to use a short extender cable. Plug that into the hub and leave it there. Then the other device is only plugged into the extender. If anything breaks then it is the extender.
And depending on the situation, for the prior there are USB switches: Leave everything plugged in, but use the switch to go back and forth.
|
|
|
|
|
As jschell replied, we need more details, especially the type of failures.
Over the past 20+ years, I have worked with USB hubs and have had 0 failures. I lost a couple, but that's a different issue. On my banker's/lawyer's desk, I have two USB hubs that have been double-stick taped there for at least 10 years. One is USB 2.0, because for a long time I had to support an XP development environment. The other is USB 3.0, because I support a Windows 10/11 environment that uses newer USB hardware. The only problem I have found is dealing with USB adapters - serial devices, ethernet - and I mix them up.
Now I build my own machines. The BS from OEMs and the shortcuts they take, I just don't do that anymore. Would I install something in my desktop? Based on my experience, no. I'd daisy chain to an external hub.
The one thing that I have found that drives me, as a developer, near insane is the stupidity of Microsoft. It's starting to creep into Unix, but we shall see. Microsoft decided to help save power, so there are default settings that turn off your USB devices. OS update? Let's turn it off. Wait, the user explicitly said not to do that - meh, f' the user, climate change. And there goes my 6 month soak test.
I have 15 years of h/w - laptops - around me. Almost all of my cycles (insert/remove/insert) are on the laptops. No failures. This leads me to suspect that something else is going on.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
modified 27-Aug-24 8:28am.
|
|
|
|
|
Where can I get and download CodeProject version 2.0.8 for my Windows 7 computer with Blue Iris? I've read that this version works well on Win 7. I want to try this version because the newer versions do not install properly.
|
|
|
|
|
|
Greetings Kind Regards
I stupidly attempted to charge a handheld vacuum cleaner via one of the USB ports on my PC. Since then, the screen has been blanking out momentarily, periodically, i.e. approximately every 2-3 minutes. Did I destroy this machine?
Thank You Kindly
|
|
|
|
|
I would doubt that is the cause of the problem. USB provides limited power; the device can't draw anything more than that. But if it did, then it would more likely be a problem with the computer (poor design), and now the computer has a problem.
You didn't mention how old the computer/monitor is. They do fail. I have a failed monitor sitting on the floor next to my desk. That specific brand fries capacitors every couple of years, so I just need to break out the soldering iron, open it up and replace them.
|
|
|
|
|